[Analysis] SEBI’s AI/ML Framework – Ensuring Fair and Safe Use in Stock Markets
By Taxmann | Last Updated on 26 June, 2025
Responsible AI Guidelines for a Safer & Smarter Stock Market refer to a regulatory framework designed to ensure that Artificial Intelligence (AI) and Machine Learning (ML) technologies are used in India’s securities markets in a fair, transparent, accountable, and secure manner. These guidelines, proposed by SEBI, aim to strike a balance between technological innovation and investor protection. They set standards for model governance, data privacy, bias mitigation, ethical use, and investor disclosures—ensuring that AI-driven systems enhance market efficiency without compromising trust, safety, or fairness.
Table of Contents
- Introduction
- SEBI’s Rationale Behind Introducing AI Guidelines
- Quick Recap – What is AI and ML?
- What Do India and the World Say About AI/ML Ethics?
- How Indian Market Participants Use AI/ML
- What Does SEBI Recommend? – 5 Big Themes
- Conclusion
1. Introduction
Imagine your trading app not only tracks the market but also sends you a stock recommendation, or your investment portfolio is automatically rebalanced to match your goals, without any human intervention. These are no longer futuristic scenarios. Artificial Intelligence (AI) and Machine Learning (ML) technologies are already being used in India’s stock markets to make investing faster and smarter.
However, such automation comes with risks. What if it reads the market wrong and gives poor advice? Or what if it mistakenly blocks your access to your own money? These challenges call for a robust regulatory framework.
To address this, the Securities and Exchange Board of India (SEBI) released a consultation paper on June 20, 2025, seeking public comments on a proposed framework for the responsible use of AI/ML in the securities market. The objective is to ensure these technologies are used in a manner that is fair, transparent, accountable, and in the best interest of investors. The deadline to submit feedback is July 11, 2025.
The proposal balances innovation with investor protection—ensuring that as technology plays a larger role in shaping your financial future, it does so responsibly and under clear regulatory safeguards.
2. SEBI’s Rationale Behind Introducing AI Guidelines
AI is no longer just a buzzword. Today, stock exchanges use it to catch market manipulation in real-time. Brokers use it to tailor services and detect suspicious trades. Mutual funds use it to answer your questions through chatbots. But imagine if an AI model incorrectly flags a legitimate transaction as fraud or recommends poor stock advice due to bad data—that’s a problem.
SEBI has already asked regulated entities to disclose when they use AI. Now, it wants to go a step further by laying down a framework that promotes safe, fair, and transparent use. The aim is to ensure that, as technology helps shape your financial future, it does so with proper checks in place to protect the interests of investors.
3. Quick Recap – What is AI and ML?
Artificial Intelligence (AI), a term coined by computer scientist John McCarthy, refers to the science and engineering of making intelligent machines. It involves developing systems that can mimic human intelligence to solve problems.
Further, according to India’s National Strategy for AI (2018), AI is
“a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act.”
Machine Learning (ML) is a subset of AI that enables computers to learn from data without being explicitly programmed. ML models identify patterns in large datasets and use them to make predictions or decisions, improving performance as more data becomes available.
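The idea that a model "learns patterns from data without being explicitly programmed" can be illustrated with a toy example. The sketch below (purely illustrative; the dataset, learning rate, and epoch count are assumptions, not anything from SEBI's paper) fits the slope of a line by gradient descent: no rule mapping inputs to outputs is coded in, yet the model discovers the pattern from the data alone.

```python
# A minimal sketch of "learning from data": fitting a line y = w * x
# with gradient descent, using only the Python standard library.

def fit_slope(xs, ys, lr=0.01, epochs=500):
    """Learn the slope w that best maps xs -> ys (no explicit rule coded in)."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# The "pattern" in this toy dataset is y = 2x; the model discovers it.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = fit_slope(xs, ys)
print(round(w, 2))  # learned slope, approximately 2.0
```

The same principle scales up: production ML systems in securities markets learn far richer patterns (fraud signatures, sentiment signals) from far larger datasets, and their performance improves as more data becomes available.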
4. What Do India and the World Say About AI/ML Ethics?
The working group studied the guidelines on the use of AI/ML adopted by various domestic and international organisations.
4.1 Niti Aayog Guidelines on Use of AI/ML in India (Domestic)
In India, NITI Aayog released an approach document, "Principles for Responsible AI", in February 2021. It identified seven core principles for the responsible management of AI:
- Principle of Safety and Reliability
- Principle of Equality
- Principle of Inclusivity and Non-discrimination
- Principle of Privacy and Security
- Principle of Transparency
- Principle of Accountability
- Principle of Protection and Reinforcement of Positive Human Values
In August 2021, NITI Aayog released a follow-up document outlining how to effectively implement the seven principles of responsible AI.
4.2 IOSCO Guidelines on Use of AI/ML (Global)
The International Organisation of Securities Commissions (IOSCO) published a consultation paper in June 2020 after conducting surveys and discussions with market intermediaries to identify how AI and ML are being used and the associated risks. The following areas of potential risk and harm arising from the use of AI and ML were identified:
- Governance and oversight
- Algorithm development, testing and ongoing monitoring
- Data quality and bias
- Transparency and explainability
- Outsourcing
- Ethical concerns
5. How Indian Market Participants Use AI/ML
Indian securities market participants are increasingly integrating AI/ML into their operations. This growing adoption reflects a significant shift toward tech-driven decision-making across the market.
5.1 AI/ML Applications in Stock Exchanges
Indian stock exchanges use AI/ML for real-time market surveillance, cybersecurity, chatbot-based member support, and automating data-entry tasks for member compliance. These technologies also help analyse large datasets, including social media sentiment, to better understand market trends and investor behaviour.
5.2 AI/ML Use by Brokers
Brokers are increasingly adopting AI/ML technologies for tasks such as KYC and document verification, personalised product recommendations, chatbot-based customer support, digital account opening, trade surveillance, anti-money laundering checks, order execution, and predicting customer preferences.
5.3 AI/ML in Mutual Fund Operations
Mutual funds are utilising AI/ML tools to enhance customer support through chatbots, strengthen cybersecurity, improve surveillance mechanisms, and segment customers for better-targeted services.
6. What Does SEBI Recommend? – 5 Big Themes
To ensure the responsible and ethical use of AI/ML in India’s securities markets, SEBI’s dedicated Working Group has put forward key recommendations focused on strong safeguards, ongoing monitoring, and active human oversight throughout the lifecycle of AI/ML systems. These efforts are aimed at protecting both investors and market integrity as technology evolves and reshapes financial operations.
The Working Group’s report, submitted to SEBI’s Committee on Financial and Regulatory Technologies (CFRT), proposes to introduce a regulatory framework specifically for AI/ML applications that go beyond internal business operations and directly impact clients or investors.
Now, through this consultation paper, SEBI is inviting public attention and feedback on a proposed framework carefully shaped by expert analysis, stakeholder input, and internal deliberations. The guidelines focus on five core principles for responsible AI/ML use:
- Model Governance
- Investor Protection and Disclosure
- Testing Framework
- Fairness and Bias
- Data Privacy and Cybersecurity
6.1 Model Governance
SEBI’s working group recommended a comprehensive framework to ensure responsible AI/ML use in securities markets. This includes skilled oversight teams, risk controls, ethical practices, robust data governance, third-party accountability, and continuous monitoring—all aimed at maintaining transparency, fairness, and regulatory compliance.
6.1.1 Establishing a Competent AI Oversight Team
It is recommended that all market participants set up a dedicated internal team with the technical expertise needed to oversee AI/ML models. This team must actively monitor performance, implement necessary controls, and ensure security across the model’s entire lifecycle. It is also recommended to maintain clear documentation covering model development, validation, versioning, and troubleshooting.
6.1.2 Implementing Strong Risk Management Protocols
Market participants are expected to establish robust governance and risk control frameworks to safeguard against potential risks, particularly during volatile market conditions. The robustness of AI/ML systems can be reinforced by careful training, and retraining, of ML models.
6.1.3 Define Clear Error Handling and Business Continuity Plans
Market participants must have clearly defined procedures for identifying and responding to AI/ML failures. SEBI has recommended setting up fallback mechanisms and backup plans that ensure uninterrupted services and operational continuity in case of system errors or disruptions.
6.1.4 Leadership Accountability in AI Oversight
A key recommendation from SEBI is to designate a senior-level officer, preferably with relevant technical expertise, who will be accountable for the entire AI/ML lifecycle. This includes overseeing model development, testing, deployment, and ongoing monitoring, ensuring top-level commitment to ethical and responsible AI practices.
6.1.5 Managing Third Party AI/ML Vendors
Many firms outsource AI capabilities, which can be efficient but risky. Such market participants must manage relationships with AI/ML service providers through clear contracts defining roles, responsibilities, and performance standards. Despite outsourcing, participants remain fully responsible for regulatory compliance.
6.1.6 Monitor AI Models Continuously and Share Performance Data
Continuous review and monitoring are required because AI/ML models can evolve by learning from live data. Participants must periodically share model accuracy reports with SEBI.
6.1.7 Establish Robust Data Governance Frameworks
AI models aren’t static; they learn and adapt constantly. Without ongoing monitoring, errors can creep in unnoticed. Market participants must clearly define data governance norms, including access controls and encryption, and every request for data unmasking must be established and recorded.
6.1.8 Independent Audits for Fairness and Transparency
Independent audits bring an objective lens to AI fairness, transparency, and reliability. These audits should be submitted to regulators, facilitating proactive oversight. This external validation is a powerful way to build confidence among investors and stakeholders.
6.1.9 Ensuring User Autonomy and Cultural Sensitivity
AI should assist, not replace, human decision-making. Systems must allow users to maintain control and accommodate diverse cultural values, especially in a country as varied as India. Ignoring these factors risks alienating users and undermining trust.
6.1.10 Upholding Ethical AI Practices
Market participants must ensure AI/ML applications follow ethical guidelines and responsible practices.
6.1.11 Secure and Detailed Logging of AI Activity
In today’s fast-paced markets, where AI-powered systems can execute thousands of trades per second, maintaining detailed and secure logs is no longer optional; it’s essential. Market participants must maintain detailed and secure logs of their AI/ML systems so that all events can be traced in the correct order if needed.
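One common way to make event logs both ordered and tamper-evident is hash chaining, where each entry records a hash of the previous one, so any reordering or alteration breaks the chain. The sketch below is a minimal illustration of that idea, not a format prescribed by SEBI; the class and field names are assumptions.

```python
# Hedged sketch: a tamper-evident, ordered audit log. Each entry carries a
# sequence number, a timestamp, and a SHA-256 hash chaining it to the
# previous entry.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def append(self, event: dict) -> dict:
        entry = {
            "seq": len(self.entries),
            "ts": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        # Canonical JSON (sorted keys) so the hash is reproducible on verify.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edit or reordering makes this False."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("seq", "ts", "event", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice such logs would also be shipped to write-once storage and access-controlled, but the chaining shown here is what lets an auditor confirm that events are complete and in the correct order.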
6.1.12 Flexible Feedback Control Mechanisms
In dynamic market environments, conditions can change in seconds due to global events, policy shifts, or unexpected volatility. Market participants must have the ability to switch between manual and automated feedback mechanisms as needed.
6.1.13 Compliance with Legal and Regulatory Standards
AI/ML models must fully comply with applicable legal and regulatory requirements.
The proposed Model Governance framework lays a solid foundation for the responsible integration of AI/ML technologies in India’s securities market. Focusing on competent oversight teams, continuous monitoring, and robust risk management will significantly enhance transparency and accountability. This framework is a strong step towards fostering responsible AI innovation while safeguarding market integrity and investor protection.
6.2 Investor Protection and Disclosures
In today’s tech-driven financial world, investors often interact with AI tools without even knowing it, whether it’s a chatbot resolving queries or an automated system executing trades on their behalf. That’s why it’s crucial for market participants to clearly and transparently disclose when AI/ML is being used in client-facing operations.
6.2.1 Transparent Use of AI/ML in Key Operations
Market participants must inform clients when AI/ML technologies that directly affect their financial decisions are used in business operations. This disclosure is essential to promote trust, transparency, and accountability. Examples of client-facing AI/ML applications include algorithmic trading (including high-frequency trading), asset management/portfolio management, and advisory and support services.
6.2.2 Clear Communication for Informed Decisions
Disclosures must be written in simple, easy-to-understand language to help clients clearly grasp the AI-powered services and products offered, enabling them to make well-informed choices. For example, when an investor uses an app to get AI-generated stock tips or portfolio advice, the platform should clearly state how the model works, its limitations, and what data it relies on.
6.2.3 Grievance Redressal Aligned with SEBI Norms
As reliance on AI/ML grows, so does the potential for system errors, incorrect recommendations, or biased outcomes. In such cases, a clearly defined and accessible grievance redressal mechanism aligned with regulatory standards becomes vital. It ensures that investor complaints related to AI systems are addressed in a fair, timely, and transparent manner.
The emphasis on transparent disclosure of AI/ML use in client-facing operations is commendable and crucial for building trust and accountability in the financial markets. The focus on simple, clear communication is essential to ensure accessibility of information and will also empower investors to make well-informed decisions.
6.3 Testing Framework
With market conditions shifting by the second due to economic news and geopolitical events, AI/ML models must be battle-ready before they’re unleashed.
6.3.1 Continuous Testing and Monitoring
Market participants are expected to perform routine and ongoing testing of AI/ML models throughout their lifecycle. For instance, a model that performs well under stable conditions must be revalidated if market volatility spikes or if a new data source is introduced. Continuous monitoring helps identify and resolve performance drifts proactively.
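A simple form of the ongoing monitoring described above is a drift check: compare the model's recent hit-rate against the accuracy it achieved at validation time and flag when the gap exceeds a tolerance. The sketch below is an illustrative assumption, not SEBI's prescribed test; the function name and the 10% tolerance are made up for the example.

```python
# Illustrative performance-drift check for a deployed model.
def check_drift(recent_outcomes, baseline_accuracy, tolerance=0.10):
    """Return True if recent accuracy has degraded beyond the tolerance.

    recent_outcomes: list of bools, one per recent prediction
                     (True = prediction was correct).
    baseline_accuracy: accuracy measured at validation time, e.g. 0.80.
    """
    if not recent_outcomes:
        return False  # nothing to judge yet
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return (baseline_accuracy - recent_accuracy) > tolerance
```

Real monitoring stacks track many more signals (input-distribution shift, latency, confidence calibration), but the principle is the same: an automated alert fires when live behaviour drifts from validated behaviour, triggering revalidation or retraining.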
6.3.2 Pre-deployment Testing Environment
AI/ML models must be evaluated in an isolated, non-production environment before they are deployed live. This testing should simulate both normal and stressed market conditions, such as high-volume trading days or geopolitical shock events, to confirm model stability and robustness before client exposure.
6.3.3 Shadow Testing with Live Data
Before full deployment, models should undergo shadow testing using real-time data to validate their quality and reliability in the production environment without affecting live outcomes.
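The essence of shadow testing is that the candidate model sees the same live inputs as the production model, but only the production model's decisions are executed; the candidate's outputs are merely recorded and compared. A minimal sketch of that pattern (the function and parameter names are illustrative assumptions):

```python
# Illustrative shadow-testing harness: the candidate model runs on live
# inputs but its decisions are only logged, never executed.
def shadow_run(production_model, candidate_model, live_inputs):
    """Return the inputs on which the two models disagree."""
    disagreements = []
    for x in live_inputs:
        live_decision = production_model(x)    # this decision executes
        shadow_decision = candidate_model(x)   # recorded for review only
        if shadow_decision != live_decision:
            disagreements.append((x, live_decision, shadow_decision))
    return disagreements
```

Reviewing the disagreement log over a representative stretch of live data gives evidence of the candidate's quality before it is promoted to production.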
6.3.4 Comprehensive Documentation and Data Retention
To ensure traceability and explainability, detailed records of each AI/ML model must be maintained for at least five years. This documentation supports accountability and is crucial for internal audits, dispute resolution, and future upgrades.
6.3.5 Ongoing Monitoring for Model Evolution
Since AI/ML models adapt based on new data inputs over time, participants must actively track their evolution to prevent unexpected outcomes. Hence, continuous monitoring tools and alerts must be deployed to track changes in model behaviour and ensure regulatory alignment.
These recommendations aim to ensure that AI/ML systems deployed in the securities market are not only high-performing but also safe, transparent, and adaptable to real-time market dynamics. This fosters investor confidence and upholds the integrity of the financial ecosystem.
6.4 Fairness and Bias
To uphold trust and prevent biased outcomes in AI-powered financial services, it’s recommended that market participants embed fairness at the core of their AI/ML systems, especially when these systems influence investment decisions or allocate financial products.
6.4.1 Preventing Bias and Discrimination
AI/ML models must treat all customers fairly and should not favor or discriminate against any group.
6.4.2 Ensuring Data Quality and Diversity
Imagine a fraud detection model trained only on urban transaction patterns; it may overlook rural anomalies. To avoid such blind spots, the data feeding the models must come from reliable, relevant, and diverse sources, and market participants must check that those sources are complete enough to meet the model’s goals.
6.4.3 Proactive Bias Management and Awareness
Market participants must implement appropriate processes to find and eliminate bias in data, and must provide specific training courses to raise awareness among their data scientists.
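One basic process for surfacing bias is to compare outcome rates across customer groups, a simplified form of the "demographic parity" check used in fairness auditing. The sketch below is illustrative only; the group labels and the idea of an approval decision are assumptions for the example, and real audits use richer metrics.

```python
# Illustrative bias check: per-group outcome rates and the gap between them.
def approval_rates_by_group(records):
    """records: iterable of (group, approved) pairs.

    Returns a dict mapping each group to its approval rate.
    """
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())
```

A large parity gap does not by itself prove discrimination, but it flags where the model's training data or features deserve closer scrutiny, which is exactly the kind of process the recommendation calls for.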
By focusing on equity at every stage from data collection to live deployment, these recommendations help to ensure that AI systems act fairly and build investor confidence in India’s increasingly automated financial ecosystem. Fair and non-discriminatory AI/ML models are crucial for building trust among all customers.
6.5 Data Privacy and Cybersecurity
As financial institutions increasingly adopt AI/ML systems that process sensitive investor data in real time, whether through robo-advisors, online onboarding, or trade execution, it is essential that data privacy and cybersecurity are prioritised. These recommendations aim to ensure that, as technology advances, investor protection keeps pace.
Market participants must ensure strong data privacy and security policies, comply with all legal standards, and promptly report any technical issues or data breaches to SEBI and relevant authorities.
6.5.1 Robust Data Privacy and Security Policies
Market participants must implement clear and comprehensive policies to safeguard data privacy, security, and cybersecurity in all AI/ML operations. This includes encrypting customer information used in KYC verifications and applying multi-layered defences to guard against cyber intrusions, especially in an era of increasing phishing attacks and ransomware threats targeting financial platforms.
6.5.2 Adherence to Legal and Regulatory Standards
All collection, usage, and processing of investors’ personal data must strictly comply with India’s data protection laws and relevant sectoral regulations.
6.5.3 Promptly Report Breaches and Disruptions
Any technical glitches or data breaches, such as a chatbot leaking personal trading data or a failed algorithm exposing confidential investor profiles, must be promptly reported to SEBI and the relevant authorities in accordance with existing regulatory requirements.
The focus on strong data privacy and cybersecurity policies is essential to protect investor information in AI/ML applications. Strict compliance with legal standards ensures trust and accountability. Prompt reporting of any technical issues or breaches to SEBI further strengthens transparency and safeguards market integrity.
7. Conclusion
SEBI’s comprehensive guidelines for the responsible use of AI and ML mark a significant milestone in shaping a safer, smarter, and more transparent securities market. By laying down clear principles around model governance, investor protection, fairness, continuous monitoring, and robust data privacy, SEBI is promoting innovation while ensuring growth, stability, and fairness for all.
Disclaimer: The content/information published on the website is only for general information of the user and shall not be construed as legal advice. While Taxmann has exercised reasonable efforts to ensure the veracity of the information/content published, Taxmann shall be under no liability in any manner whatsoever for incorrect information, if any.
