
India
AT A GLANCE
AI is increasingly used across the justice system, particularly by law enforcement for data analysis, crime detection, surveillance, and predictive policing. Tools such as facial recognition and AI-enhanced CCTV analysis (e.g., AMPED FIVE) improve investigations and evidence handling. Prosecutors employ AI to support case management, including filing first information reports and analysing witness testimony. Courts use AI under the e-Courts project for automated case handling, translation (SUVAS), and legal research and summarisation (SUPACE), while platforms like Nyaay AI and Adalat AI enhance efficiency and accessibility. Training programmes for judges and police are expanding AI literacy to strengthen justice delivery.
India does not have dedicated AI legislation. Local policies and guidelines for lawyers and judges are emerging, and general rules of criminal procedure also apply. There are plans for a Digital India Act to replace the Information Technology Act, 2000, which may contain provisions expressly regulating AI.
USE
Law enforcement
Data review and analysis
In a press release dated 25 February 2025, the Indian Ministry of Law and Justice confirmed that AI is being integrated into policing and law enforcement to enhance crime detection, surveillance, and criminal investigations. Examples include facial recognition systems matched against databases for surveillance and investigation, and AI-powered forensic analysis for examining evidence. Delhi Police have been using AI-based image enhancement and facial recognition tools to aid investigations. For example, the AMPED FIVE system is used to clarify low-quality CCTV footage. The enhanced images are then matched against various government and private databases, including driving licences, voter rolls, and vehicle records.
According to the Ministry of Law and Justice, AI is also being used to improve crime tracking and intelligence systems such as the Crime and Criminal Tracking Network Systems (CCTNS), and to integrate them with India's e-Prisons database (for digital management and tracking of information about inmates and prisons) and e-Forensics database (for storing, managing and sharing forensic evidence data and laboratory reports electronically).
Predictive analytics
The Ministry of Law and Justice has confirmed that AI is used in India for predictive policing by analysing crime patterns, high-risk areas, and criminal behaviour, enabling law enforcement to take proactive measures.
Prosecutors
Case management
In its press release dated 25 February 2025, the Ministry of Law and Justice also confirmed that AI is being used for assistance in filing first information reports, and for tools to examine and enhance witness testimony and courtroom evidence.
Courts
As at September 2025, AI use in Indian courts is focused on improving efficiency and reducing time spent on routine tasks. India's judiciary has a backlog of over 50 million cases (including but not limited to criminal matters), which would take human judges more than 300 years to resolve at the current pace.
Case management
India's e-Courts initiative, launched in 2004 and spearheaded by the Supreme Court of India, is a government-funded nationwide project incorporating AI for automated case management, legal research, and document digitisation, aiming to reduce case backlogs and improve judicial efficiency (including in criminal proceedings). The project is being implemented as a joint partnership between the Department of Justice (Ministry of Law and Justice, Government of India) and the Supreme Court of India's e-Committee, in a centralised manner through the respective High Courts.
| Phase | Details |
| --- | --- |
| Phase I | Approved in 2010; enabled computerisation of 14,249 district and subordinate courts by 2015 |
| Phase II | Commissioned in 2015; involved enhancing judicial services for litigants and lawyers by providing them with technology-enabled infrastructure, including, inter alia, improved ICT infrastructure and video conferencing |
| Phase III | Approved in 2023, with the main objective of creating a unified technology platform for the judiciary. AI technologies and blockchain are being incorporated across High Courts in India up to 2027. A formal data-governance framework is required for all AI technologies used in e-Courts software applications. As at September 2025, these projects are operational but are yet to realise full implementation |
Potential AI-driven enhancements involved in Phase III include:
- Intelligent scheduling and case timeline prediction: AI-powered tools are deployed to optimise court scheduling, reduce delays, and ensure timely hearings. Predictive analytics can assess case timelines, offering valuable insights into probable outcomes based on historical data.
- Automated filing and case management: AI-driven document automation and OCR technologies are enhancing the accuracy and speed of filing legal documents. These systems minimise manual data entry, improving efficiency and reducing administrative burdens on court staff.
- Natural language processing for legal research and translation: natural language processing tools support legal research by giving judges and lawyers quick access to relevant precedents and judgments. Additionally, AI-based translation systems facilitate multilingual access to legal resources.
- Enhanced case information systems and chatbots: AI-powered chatbots are deployed to assist litigants by providing real-time case updates and procedural guidance. These chatbots reduce dependency on court officials for basic queries and improve user engagement with the judicial system.
- Data security and privacy measures: acknowledging the imperative of data protection, a sub-committee composed of High Court judges and technical specialists has been constituted to guarantee secure connectivity and authentication protocols. This committee evaluates and enhances the e-Courts project’s digital infrastructure to protect citizen privacy.
The Supreme Court of India has also piloted two AI initiatives:
| Tool | Details |
| --- | --- |
| SUVAS | The Supreme Court Vidhik Anuvaad Software (SUVAS) is an AI tool for translating documents and orders from English into regional languages, such as Hindi, Punjabi and Gujarati. Launched in November 2019, it uses AI-trained, machine-assisted translation. SUVAS has been used to translate criminal judgments and has already translated 31,184 Supreme Court judgments into 16 regional languages, including Hindi, Tamil, Marathi, Bengali, and Kannada. Translated rulings carry a disclaimer absolving the Supreme Court of responsibility for any errors in the translation. While developing SUVAS, the Supreme Court encountered the lack of a unified vocabulary for legal terminology in regional languages, which led to inaccuracies in the translation process. In response, the Bar Council of India has been working on developing a legal vocabulary to be used across all courts in India for translation into several regional languages. |
| SUPACE | The Supreme Court Portal for Assistance in Court Efficiency (SUPACE) is an AI-driven tool assisting judges by summarising case files. The system processes facts and makes them available to judges looking for input for a decision. SUPACE operates in four parts. |
Moreover, the Supreme Court is testing, with IIT Madras, prototypes of AI tools for resolving filing defects and for data and metadata extraction; these tools will be integrated with the electronic filing module and the case management software for smart scheduling and backlog reduction.
‘Nyaay AI’ is an AI platform developed by PanScience Innovations and used by the Indian Supreme Court and 16 of India's 25 High Courts. The platform is a commercial tool and was co-created with the judiciary to ensure alignment with real judicial needs.
Our suite of products spans defect detection, e-filing automation, case clustering, metadata extraction, bench allocation, headnote generation, live transcription, multilingual translation, and judgment analysis—all designed to make courts faster, more efficient, and more accessible.
‘Adalat AI’ is a legal-tech nonprofit which provides AI-based real-time transcription in Indian courts, converting court proceedings (witness statements, cross-examinations, judgments, orders) into text in real time, assisting or augmenting the work of human stenographers. It also offers translation support, enabling statements and judgments to be translated into regional languages so that participants understand them. As at September 2025, Adalat AI has been integrated into over 2,000 courts across eight Indian states, with the goal of reaching 50% of all courts in India by the end of 2025. Some courts have reported that Adalat AI has reduced case timelines by 30-50% by alleviating pressure on overburdened courts.
A crucial aspect of the criminal justice system is ensuring that case papers are easily accessible in multiple languages. The accused may not know English, or even Hindi, but could speak one of India’s many other languages or dialects. For justice to be meaningful, documents must be available in a language the individual understands. This requires a strong, well-designed system. Unfortunately, the current framework is outdated — it hasn’t been updated for seven or eight years — while technology has advanced rapidly. This gap calls for a serious re-examination, especially since, based on personal experience, the existing system still falls short.
Legal research, analysis and drafting support
The SUPACE system piloted in the Supreme Court (discussed above) is an AI portal designed to make research easier for judges by highlighting relevant legal precedents to the judge using it. The former Chief Justice of India has confirmed that the tool will not extend into judicial decision-making.
India's National Informatics Centre has also developed a retrieval-augmented-generation-based judicial search assistant for Supreme Court justices (a RAG-based model combines information retrieval with language generation to produce more accurate answers). It will serve as a specialised search service to help judges quickly find specific information. The tool will first be available to all of India's Supreme Court judges, with plans to scale the judicial assistant across the entire judicial system.
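To illustrate the retrieval-augmented-generation approach described above, the following minimal Python sketch shows the two stages (retrieval of relevant passages, then generation of an answer grounded in them) in toy form. The corpus, the keyword-overlap scoring, and the generate_answer() stub are hypothetical placeholders for illustration only; the National Informatics Centre's actual implementation has not been published in this detail.

```python
# Minimal, illustrative RAG sketch (assumed toy corpus and scoring; not the NIC system).
from collections import Counter

# Hypothetical passages a judicial search assistant might index.
CORPUS = [
    "Bail may be granted where the accused poses no flight risk and no threat to witnesses.",
    "Electronic records must be proved in the manner prescribed for documentary evidence.",
    "The right to privacy has been held to flow from the protection of life and liberty.",
]

def score(query: str, passage: str) -> int:
    """Toy relevance score: number of distinct passage words that also appear in the query."""
    query_terms = Counter(query.lower().split())
    return sum(1 for word in set(passage.lower().split()) if word in query_terms)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Retrieval step: return the k passages most relevant to the query."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def generate_answer(query: str, passages: list[str]) -> str:
    """Generation step (stub): a real system would pass the retrieved passages
    to a large language model as grounding context and return its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Question: {query}\nGrounding passages:\n{context}\n[LLM-generated answer would appear here]"

if __name__ == "__main__":
    question = "When may bail be granted?"
    print(generate_answer(question, retrieve(question)))
```

In this sketch, grounding the generation step in retrieved passages is what distinguishes RAG from querying a language model alone, and is the feature said to produce more accurate answers.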
Judges in India have voluntarily disclosed their use of AI tools. There are examples of judges using ChatGPT while clarifying that this was not done to guide judicial opinion or to delegate decision-making. These are discussed below (see ‘Cases’).
Decision-making support
India's government has announced plans to introduce AI into the judiciary to expedite verdicts in ‘petty crime’ and traffic cases. Under these so-called ‘robo judge’ plans, human judges will use AI to process case information, background details, and past orders in order to deliver verdicts faster.
Defence
In a survey conducted by Manupatra Academy between 9 and 19 May 2025, more than half of the legal professionals surveyed (including those working on criminal cases) reported that they currently use AI tools in their operations, while 73.7% had interacted with generative AI applications for their work. However, more than half of the surveyed users noted barriers to widespread adoption of AI in legal enterprises, particularly data privacy and the security of sensitive client information. Users also expressed reservations about the quality and reliability of AI-generated content.
Administrative support
‘Lucio’ is an AI-powered legal assistant that helps law firms, in-house teams, and researchers extract, analyse and manage legal documents efficiently. Lucio has been integrated into the Bar Council of Delhi’s e-research library, launched in July 2025.
TRAINING
AI training is being provided to members of the Indian judiciary. In the May 2025 survey by the Manupatra Academy (see above), 88.6% of the legal professionals surveyed believed that AI training is crucial.
In December 2024, the National Judicial Academy in Bhopal hosted a workshop for judges of various High Courts across India on information and communication technologies, including dedicated sessions on the use of AI in the courtroom.
Moreover, from April 2025 judicial officers presiding over trial courts began receiving training on the use of AI tools to aid and expedite judicial decision-making, particularly in ‘routine cases’. While no official reports are available, the press reporting on these training efforts clarifies that the training pertains to the use of AI tools to process case information, background details, and past orders towards greater efficiency, rather than in an effort to augment or replace judicial decision-making.
Police training institutions have also begun integrating AI into their professional development programmes. At the Police Training Centre in Medchal, Hyderabad, officers attended a refresher course in August 2025 led by an AI expert. The session ‘offered invaluable hands-on and insightful training on how AI can be leveraged in modern policing’. Training focused on phishing and scam detection, deepfake identification, social media analysis, mobile tracking, digital evidence collection, and voice recognition.
For lawyers, AI training programmes are being offered by private educational or training organisations in partnership with or certified by governmental authorities. For example:
- The ‘Artificial Intelligence, Law and Justice’ course was offered by IIT Madras in coordination with the National Programme on Technology Enhanced Learning under the aegis of the Ministry of Human Resource Development and the Government of India's ‘Swayam’ initiative.
- LawSikho, a private training company, offers a certificate programme jointly with NALSAR University and the National Skill Development Corporation, a company set up as a public-private partnership under the aegis of the Ministry of Skill Development and Entrepreneurship, Government of India.
Moreover, in May 2024, the Bar Council of India directed that all centres of legal education regulated by the Council incorporate ‘modern technologies’, including AI, into their curricula.
Today’s police force must be as smart as the criminals they face. Our constables and officers must be equipped not only with weapons and discipline but also with awareness and digital intelligence. Training like this prepares them for real-world challenges.
REGULATION
There is no general statute regulating the use of AI in India, although some sectoral regulations and guidance have been issued. For example, in a 2024 advisory, the Union Ministry of Electronics and Information Technology issued guidance to platforms and intermediaries deploying or developing AI. There is also no general statute on the use of AI tools in court, but local guidelines for lawyers are beginning to emerge.
Guidelines for practitioners
Judges and lawyers are bound by their general ethical and professional obligations, which may apply to the use of AI tools even if AI is not mentioned. For example, the Bar Council of India has issued Rules of Professional Standards which include the duties advocates owe to the court and to the client. In addition, local guidelines on the use of AI in court are beginning to emerge.
Bombay Bar Association: Guidelines on the use of AI by the judiciary
In early July 2025, the Bombay Bar Association issued guidelines on the use of AI to its members, described as ‘the first of their kind to govern the ethical and practical use of artificial intelligence in the judicial process’.
State of Kerala: Policy Regarding Use of Artificial Intelligence Tools in District Judiciary
Later in July 2025, the Kerala High Court became the first High Court in India to introduce a Policy Regarding Use of Artificial Intelligence Tools in District Judiciary. The policy applies to ‘the District Judiciary in Kerala’ and the ‘employees assisting them in their diverse judicial work’. Violations may result in disciplinary action. The policy warns against the ‘indiscriminate use’ of AI, which may result in privacy violations, data security breaches, and the ‘erosion of trust in judicial decision-making’. The policy is aimed at ensuring that ‘AI tools are used only in a responsible manner’ and in compliance with ‘ethical and legal obligations’ to ensure ‘human supervision, transparency, fairness, confidentiality, and accountability’. The policy prohibits the use of AI tools ‘as a substitute for decision-making or legal reasoning’. Only AI tools approved by the High Court of Kerala or the Supreme Court of India may be used. AI may be used ‘solely as an assistive tool’ under strict human supervision. Any results generated by AI tools must be verified by judicial staff, qualified translators or the judges themselves. Courts must keep a ‘detailed audit’ of all instances of the use of AI tools. Members of the judiciary and the employees assisting them must participate in training programmes on the ethical, legal, technical and practical aspects of AI.
Criminal procedure rules
Certain aspects of Indian evidentiary and criminal procedure will likely regulate the use of AI in criminal proceedings even though AI is not specifically targeted or mentioned. Some key provisions are set out below.
First, section 63 of the Bharatiya Sakshya Adhiniyam, 2023, an Act ‘to consolidate and to provide for general rules and principles of evidence for fair trial’, directs how ‘electronic records’ are to be proved in evidence. This will likely cover how AI-generated materials, server logs, and related matters are required to be proved in both civil and criminal proceedings.
Second, section 72 of the Bharatiya Nyaya Sanhita, 2023, an Act to consolidate and amend the law on criminal offences, prohibits disclosure of the identity of victims of sensitive offences, particularly sexual assault and other serious crimes, to prevent stigma, trauma, or threats to victim safety. Similarly, section 23 of the Protection of Children from Sexual Offences Act, 2012 prohibits disclosure of the identity (including name, address, school, family details, or photographs) of a victim. These provisions may affect the data sets that can be collated for use with AI tools.
Finally, section 94 of the Bharatiya Nagarik Suraksha Sanhita, 2023, an Act ‘to consolidate and amend the law relating to Criminal Procedure’, authorises courts or investigating officers to call for the production of a document or other thing in the course of an investigation or criminal proceeding. A document is defined under the Bharatiya Sakshya Adhiniyam (see above) to include an electronic record. This will therefore likely cover the power of Indian law enforcement agencies and criminal courts to call for the production of records relating to AI platforms.
Data protection legislation
There are various laws that do not directly seek to regulate AI, but may affect the development or use of AI in India. These laws are designed to be technology-agnostic (i.e., the principles in these laws are intended to apply, regardless of which technologies are in use). Key examples are set out below.
Information Technology (Sensitive Personal Data and Information) Rules, 2011
In respect of data privacy, the Information Technology (Sensitive Personal Data and Information) Rules, 2011 (SPDI Rules), issued under the Information Technology Act, 2000 (see below), govern the collection and disclosure of sensitive personal data and information, defined to include, inter alia, biometric data (including facial patterns, fingerprints, etc.) and financial information. The SPDI Rules apply to ‘bodies corporate’ collecting sensitive personal data and information, and may not apply to the collection of such data by law enforcement agencies. However, where law enforcement agencies use third-party services to process or analyse sensitive personal data or information, and particularly where such third parties store that data on their own servers, the SPDI Rules may govern its collection and/or processing.
Information Technology Act, 2000
In addition, any unauthorised access (by third parties) to user data uploaded to an AI platform, and/or illegal dissemination of such data by the AI platform, may have legal consequences under the Information Technology Act, 2000, including under the following provisions:
| Provision | Summary |
| --- | --- |
| Section 43 | Imposes a civil penalty (requiring payment of compensation to a victim) for damage to computer systems, including unauthorised access, data theft, introducing computer viruses, disrupting systems, or denying access to authorised users |
| Section 43A | Holds bodies corporate liable to pay compensation for failure to protect sensitive personal data, where negligence in implementing security practices causes wrongful loss or gain |
| Section 66 | Criminalises fraudulently committing any of the acts covered under section 43 (above) and prescribes a jail term of up to three years and/or a fine |
| Section 72 | Penalises any person who, having access to electronic records or information by virtue of powers under the Act, discloses such information without consent, with imprisonment of up to two years or a fine |
| Section 72A | Provides punishment for disclosure of information in breach of a lawful contract: any person, including an intermediary, who secures access to personal information while providing services under a contract and discloses it without consent, causing or likely to cause wrongful loss or gain, is punishable with a fine |
Digital Personal Data Protection Act, 2023
While the Digital Personal Data Protection Act, 2023 (DPDPA) has been passed by Parliament, it has not yet come into force as at September 2025. The DPDPA provides that the state or its instrumentalities are authorised to process personal data for certain uses which are likely to include investigative operations by law enforcement agencies. But the same considerations applicable to private contractors/service providers under the SPDI Rules are also likely to apply to such services under the DPDPA (once in force). In the event of a failure by such an AI service provider to maintain reasonable security practices in respect of the data collected or processed by it, such service provider may be subject to a fine, which may extend to INR 50 Crore (approximately USD 5.64 million at September 2025 exchange rates).
Human rights
Domestic and international human rights instruments may be relevant to the use of AI tools in court. For example, article 14 of the Indian Constitution guarantees equality before the law and article 21 protects a person’s life and liberty. Indian courts have recognised that the right to privacy flows from the protections to life and liberty under article 21 of the Indian Constitution. At the same time, articles 14 and 17 of the International Covenant on Civil and Political Rights and articles 16 and 40 of the Convention on the Rights of the Child, to both of which India has acceded, protect the right to a fair trial and the right to privacy.
Outlook
As regards the approach to AI regulation in general, the National Institution for Transforming India (NITI Aayog), a government think tank, published a paper on India's National Strategy for Artificial Intelligence in 2018. While not limited to the legal sector, the NITI Aayog identified concerns over ‘[p]rivacy and security, including a lack of formal regulations around anonymisation of data’ as one of the challenges to AI adoption in India.
A draft Digital India Act, 2023, has been proposed to replace the Information Technology Act, 2000 (see above). The bill is expected to include provisions regulating AI, although details remain to be confirmed. In 2024, the Union Ministry of Electronics and Information Technology published an AI Governance Guidelines Report, which recommended that the government ‘suggest specific measures … under the proposed legislation like Digital India Act (DIA) to strengthen and harmonise the legal framework’ and encourage ‘the participants in the AI ecosystem to self-regulate each other and develop lightweight but effective outcomes-focused regulations for timely intervention on the part of the regulator which allow the liability to be attributed to the defaulting parties’.
With respect to the legal sector, the Research Unit of the Ministry of Law and Justice concluded, in a February 2025 press release on the Digital Transformation of Justice: Integrating AI in India's Judiciary and Law Enforcement, that ‘responsible AI adoption requires strong data security, legal reforms, and transparency to ensure it supports rather than replaces human judgment in judicial processes’ and that the future of AI in the legal sector would be ‘shaped by AI-powered legal research, blockchain-secured case records, judicial transparency through AI analytics, and enhanced cybersecurity in law enforcement’.
CASES
The use of AI tools has been discussed in a number of cases before the Indian courts.
In some instances, AI tools have been consulted in the decision-making process. For example, in Jaswinder Singh v. State of Punjab, CRM-M-22496-2022 (27 March 2023), the Punjab and Haryana High Court asked ChatGPT for feedback to gain a more comprehensive understanding of whether bail should be granted in cases where the alleged crime involved ‘cruelty’. The judge clarified that this reference was solely intended to provide a broader understanding of bail jurisprudence and did not dictate the merits of the case. Similarly, in Md Zakir Hussain v. State of Manipur, WP(C) No. 70 of 2023 (23 May 2024), the Manipur High Court used ChatGPT for research (without delegating the decision-making) in a service matter when the state government failed to furnish the judge with essential information on the service rules of Village Defence Force personnel.
In Christian Louboutin SAS v. The Shoe Boutique, CS(COMM) 583/2023 (22 August 2023), the plaintiffs relied on a ChatGPT response affirming that they were known for their iconic red-soled shoes to support an argument that the defendant had breached trademark restrictions. The Delhi High Court found in favour of the plaintiffs but rejected the ChatGPT response, observing that:
[T]he responses from ChatGPT cannot be the basis of adjudication of legal or factual issues in a court of law. The response of a Large Language Model (LLM) based chatbots such as ChatGPT depends upon a host of factors including the nature and structure of the query put by the user, the training data etc. Further, there are possibilities of incorrect responses, fictional case laws, imaginative data etc. generated by AI chatbots. Accuracy and reliability of AI generated data is still in the grey area. At the present stage of technological development, AI cannot substitute either the human intelligence or the humane element in the adjudicatory process. At best the tool could be utilised for a preliminary understanding or for preliminary research and nothing more.