
Brazil

Tools
APOIA | ASSIS | ATHOS | ChatGPT | Claude | Detecta | Etiquetas Inteligentes | HORUS | Jarvis | Legal Intelligent Advisor | LuminarIA | MANADUS | POTI | Projeto Sócrates | RADAR | SAAJUS | SCRIBA | Smart Sampa | Specialised systems | VICTOR
Tasks
Case management | Charging support | Data review and analysis | Decision-making support | Evidence review and analysis | Legal research, analysis and drafting support | Operational support
Users
Law enforcement | Prosecutors | Courts | Defence
Scope
Nationwide
Training
Yes, mandatory and mainly for judges
Regulation
The Brazilian National Council of Justice has issued detailed guidelines on the use of AI within the judiciary (Resolution No. 615/2025). The Brazilian Bar Association has likewise issued guidance for practitioners. Brazil is seeking to establish a comprehensive AI regulatory framework.
Insights
Monitoring compliance with judicial regulations on the use of AI remains a challenge

AT A GLANCE

Brazil is a global leader in AI adoption in criminal justice, with tools spanning law enforcement, prosecutors, courts (for criminal and civil matters), and defence. Police use platforms like Detecta and satellite-based monitoring, while cities expand facial recognition and integrated surveillance (such as Smart Sampa). Prosecutors rely on systems such as LuminarIA, Jarvis, and video-analysis AI to manage cases and evidence efficiently. Courts deploy over 140 AI tools for case management, legal research, and predictive analysis, including APOIA, ASSIS, and VICTOR. Public Defenders are considering piloting AI for drafting and case review. Judges receive mandatory training on AI use, risks, and biases.

Brazil is also taking a leading role in shaping global AI regulation. It is currently in the process of establishing a comprehensive AI regulatory framework inspired by the EU’s AI Act and is one of a few countries with detailed judicial guidelines on the use of AI in the judiciary. The judicial guidelines (Resolution No. 615/2025) establish a risk matrix for AI systems in use and introduce specific provisions on the use of generative AI. They emphasise the need for human oversight, transparency in the use of AI, explainability (i.e., enabling users to understand how and why a specific outcome was produced) as well as respect for fundamental rights, including the 'right to a full defence' and due process. Federal judges interviewed noted that while there is an interest in adopting AI broadly within the judiciary, it is a deliberate choice not to extend its use into the criminal sphere, except in areas peripheral to adjudication and, above all, only where it can be ensured that defendants are in no way placed at a disadvantage.

USE

Since September 2020, Brazil's National Council of Justice, in collaboration with the UN Development Programme, has led the Justice 4.0 Program, a technological modernisation initiative driving digital transformation across the Brazilian judiciary. Today, Brazil is a global leader in AI adoption. In courts, AI tools span civil and commercial matters, though since the National Council of Justice's Resolution No. 332/2020, there has been clear discouragement of the development and use of AI tools in criminal adjudication.

In August 2020, the National Council of Justice (CNJ), the administrative body overseeing Brazil's judiciary, issued guidelines on the development and use of AI in the judiciary by adopting Resolution No. 332/2020. At that time, only predictive AI systems were in use, though none were applied in criminal matters. Indeed, influenced by the case of State v Loomis, 881 N.W.2d 749 (Wis. 2016), the resolution expressly discouraged the use of predictive AI models in criminal matters, albeit with some narrow exceptions. It was designed for computational solutions 'aimed at supporting procedural management and enhancing the effectiveness of judicial services'. In March 2025, the CNJ updated these guidelines in response to the emergence of generative AI and its growing use within the judiciary by adopting Resolution No. 615/2025 ('Resolution'), revoking the earlier Resolution No. 332/2020. The preamble of the new resolution acknowledges AI's potential role in supporting decision-making and states that specific regulations on the use of generative AI are 'indispensable'. Federal judges interviewed, however, noted that while there is an interest in adopting AI broadly within the judiciary, it is a deliberate choice not to extend its use into the criminal sphere, except in areas peripheral to adjudication and, above all, only where it can be ensured that defendants are in no way placed at a disadvantage.

Law enforcement


Operational support 

São Paulo has implemented a smart surveillance and policing platform called 'Detecta', designed to connect police data, automate threat detection, and enable preemptive responses. The platform integrates multiple data sources, such as civil and military police databases, digital incident reports, criminal photo registries, vehicle and driver data, and real-time CCTV feeds. Detecta was promoted as a multi-platform tool capable of automatically detecting potentially suspicious behaviour, such as a motorcycle parked in the middle of traffic.
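
To make the rule-based flavour of such alerts concrete, the sketch below shows one way a 'motorcycle parked in the middle of traffic' rule could be expressed. It is a hypothetical illustration, with invented field names and thresholds, not Detecta's actual logic.

```python
# Hedged sketch of the kind of rule described for Detecta (not its actual
# logic): flag a tracked object as suspicious when it is a motorcycle that
# has remained stationary on a roadway beyond a time threshold.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_class: str         # label from an upstream video-analytics detector
    zone: str                 # map zone the object occupies, e.g. "roadway"
    stationary_seconds: float # how long the object has not moved

def is_suspicious(obj: TrackedObject, threshold_s: float = 60.0) -> bool:
    """Example rule: a motorcycle parked in the middle of traffic."""
    return (obj.object_class == "motorcycle"
            and obj.zone == "roadway"
            and obj.stationary_seconds >= threshold_s)

print(is_suspicious(TrackedObject("motorcycle", "roadway", 120.0)))  # True
```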

The Federal Police of Brazil, the Brazilian Ministry of Justice and Public Security, and Planet Data and SSCON Geospatial have collaborated to leverage satellite data and develop a change detection alert system, making near real-time information regarding illicit activities, such as environmental crimes and illegal mining, accessible to Brazilian government agencies. 

Data review and analysis

As of 2019, AI-driven facial recognition systems had been adopted in at least 30 Brazilian cities, deployed for public safety and fraud prevention. Issues of discrimination based on race and gender identity have arisen. In 2019, for example, a black woman was arrested in Rio de Janeiro after her face was mistakenly identified by a smart camera installed in the Copacabana region during a pilot project; she was mistaken for a suspect who was already serving a prison sentence at the time.


Nonetheless, the use of technology for public safety remains in the plans of several public managers, mayors, and governors. For example, 'Smart Sampa', a project by the city of São Paulo, aims to roll out a single video-surveillance platform that integrates and supports the operations of emergency and traffic services, the city's public transport network, and police forces. Under the project, up to 20,000 cameras were to be installed in 2024, with an equal number of third-party and private cameras integrated into the network. The combination of real-time analytics and facial-recognition technology, which detects and compares faces in a given space using AI algorithms, is meant to expedite the identification of wanted criminals, stolen cars, missing persons, and lost objects.
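
To illustrate the kind of comparison such systems perform, the sketch below matches a detected face embedding against a watchlist by cosine similarity with a threshold. It is a minimal, hypothetical example (the embeddings, names, and threshold are invented), not Smart Sampa's actual pipeline; a loose threshold raises the risk of false matches of the kind seen in the Rio de Janeiro case above.

```python
# Illustrative sketch of threshold-based face matching (hypothetical data,
# not any deployed system): compare a probe embedding against a watchlist.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return watchlist identities whose similarity exceeds the threshold."""
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in watchlist.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

# Hypothetical 128-dimensional embeddings produced upstream by a face encoder.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.05, size=128)  # noisy re-detection
print(match_against_watchlist(probe, watchlist))  # only person_a should match
```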

Prosecutors 

Case management 

The Public Prosecutor's Office of the Federal District and Territories uses a tool called 'LuminarIA', developed to automate the processing of low-complexity cases. The system analyses proceedings, verifies requirements, and suggests appropriate measures, saving prosecutors' time.

Another notable initiative is ‘Jarvis’, a hearing transcription and analysis system. It allows prosecutors to access structured summaries of testimonies, enabling them to compare versions and identify inconsistencies.  

Charging support 

'Etiquetas Inteligentes' ('smart labels') is used by the Ministério Público de São Paulo (São Paulo Public Prosecutor's Office, MPSP). It is an AI-driven feature that automatically identifies the procedural stage of case files, such as penalty-calculation adjustments, sentence progression, and remission, and assists by suggesting the proper type of petition to be used, reducing manual triage. Since April 2023, it has been applied in over 3,000 cases across four MPSP units: Bauru, Ribeirão Preto, Presidente Prudente, and São José do Rio Preto.
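
The labelling-and-suggestion pattern described can be pictured as an upstream classifier feeding a lookup table, as in the hypothetical sketch below; the stage labels and template names are invented, and this is not MPSP's implementation.

```python
# Hypothetical sketch of the pattern described for Etiquetas Inteligentes:
# a classifier assigns a procedural-stage label, and a lookup maps that
# label to a suggested petition template. All names here are invented.
PETITION_BY_STAGE = {
    "penalty_calculation_adjustment": "petition_recalculate_penalty.docx",
    "sentence_progression": "petition_regime_progression.docx",
    "remission": "petition_remission.docx",
}

def suggest_petition(stage_label: str) -> str | None:
    """Return a template for a recognised stage, or None to trigger manual triage."""
    return PETITION_BY_STAGE.get(stage_label)

assert suggest_petition("remission") == "petition_remission.docx"
assert suggest_petition("unknown_stage") is None  # falls back to a human
```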

Evidence review and analysis

The Ministério Público do Estado do Rio Grande do Sul (Public Prosecutor's Office of the State of Rio Grande do Sul), in partnership with Xertica.ai, has deployed a generative AI solution that reduced video-analysis time by up to 90%. The tool enables automatic transcription with diarisation, generation of summaries, detection of contradictions, and sentiment and bias analysis. Between November 2024 and May 2025, it processed over 23,400 videos, saving over 11,500 hours of work.
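
A minimal sketch of the diarisation-dependent step such tools rely on is shown below: it groups an already-diarised transcript by speaker so that per-witness summaries and contradiction checks can run downstream. The segment data and field names are hypothetical, and this is not the Xertica.ai implementation.

```python
# Minimal sketch (hypothetical data, not the deployed product): organise a
# diarised transcript into per-speaker statements in chronological order.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds into the recording
    speaker: str   # label assigned by an upstream diarisation model
    text: str

def group_by_speaker(segments: list[Segment]) -> dict[str, list[Segment]]:
    """Collect each speaker's utterances, sorted by time of utterance."""
    grouped: dict[str, list[Segment]] = defaultdict(list)
    for seg in sorted(segments, key=lambda s: s.start):
        grouped[seg.speaker].append(seg)
    return dict(grouped)

segments = [
    Segment(12.4, "witness_1", "I saw the car at around 10 pm."),
    Segment(47.9, "witness_2", "The street was empty all night."),
    Segment(95.0, "witness_1", "Actually, it may have been closer to midnight."),
]
for speaker, utterances in group_by_speaker(segments).items():
    print(speaker, "->", [u.text for u in utterances])
```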

Courts 

AI has already been adopted in at least half of Brazilian courts, including the Brazilian Supreme Federal Court (Brazil's constitutional court) and the Superior Court of Justice (Brazil's highest court for non-constitutional matters and matters not reserved for specialised courts). This practice stretches back over six years: a survey carried out in 2021 found that 47 of Brazil's 91 federal, state and specialised courts had been using some type of AI since 2019.

Today, Brazilian judges have adopted 140 predictive AI systems (most of which may also be used in civil proceedings), rolled out through both top-down and bottom-up approaches, with additional generative AI tools emerging to assist with drafting and legal research. These predictive systems are used, for example, for case classification, similarity analysis and clustering, mass-litigation detection, and workload forecasting.

Case management 

As part of the Justice 4.0 Program, 'Plataforma Codex' was developed by the Tribunal de Justiça de Rondônia in partnership with the National Council of Justice to serve as a 'data lake' for procedural data, consolidating content from judicial case files into a centralised and standardised repository. In March 2022, through Resolution No. 446/2022, the National Council of Justice made 'Plataforma Codex' the official, mandatory tool for extracting judicial data across courts in Brazil. As at August 2025, the database contains more than 386 million lawsuits, and new decisions can be uploaded in less than two hours. This tool can be incorporated into AI systems.

Examples of simple automation systems (without an ‘intelligent’ model) used for case management in Brazilian courts are: 

MANADUS and SCRIBA 

Developed by the Roraima Court of Justice, MANADUS assists in distributing cases to bailiffs according to zoning and location criteria. The tool seeks to guarantee the enforcement of warrants and to provide real-time data updates on court summonses: the bailiff can immediately register in the court's system that a party has been officially served with court papers, and the tool can be used as an app on the bailiff's mobile device.

SCRIBA automatically transcribes hearings and sessions, but it still cannot distinguish between different voices; a civil servant must manually attribute each statement to the corresponding speaker.

Neither project yet uses AI in its working structure, but both are expected to incorporate machine-learning techniques for classifying the risk of non-compliance with warrants and for allocating bailiffs according to their capacity to serve them.

POTI

POTI is a project conducted by the Rio Grande do Norte Court of Justice in partnership with the Federal University of Rio Grande do Norte, delivering products that automate bank-account blocking procedures. The system automatically searches bank accounts for specific amounts, updates the value of tax-enforcement actions, and transfers blocked amounts to official accounts.

RADAR

RADAR was developed by the Minas Gerais Court of Justice to identify repetitive claims.

SAAJUS

SAAJUS was developed and implemented by the Federal Justice of Rio Grande do Norte in partnership with the Federal University of Rio Grande do Norte to streamline the processing of legal proceedings. The system reads petitions for tax foreclosures and active-debt certificates, captures the relevant data, prepares the initial order, and forwards the case for signature.

 

Other systems adopted by Brazilian courts do implement intelligent models. For example, the Superior Court of Justice uses the ATHOS system, an AI-based tool created in 2019 whose main role is to identify, before case assignment, appeals that may fall under the 'repetitive appeals' (recursos repetitivos) procedure, a mechanism used to resolve numerous cases involving the same legal issue efficiently. It clusters cases based on semantic similarity and flags those with convergent or divergent judicial positions.
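
The general technique described, grouping filings by semantic similarity, can be sketched with off-the-shelf tools as below. The appeal texts and threshold are invented, and this is not ATHOS's actual model: a minimal sketch using TF-IDF vectors and cosine similarity to flag candidate pairs for human review.

```python
# Illustrative sketch of semantic grouping of appeals (hypothetical texts;
# not ATHOS's implementation): flag pairs whose TF-IDF vectors exceed a
# cosine-similarity threshold as candidates for joint treatment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

appeals = [  # hypothetical one-line case summaries
    "Appeal contesting interest rates applied to a consumer loan contract.",
    "Appeal contesting abusive interest rates in a consumer credit contract.",
    "Appeal concerning unlawful dismissal and severance payment.",
]

vectors = TfidfVectorizer().fit_transform(appeals)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.3  # tuning this trades recall for precision in flagged clusters
for i in range(len(appeals)):
    for j in range(i + 1, len(appeals)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Candidate repetitive-appeals pair: {i} <-> {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```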

Also at the Superior Court of Justice, 'Projeto Sócrates' seeks, through AI, to reduce the time it takes to issue appellate judgments by 25%. The system analyses the appeals received by the Court, drawing on 300,000 resolved cases, and groups similar cases so they can be decided together. According to Justice Ricardo Villas, the aim is to use the system to produce automated draft decisions based on previous judicial decisions, whilst retaining human review before a final decision is taken.

Legal research, analysis and drafting support 

Judges in Brazil issue, on average, nine final judgments per working day; the overall number of rulings rendered daily is estimated at around 100 to 150. Courts in Brazil have therefore explored the potential use of AI systems for legal research, analysis and drafting support, and for decision-making support (see below).

The 'APOIA system' (Assistente Pessoal Operada por Inteligência Artificial) is a generative AI assistant integrating multiple AI tools (including ChatGPT and Gemini) into the national Digital Platform of the Brazilian Judiciary (PDPJ-Br). It supports tasks such as drafting reports, summarising case files, and identifying applicable law. The tool was initially developed by the Federal Regional Court of the Second Region and later incorporated into the PDPJ-Br and made available to Brazilian courts. APOIA is a secure, institutional alternative to ad-hoc private tools, emphasising responsible, ethically governed AI use and data protection. It also includes a collaborative 'prompt bank' for reusing effective instructions across courts.


Brazilian courts have also adopted their own systems to assist with legal research and drafting support: 

ASSIS

At the Tribunal de Justiça do Rio de Janeiro, the Assistente de Inteligência Artificial Generativa (ASSIS) system generates drafts of judicial decisions, sentences, and reports using GPT-4-based generative models. The system tailors output to each judge's writing style and judicial record, drawing from their prior decisions and reports, and also enables judges to ask questions about case documents and access relevant information from electronic case files directly. The system operates securely, with data governance and audit trails, and does not reuse data for AI training.
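
The style-conditioning pattern described can be sketched as retrieving a judge's prior decisions into a drafting prompt, as below. The function, wording, and exemplar texts are hypothetical, the model call itself is omitted, and this is not ASSIS's actual design.

```python
# Hedged sketch of the general pattern described for ASSIS (not its actual
# design): place a judge's prior decisions in the prompt as style exemplars
# before asking a GPT-4-class model for a draft. Names here are invented.
def build_draft_prompt(case_summary: str,
                       prior_decisions: list[str],
                       max_exemplars: int = 2) -> str:
    """Assemble a drafting prompt conditioned on the judge's own writing."""
    exemplars = "\n\n---\n\n".join(prior_decisions[:max_exemplars])
    return (
        "You assist in drafting judicial decisions, matching the style of the exemplars.\n\n"
        f"Style exemplars (this judge's prior decisions):\n{exemplars}\n\n"
        f"Case file summary:\n{case_summary}\n\n"
        "Produce a first draft for human review; do not decide the case."
    )

print(build_draft_prompt("Hypothetical summary of the case file.",
                         ["Prior decision text A.", "Prior decision text B."]))
```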

ATHOS 

As well as assisting with case assignment, the 'ATHOS system' in the Superior Court of Justice, discussed above, highlights key matters such as the overruling of precedents or cases of notable relevance.

Legal Intelligent Advisor (LEIA) and HORUS

'LEIA' is a system used across various Courts of Justice, developed by Softplan to read case files in PDF format, identify cases that potentially match prior legal precedents, and connect them with those precedents. In the Federal District, the HORUS system performs similar functions.

Projeto Sócrates

Used by the Superior Court of Justice, 'Projeto Sócrates' (mentioned above) performs semantic analysis of the procedural documents in a case, identifying court judgments that can serve as precedent for the case under examination.

VICTOR 

At the Supreme Federal Court, the 'VICTOR system' is used by court officials to analyse compliance with the constitutional requirements of admissibility and to accelerate the analysis of cases reaching the Supreme Court, drawing on document-analysis and natural-language-processing tools.

 

Decision-making support 

Brazilian courts are currently exploring the use of machine learning for drafting sentences from historical data, potentially in criminal cases. Examples include a second phase of the SINAPSE platform (in the Rondônia Court of Justice/TJ-RO, sponsored by the National Council of Justice for the development and large-scale availability of AI prototypes) and a system called 'Jerimum/Clara' (in the Rio Grande do Norte Court of Justice). These systems are not currently in use.

In 2024, a Brazilian judge signed a draft decision prepared by a court clerk who, without informing the judge, had used ChatGPT to generate the text. The judge relied on the case law presented by the clerk, which resulted in false facts and fabricated jurisprudence being included in the final judgment. As these false premises formed the basis of the judge's decision, the defendant was wrongly convicted. After a complaint was filed with the Office of the Chief Inspector of the Federal Judiciary of the First Region, the National Council of Justice deemed it necessary to investigate the case, aiming to rectify the situation and prevent similar occurrences in the future. Before the National Council of Justice decided to investigate, a regional inspectorate had decided to archive the investigation, as it did not detect any 'disciplinary infraction' on the part of the judge or his assistant. The result of the final administrative review is confidential.

Defence 

Legal research, analysis and drafting support 

The Brazilian government is developing strategies to integrate AI into Public Defender offices to improve access to justice and efficiency. AI tools, especially those based on large language models, are being explored for tasks such as streamlining case analysis, summarising judicial documents, and drafting procedural documents.

TRAINING

Judges in Brazil are receiving training on the use of AI. Courts are required to offer continuous education for judges and court staff on the risks of automation, algorithmic bias, and critical analysis of AI-generated outcomes. There are mandatory courses in AI use for judges, with specific practical training on tools such as ChatGPT and Claude. For the courts that have already adopted their own institutional system (such as ASSIS), there is also training in those systems. 

Judicial AI training has evolved from introductory courses on the nature of AI and its risks to more hands-on programs that demonstrate how judges can employ generative tools—such as ChatGPT, Claude, or court-developed systems—to draft, summarise, review case materials, and support legal analysis.

Judge Isabela Ferrari, September 2025


REGULATION

As at August 2025, there is no national law governing the use of AI in criminal proceedings, but the Brazilian National Council of Justice (Conselho Nacional de Justiça (CNJ)) has issued detailed guidelines. The Brazilian Bar Association has likewise issued guidance for practitioners. In addition, Brazil is seeking to establish a comprehensive AI regulatory framework inspired by the EU’s AI Act.



Judicial guidelines

The updated 'guidelines for the development, use and governance of artificial intelligence solutions within the judiciary' (unofficial English translation available here) categorise AI solutions as low-risk, high-risk or excessive-risk. Rather than defining each category, the annex of the Resolution lists 'purposes and contexts' that exemplify what falls under each. Courts are required to evaluate the risk level of AI solutions based on this categorisation and on factors such as 'the potential impact on fundamental rights, model complexity, financial sustainability, intended and potential uses, and the amount of sensitive data used'. Excessive-risk AI solutions are prohibited, while high-risk AI solutions are subject to specific requirements and safeguards, including continuous monitoring and an algorithmic impact assessment prior to deployment. The Resolution specifically addresses generative AI solutions, which are subject to additional requirements.

Low-risk solutions: AI solutions that support judicial administration and case management, or assist with legal research, analysis, and drafting, are considered low risk, provided they are overseen by a human and do not replace human judgment and evaluation. This includes, for example, ‘detecting decision-making patterns’ to ensure consistent case law, ‘producing supporting texts to facilitate the drafting of judicial acts’, ‘transcribing audio and video to assist judges’, and ‘anonymising documents’. 

High-risk solutions: The evaluation of evidence, especially where it 'can directly influence judicial decisions'; the 'identification of profiles and behavioural patterns'; the 'investigation, evaluation, classification, and legal characterisation of facts as crimes'; the 'formulation of conclusive judgments' based on the application of legal norms to specific facts; and the 'performance of facial or biometric identification and authentication to monitor behaviour' are generally considered high risk. High-risk solutions must undergo regular auditing and continuous monitoring 'to mitigate potential risks to fundamental rights, privacy, and justice'. Before deploying high-risk models, courts must carry out an algorithmic-impact assessment with public participation 'whenever possible' and make the findings public. They must also implement additional governance measures, including measures 'to mitigate and prevent discriminatory biases'. Courts must 'enable explainability' of AI-generated outcomes 'whenever technically feasible', 'while respecting copyright, intellectual property and industrial and commercial confidentiality', and use training data that is adequate and representative.

Excessive-risk solutions: The Resolution prohibits developing and using AI solutions that pose 'excessive risks to information security, the fundamental rights of citizens, or the independence of judges', including solutions that:

  • ‘do not allow human review of the data used and the results proposed’ or ‘that create an absolute reliance on the proposed outcome by the user, without the possibility of modification or review’;
  • ‘assign value to personality traits, characteristics, or behaviours of individuals or groups’ to ‘evaluate or predict the commission of crimes or the likelihood of recidivism in the reasoning of judicial decisions’; 
  • ‘classify or rank individuals based on their behaviour, social status, or personal traits for the purpose of assessing the plausibility of their rights, legal merits, or testimonies’; and
  • ‘identify or authenticate biometric patterns for emotion recognition’.

Generative AI systems: Generative-AI systems – given special focus in a dedicated chapter – may be used by judges and judiciary staff ‘to support case management or assist decision-making’. While such AI solutions should ‘preferably’ be provided and monitored by the courts, judges may also use commercial solutions they acquired through private subscriptions provided that

  • the judge has undergone specific training;
  • the tool is used only in a 'supportive' role and not for purposes classified as high risk or excessive risk; and
  • the company providing the generative AI system complies with data-protection and intellectual-property standards and does not use the data entered to train the AI system.

It is left to the judge's discretion whether to disclose in judicial decisions that generative AI was used to assist drafting. Judges and court staff who use commercial generative AI solutions must periodically report their use to the local Judicial Oversight Office, which submits consolidated information to the Judiciary National Artificial Intelligence Committee. The Committee, established by the Resolution to oversee and implement the guidelines, is tasked with drafting and updating a best-practice manual on the proper, ethical, and efficient use of generative AI.
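
To make the structure of the risk matrix above concrete, the sketch below encodes the three tiers as a lookup table that a court registry might use to triage proposed AI solutions by declared purpose. The purpose keys paraphrase examples from the Resolution's annex; everything else, including the default-to-high-risk rule for unlisted purposes, is a hypothetical illustration rather than part of the Resolution.

```python
# Hypothetical encoding of the Resolution's three-tier risk matrix as a
# lookup table. Category assignments paraphrase the annex; the triage
# default is an invented convention, not a rule of the Resolution.
from enum import Enum

class Risk(Enum):
    LOW = "low"              # permitted with human oversight
    HIGH = "high"            # permitted with audits and impact assessment
    EXCESSIVE = "excessive"  # prohibited

PURPOSE_RISK = {
    "transcribe_hearings": Risk.LOW,
    "draft_supporting_text": Risk.LOW,
    "evaluate_evidence": Risk.HIGH,
    "facial_biometric_monitoring": Risk.HIGH,
    "predict_recidivism": Risk.EXCESSIVE,
    "emotion_recognition_biometrics": Risk.EXCESSIVE,
}

def triage(purpose: str) -> Risk:
    """Default to HIGH when a purpose is not listed, forcing manual review."""
    return PURPOSE_RISK.get(purpose, Risk.HIGH)

assert triage("predict_recidivism") is Risk.EXCESSIVE
assert triage("some_new_purpose") is Risk.HIGH  # unlisted -> manual review
```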

Key principles and rules

The Resolution sets out principles for the development, deployment and use of AI solutions by the judiciary, without specifying the mechanisms for their implementation:

Respect for fundamental rights

Courts must ensure compatibility with fundamental rights through compatibility assessments and monitoring mechanisms. If there are ‘reports or indications of violations of fundamental rights’, the Brazilian Bar Association, the Public Prosecutor’s Office, and ‘other legitimate entities’ must be granted access to the algorithmic-impact assessment. 

Due process and right to a full defence

Courts must be guided by ‘due process, the right to a full defence, the principle of adversarial proceedings, the physical presence of the judge, and the reasonable duration of proceedings, ensuring full respect for the prerogatives and rights of stakeholders in the justice system’.

Human oversight and risk-based supervision

Human participation and oversight is required at ‘all stages of the development and implementation cycles’ with narrow exceptions. The level of human oversight may also depend on ‘the degree of risk involved’, and ‘the level of automation and impact’. Under no circumstances may the AI system restrict or replace the ‘final authority’ of the judge. 

Transparency, explainability, traceability and auditability

Courts must ensure transparency regarding the use and governance of AI systems and publish reports on ‘the system’s functionality, purposes, the data processed, and supervision mechanisms’. The National Council of Justice, which validates and audits AI solutions, must be notified about them through the ‘Sinapses platform’. The individual use of AI must be automatically recorded in the court’s internal system ‘for statistical, monitoring, and auditing purposes’. Judges are, however, under no obligation to disclose the individual use of AI in judicial decisions. AI models must ‘include explainability mechanisms, whenever technically feasible, ensuring that their decisions and operations are understandable and auditable by judicial operators’. The data used in the development of AI systems must be ‘representative’, ‘secure’, ‘traceable’, ‘auditable’, and ‘preferably from a governmental source’.

Non-discrimination and bias prevention

Courts are required to implement measures to mitigate the risk of discriminatory biases, promote plurality, and ensure ‘that AI systems assist in fair trials’ by ‘minimising the marginalisation of individuals and judgment errors arising from bias’. 

Data protection

The protocol mandates compliance with data protection regulations, requiring anonymisation and encryption, and prohibits the use of judicial data to train commercial AI models.

The Resolution provides that the Judiciary National Artificial Intelligence Committee, which is yet to be constituted, will be responsible for monitoring compliance with the principles and rules established in the Resolution. The Resolution does not prescribe any sanctions or disciplinary measures for non-compliance, but general laws governing the conduct of judges prescribe sanctions for improper or erroneous conduct; in the absence of specific rules, these would apply to, for example, the misuse of AI. However, practitioners noted that monitoring compliance with the Resolution remains challenging.

Brazilian Bar Association 

In November 2024, the Federal Council of the Brazilian Bar Association (OAB) approved a set of recommendations on the use of generative AI in legal practice. These guidelines require lawyers to comply with applicable law and the OAB’s Code of Ethics and Discipline. They emphasise the lawyer’s obligation to ensure confidentiality and privacy as well as human oversight of AI. Before deploying AI, lawyers must inform their clients in writing about its intended use as well as the potential risks and obtain their explicit, informed consent. 

Criminal procedure rules

Even though the Brazilian Criminal Procedure Code does not specifically address AI, its rules on, for example, the admissibility of evidence apply to all evidence, including AI-generated or AI-assisted evidence.

Data protection legislation

The General Data Protection Law (Lei Geral de Proteção de Dados, Law No. 13,709/2018) applies to the processing of personal data by both public and private entities. It may be argued that any AI solution using personal data must comply with the protections established by the law.

Human Rights 

The use of AI in criminal proceedings must be consistent with the procedural guarantees and fundamental rights enshrined in the Brazilian Constitution, including due process, equality before the law, and the rights to a fair trial and to privacy. Fair-trial and privacy guarantees under regional and international human rights treaties to which Brazil is a party, such as articles 8 and 11 of the American Convention on Human Rights, articles 14 and 17 of the International Covenant on Civil and Political Rights, and articles 16 and 40 of the Convention on the Rights of the Child, may also be relevant.

Outlook 

In December 2024, the Federal Senate approved Bill No. 2338/2023 on the development, deployment, and use of AI systems in Brazil. The proposed bill aims to protect fundamental rights, promote responsible innovation, and ensure the implementation of secure and reliable AI systems that benefit people, democracy, and technological and economic development. It proposes a risk-based model, categorising AI systems as 'excessive risk' (prohibited), 'high risk' (permitted under strict conditions), or 'low/minimal risk'. Under the proposed framework, excessive-risk systems include those that assess personal traits, characteristics, or past behaviours to predict crime or recidivism, that enable social scoring, or that perform real-time biometric identification in public spaces (except in narrowly defined circumstances, such as criminal investigations with prior judicial authorisation). Systems used in the administration of justice (excluding those used for administrative tasks), criminal investigations, and public security are classified as high-risk, triggering obligations that include algorithmic-impact assessments, governance measures, transparency, bias mitigation, human oversight, and detailed documentation. The draft bill also enshrines individual rights such as access to information about the use of AI, explanations of AI-driven decisions, human intervention, non-discrimination, privacy, and contestability. The draft remains subject to change: it must still be scrutinised and voted on in the House of Representatives before presidential approval, and there is currently no expected date for the next legislative developments.