
Argentina
AT A GLANCE
Argentina is integrating AI across its justice system to address inefficiencies and modernise processes. Law enforcement has adopted predictive analytics, facial recognition, and social media monitoring through the AI Security Unit, though Buenos Aires’ earlier facial recognition system was suspended by the courts. Prosecutors use PROMETEA to automate routine tasks, boosting productivity and reducing trial timelines, while also trialling Clearview AI in child-exploitation cases. Courts are rolling out tools like San Luis’ IURIX Mind for case management, AymurAI for gender-based violence data, and La Pampa’s Genaro for judgment drafting quality, while judges experiment with ChatGPT. Defence lawyers employ DoctIA for caselaw research. Nationwide training initiatives have equipped thousands of judicial staff and lawyers to use AI ethically and effectively.
There is no nationwide legislation governing the use of AI in criminal proceedings in Argentina, but judicial authorities in several provinces, including Jujuy, Río Negro, Santa Fe, San Juan, and San Luis, as well as the City of Buenos Aires, have adopted protocols for the ethical and responsible use of generative AI. All of these protocols strictly prohibit delegating decision-making to AI. The Buenos Aires Bar Association has likewise issued guidance for practitioners. Argentina is also in the process of establishing a comprehensive AI regulatory framework.
USE
Argentina’s judiciary has faced structural challenges in the past, including low case-resolution rates and protracted proceedings. A series of institutional reforms has been introduced in response, with a strong focus on digital transformation.
Law enforcement
Predictive analytics
In July 2024, Argentina created the Artificial Intelligence Applied to Security Unit, made up of police officers and agents from other security forces, which will use ‘machine-learning algorithms to analyse historical crime data to predict future crimes.’ It is also expected to deploy facial recognition software to identify ‘wanted persons’, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
Data review and analysis
Buenos Aires used a live facial recognition system between 2019 and 2022, with security forces in the city comparing live footage against the country’s national fugitive database in order to identify potential criminals. The system worked through video monitoring systems set up throughout the city, including in the three main railway stations and on the underground transport network, which is used by more than 1.3 million passengers per day. Use of the technology was temporarily suspended in April 2022 by a court order, following allegations that the system had been misused to run unauthorised searches. In September 2022, a city court declared the conditions under which the system was then operating to be unconstitutional. This was confirmed by the Court of Appeals of the City of Buenos Aires in 2023.
In March 2024, officials in Argentina took part in a five-day trial of a facial-recognition tool for online child-exploitation cases, developed by the American company Clearview AI. The tool allows law enforcement units to upload images and run them against a database of billions of public photos from the Internet. Together with participants from nine other countries, the tool was used on a total of 2,198 images and 995 videos, hundreds of them from cold cases. In just three days, participants identified 29 offenders and 110 victims. By June 2024, at least 51 victims had been rescued as a result of the effort.
Prosecutors
Charging support
In 2017, the Buenos Aires Prosecutor’s Office launched ‘PROMETEA’, an AI system that automates around 50% of prosecutors’ ordinary workload and produces draft rulings. PROMETEA can ‘create reports, segment documentation on a content basis, download files where the relevant information was found, create indicators with comparative graphics, and automatically provide answers from a given input, among many other tasks, [including] . . . issu[ing] legal judgments and orders.’
The tool was created by ZTZ Tech Group, a private technology company, and was trained on 300,000 scanned court documents from 2016-2017, including 2,000 rulings. Though the app was initially introduced to assist with simple automation tasks, it now includes an intuitive chatbot interface. The app can predict case outcomes with 96% accuracy in under 20 seconds. PROMETEA has increased the productivity of the Office by nearly 300%: it has reduced the time to resolve cases requiring trial from 167 days to 38 days (a 77% reduction), and officials are now able to automatically process around 490 cases per month (up from 130). So far, the application has also generated 33 suggested rulings (all of which have been approved) and is being used in 84 pending cases. Prosecutors are required to review the tool’s findings, but do not need to declare when PROMETEA has been used.
Prosecutors use PROMETEA to automate bureaucratic processes and draft legal opinions, freeing staff to work on the most difficult cases. It is used mainly for traffic-violation cases and to draft basic legal opinions.
Legal research, analysis and drafting support
Prosecutors also use commercial tools such as ChatGPT on an individual basis for analysing caselaw.
Evidence review and analysis
Argentinian prosecutors also took part in the trial of Clearview AI’s facial recognition tool, discussed above.
Courts
Case management
In October 2024, the Superior Tribunal de Justicia of San Luis approved the implementation of a generative AI programme through Agreement No. 202-STJSL-202. This initiative involves two key systems, developed by UniTech:
- IURIX Mind: a cognitive assistant integrating advanced natural language models specifically tailored to the legal context of case files. It enables judges, court staff, officials, and lawyers to interact more efficiently with case files.
- IURIX Cloud Native: a digital platform hosted on Amazon Web Services (AWS) that supports the judiciary’s electronic case management, ensuring security and reliability.
‘AymurAI’ is a pilot project by DataGénero, supported through the A+ Alliance’s Feminist AI Research Network. It aims to use AI to systematically gather and publish judicial data on gender-based violence in criminal courts. The tool is now used in seven of 23 Argentinian provinces, and was co-created with court officials to ensure practical usability. The tool anonymises sensitive information, focusing on cases identified by judges as gender-based violence. The platform seeks to identify patterns of violence that may lead to feminicide, support policy-making, promote transparency, and foster accountability in the judiciary.
Here is an example scenario that illustrates how Yasmín, a staff member from Criminal Court 10, uses AymurAI in her daily tasks: In the morning, Yasmín logs into AymurAI, a desktop application installed on the computers of the criminal court, which are connected to a server in the Consejo de la Magistratura. She can see that all the legal rulings signed by the judge the previous day are available in ODT format, ready to be processed and published. Yasmín loads the legal rulings into AymurAI by selecting the folder where they are stored. AymurAI processes the legal rulings one by one. For each legal ruling, AymurAI identifies a set of entities and suggests anonymizing some of them by replacing them with meaningful labels. For example, the address of the crime scene is replaced with "<ADDRESS>". The anonymized entities are displayed on the screen for Yasmín to review. She can accept the proposed anonymization (in correct cases) or reject them as false positives. Yasmín can also inform AymurAI if any entity was not captured (false negatives), as this could have serious consequences if it becomes public. Once Yasmín completes the review of the first legal ruling, AymurAI presents the second one, and the process continues until all the rulings are processed.
Ivana Feldfeber, AymurAI
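The anonymisation step in the scenario above can be pictured with a short sketch. The following Python snippet is purely illustrative and is not AymurAI’s actual code: it uses a toy regex-based detector (the real tool relies on trained language models and court data) and a console prompt to mimic the accept/reject review of each suggested label.

```python
# Minimal sketch of a human-in-the-loop anonymisation pass, loosely modelled on
# the AymurAI workflow described above. The entity detector here is a toy
# regex pass; the real tool uses trained language models and court data.
import re
from dataclasses import dataclass

@dataclass
class Entity:
    text: str   # the span found in the ruling
    label: str  # placeholder label, e.g. "<ADDRESS>"

def detect_entities(ruling: str) -> list[Entity]:
    """Toy detector: flags street addresses and national ID numbers."""
    entities = []
    for match in re.finditer(r"\b(?:Av\.|Avenida|Calle)\s+[\w\s]+?\s+\d+\b", ruling):
        entities.append(Entity(match.group(), "<ADDRESS>"))
    for match in re.finditer(r"\bDNI\s*\d{7,8}\b", ruling):
        entities.append(Entity(match.group(), "<ID_NUMBER>"))
    return entities

def review_and_anonymise(ruling: str, entities: list[Entity]) -> str:
    """Ask the operator to confirm each suggestion before applying it,
    mirroring the accept/reject step in the scenario above."""
    anonymised = ruling
    for entity in entities:
        answer = input(f"Replace '{entity.text}' with {entity.label}? [y/n] ")
        if answer.strip().lower() == "y":  # accepted suggestion
            anonymised = anonymised.replace(entity.text, entity.label)
        # rejected suggestions (false positives) are simply left untouched
    return anonymised

if __name__ == "__main__":
    ruling = "El hecho ocurrió en Avenida Rivadavia 1234. La víctima, DNI 12345678, declaró..."
    print(review_and_anonymise(ruling, detect_entities(ruling)))
```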
The Superior Tribunal de Justicia (STJ) of Río Negro Province has developed a proprietary AI system specifically for standardised, repetitive proceedings, such as tax foreclosure proceedings, primarily in the cities of Viedma, General Roca, and Cipolletti. The system handles tasks previously carried out by court employees, such as verifying the formal validity of debt certificates, ensuring documentation completeness and consistency, detecting ongoing related legal proceedings, and automatically generating digital case files for judge review. This AI-assisted workflow has significantly boosted court efficiency: cases that previously took around 6.5 business days are now concluded in approximately 2.86 days. As of September 2023, the system had already generated nearly 6,000 rulings in tax foreclosure cases from the three cities. While the AI performs formal controls and automation, judges remain responsible for the legal reasoning.
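The Río Negro system is proprietary and its internals are not public. The sketch below only illustrates, with invented field names and rules, the kind of formal pre-checks described above (signed certificate, complete documentation, no related proceeding already open) before a draft file is passed to the judge.

```python
# Illustrative sketch only: a rule-based pre-check of the kind described for the
# Río Negro tax-foreclosure workflow. Field names and rules are hypothetical;
# the court's actual system is proprietary.
from dataclasses import dataclass, field

@dataclass
class DebtCertificate:
    debtor_id: str
    amount: float
    issue_date: str  # ISO date, e.g. "2023-05-17"
    signed: bool

@dataclass
class Filing:
    certificate: DebtCertificate
    attachments: list[str] = field(default_factory=list)

REQUIRED_ATTACHMENTS = {"power_of_attorney", "proof_of_notice"}

def formal_check(filing: Filing, open_cases: set[str]) -> list[str]:
    """Return a list of observations for the draft file sent to the judge."""
    issues = []
    cert = filing.certificate
    if not cert.signed:
        issues.append("debt certificate is unsigned")
    if cert.amount <= 0:
        issues.append("claimed amount is not positive")
    missing = REQUIRED_ATTACHMENTS - set(filing.attachments)
    if missing:
        issues.append(f"missing attachments: {', '.join(sorted(missing))}")
    if cert.debtor_id in open_cases:
        issues.append("a related proceeding against this debtor is already open")
    return issues

# Example: one filing checked against a register of open proceedings.
filing = Filing(DebtCertificate("20-12345678-3", 150000.0, "2023-05-17", True),
                ["power_of_attorney"])
print(formal_check(filing, open_cases={"20-99999999-1"}))
```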
Legal research, analysis and drafting support
PROMETEA, in use by public prosecutors since 2017, has also been used by the judiciary.
Local courts use AI-based services and tools when issuing judgments to present rulings in an easy-to-read format, allowing readers to understand them more clearly; the tools are not used to decide the cases. For example, a judge in Corrientes province used ChatGPT to draft a paragraph in plain language for a person with a low level of education.
The Secretary of Jurisprudence for the Superior Court of Justice of La Pampa has developed a tool called ‘Genaro’. The tool is accessed by logging into a ChatGPT account and uploading a PDF of a ruling, which it reviews, explaining the strengths and weaknesses of the drafting. It examines whether, for example, the paragraphs are too long or the terms too complex, and also addresses the overall style and clarity of the judgment. Genaro’s responses rely on the Drafting Guide prepared by the Superior Court of Justice of La Pampa. The system cannot detect inadequate citations in the ruling, nor can it shorten the text.
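Genaro itself runs on ChatGPT and the La Pampa Drafting Guide, so its checks are carried out by a language model rather than fixed rules. The snippet below is only a rough, hypothetical illustration of the kinds of findings described (overlong paragraphs, overlong sentences, overly complex terms); the thresholds and term list are invented.

```python
# Hypothetical illustration of the kind of drafting checks attributed to Genaro:
# flag overlong paragraphs and sentences, and count occurrences of terms an
# (assumed) drafting guide marks as overly complex. Thresholds are invented.
COMPLEX_TERMS = {"ut supra", "a quo", "in limine", "sub examine"}
MAX_PARAGRAPH_WORDS = 120
MAX_SENTENCE_WORDS = 40

def review_drafting(ruling_text: str) -> list[str]:
    findings = []
    for i, paragraph in enumerate(p for p in ruling_text.split("\n\n") if p.strip()):
        words = paragraph.split()
        if len(words) > MAX_PARAGRAPH_WORDS:
            findings.append(f"paragraph {i + 1} is too long ({len(words)} words)")
        for sentence in paragraph.split("."):
            if len(sentence.split()) > MAX_SENTENCE_WORDS:
                findings.append(f"paragraph {i + 1} contains an overlong sentence")
        for term in COMPLEX_TERMS:
            if term in paragraph.lower():
                findings.append(f"paragraph {i + 1} uses the complex term '{term}'")
    return findings

print(review_drafting("Que ut supra se señaló, el recurso resulta inadmisible.\n\nSegundo párrafo."))
```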
The Secretary of Jurisprudence for the Superior Court of Justice of La Pampa has also experimented with another small chatbot called ‘Relmo’, a ChatGPT-powered judgment analyst which analyses and summarises rulings for judicial use. Relmo struggles to analyse and summarise long rulings and those with scattered reasoning.
Judges use commercial tools such as ChatGPT on an individual basis for drafting decisions and analysing caselaw.
Defence
Legal research, analysis and drafting support
‘DoctIA’ is a widely used application for legal professionals, which uses AI to search the caselaw of the Supreme Court of Justice of the Argentine Nation (CSJN) and the appellate courts, allegedly without inventing material. The user pastes legal text into DoctIA, which recommends relevant CSJN caselaw to cite and provides a direct link to the original judgment.
Lawyers in Argentina use commercial tools such as ChatGPT on an individual basis for analysing caselaw.
David Mielnik, InteligenciaLegal
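The report does not describe how DoctIA selects the rulings it recommends. The sketch below shows one common way a paste-and-recommend step of this kind could work in principle, ranking an invented mini-corpus of CSJN summaries by TF-IDF similarity to the pasted text; the case names, summaries, and links are placeholders, not DoctIA’s data or method.

```python
# Sketch of a retrieval step of the kind a caselaw-recommendation tool could use.
# DoctIA's actual internals are not public; this simply ranks an (invented)
# mini-corpus of CSJN summaries by TF-IDF cosine similarity to the pasted text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    ("Fallo A (enlace ficticio)", "nulidad de la detención por falta de orden judicial"),
    ("Fallo B (enlace ficticio)", "valoración de prueba pericial informática en juicio oral"),
    ("Fallo C (enlace ficticio)", "prisión preventiva y plazo razonable del proceso"),
]

def recommend(pasted_text: str, top_k: int = 2) -> list[str]:
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([pasted_text] + [summary for _, summary in corpus])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(scores, corpus), key=lambda pair: pair[0], reverse=True)
    return [f"{title} (score {score:.2f})" for score, (title, _) in ranked[:top_k]]

print(recommend("la defensa plantea la nulidad de la detención practicada sin orden"))
```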
TRAINING
As at October 2024, the National Comprehensive Program of AI in Justice has trained approximately 4,500 judicial officials and 6,500 lawyers across the country on the use of AI. The training appears to cover the use of ChatGPT, Gemini, and Microsoft Copilot.
The Federal Court of Criminal Cassation published a compilation of AI training conducted between November and December 2024. The court later issued a document titled Artificial Intelligence Tools for Case Analysis and Legal Writing, which aimed to bring together the content presented over the training sessions.
Training has also been provided provincially. In April 2024, the Ministry of Justice of Buenos Aires launched an ‘Artificial Intelligence Program’, which included training in the Ministry of Justice and the judiciary on ‘ethics in the use of AI’. In San Luis, structured training modules are being delivered to judges and judicial staff, including on how to use ChatGPT. The training is coordinated by the Secretariat of Judicial Information, in collaboration with UniTech, a private firm. In San Juan, a bar association organised an AI bootcamp in mid-2025.
Training has also been provided for Argentina’s specialised systems. When the Buenos Aires Prosecutor’s Office launched PROMETEA in 2017, employees received training on using the system. In San Luis, judges, court staff, and counsel also receive specialised training on IURIX Mind (mentioned above), the generative AI programme adopted in the province’s judiciary.
REGULATION
As at August 2025, there is no nationwide legislation governing the use of AI in court proceedings in Argentina, but judicial authorities in several provinces have adopted protocols addressing the use of generative AI in the judiciary. The Buenos Aires Bar Association has likewise issued guidance for practitioners. Existing general regulation, such as the Criminal Procedure Code and data protection legislation, also applies to the use of AI in criminal proceedings.
Guidelines for practitioners
Judicial guidelines
Courts in a number of provinces have issued protocols on the use of AI. These protocols set out general guidelines and fundamental principles for the ethical and responsible use of generative AI in the administration of justice. All prohibit delegating decision-making to AI tools, underscoring that such tools are intended to strengthen and support the judiciary rather than replace human judgment in legal proceedings.
While these general frameworks and guidelines are very much needed, they are not sufficient and are often too vague to ensure effective implementation. What we need in addition to guidelines, at both the national and regional levels, are operating procedures. For example, guidelines may require anonymization of certain information, an important safeguard, but they don’t specify how to accomplish this responsibly and safely. We need concrete examples and best practices.
In October 2024, the Superior Court of Justice of the province of Río Negro approved the Protocol of Good Practices for the Use of Generative AI, applicable to all judges, officials, and employees. This protocol provides guidance for the responsible, ethical, appropriate, and diligent use of generative AI. It establishes four overarching guidelines and objectives:
- responsible use of AI to prevent risks of hallucinations, bias and lack of transparency;
- human oversight, acknowledging that AI can support but never replace judicial decision-making;
- data protection, prohibiting entering personal or confidential data unless privacy safeguards are guaranteed;
- continuous training of the officials using the AI tools.
It also provides that generative AI should only be used for tasks where the operator has the relevant knowledge and is capable of verifying the results, and it offers guidance on how to create prompts. The protocol is based on the 2023 UK judicial guidance on AI (which was updated in 2025) and the Guidelines for the use of ChatGPT and text generative AI in Justice developed by the University of Buenos Aires IALAB research team in 2023.
In April 2025, the Superior Tribunal of the province of Santa Fe adopted a Protocol of Good Practices for the Use of Generative AI, which is also applicable to all judges, officials, and employees of the Santa Fe court system. The protocol is very similar to the abovementioned Río Negro protocol insofar as it establishes the same overarching guidelines and objectives and provides guidance on creating prompts. It highlights that responsibility for any judicial decision falls on the judges and officials, who cannot use AI to replace judicial analysis.
In October 2024, the Supreme Tribunal in the province of San Juan established the Acceptable Use Protocol for Generative AI (IAGen) via General Agreement No. 102/2024. Its main objectives are to:
- safeguard ethics and impartiality in judicial processes;
- protect sensitive and confidential information;
- guarantee transparency in AI use (documenting uses, limitations, and results);
- improve efficiency in case analysis and decision-making;
- identify and manage risks of errors, bias, or misuse;
- ensure compliance with laws and data protection regulations; and
- promote innovation and staff training without undermining judicial independence.
Access to any AI tool requires prior authorisation, and requests must be justified based on the duties of each officer. Users must also complete mandatory training before using the AI. The protocol also includes prompting guidelines and establishes specific prohibitions on the use of AI (namely for personal use, inappropriate content, data manipulation, and non-authorised access). Users who do not follow the protocol may face disciplinary measures.
The Superior Tribunal of the province of Jujuy issued a protocol on the use of AI in April 2025, stating that its main objectives are to:
- establish a regulatory framework for AI use in the judiciary;
- ensure protection of fundamental rights, privacy, and non-discrimination;
- promote good practices, transparency, and accountability in AI applications; and
- encourage innovation in the judiciary and continuous training of officials.
It establishes the scope of application of AI, including in case management, legal information analysis, decision-making support for judges, citizen assistance (e.g., chatbots and virtual assistants), and drafting and reviewing legal documents. It also provides guidance on generating prompts and sets out measures to promote innovation (e.g., implementing a regulatory ‘sandbox’ and collaborating with universities).
Other provinces have also issued guidance on the use of AI, including the Buenos Aires Council of the Judiciary and the Judiciary of the province of San Luis.
Buenos Aires Bar Association
In July 2025, the lawyers’ Bar Association for the City of Buenos Aires issued two documents providing guidance on the use of AI: the Guide on the Use of AI for Lawyers and the Guiding Criteria for an Ethical and Responsible Use of AI in Legal Practice. The former sets out core principles, including that:
- AI cannot replace human judgment and legal reasoning;
- human oversight is mandatory;
- AI must be used in ways that avoid exposing sensitive case data; and
- lawyers must stay updated and attend training.
The Guide also includes guidelines on prompting.
The Guiding Criteria for an Ethical and Responsible Use of AI in Legal Practice expands on the core principles and prompting guidelines set out in the Guide. It also suggests ‘advanced techniques’, including context-based prompting (providing source material), verifying sources (demanding citations and checking them), explicit instructions against hallucinations, and avoiding ambiguous instructions.
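As an illustration of how those techniques might be combined in practice, the template below encodes context-based prompting, a demand for checkable citations, and an explicit instruction not to invent authorities. The wording is hypothetical and illustrative only; it is not taken from the Bar Association’s documents.

```python
# Hypothetical prompt template reflecting the techniques listed above:
# context-based prompting, a demand for checkable citations, and an explicit
# instruction not to invent authorities. The wording is illustrative only.
PROMPT_TEMPLATE = """You are assisting an Argentine lawyer.
Use ONLY the source material between the markers below; do not rely on memory.
Cite every proposition with the exact paragraph of the source it comes from.
If the sources do not support an answer, say so explicitly instead of guessing.
Do not cite any case, statute, or article that does not appear in the sources.

=== SOURCES ===
{sources}
=== END SOURCES ===

Task: {task}
"""

def build_prompt(sources: str, task: str) -> str:
    return PROMPT_TEMPLATE.format(sources=sources, task=task)

print(build_prompt(
    sources="Fallo X, considerando 5: la requisa sin orden exige urgencia objetiva...",
    task="Summarise the standard for warrantless searches set out in the sources.",
))
```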
Criminal procedure rules
Even though the Criminal Procedure Code does not specifically target AI, its rules on, for example, the admissibility of evidence could also apply to AI-generated or AI-assisted evidence. Law No. 27.063, which regulates federal criminal procedure, provides that all evidence must be lawfully obtained. Any evidence, including AI-generated outputs, may be challenged if the methods used are not transparent or the evidence was not collected lawfully.
Data protection legislation
The Personal Data Protection Law No. 25.326 of 2000, which applies to the processing of personal data by both public and private entities, naturally makes no mention of AI. However, it may be argued that any AI system that processes personal data must comply with the protections established by the law. In 2023, the Executive Branch submitted a bill to update Law No. 25.326. The bill was drafted with AI in mind, drawing on UNESCO’s Recommendation on the Ethics of Artificial Intelligence. Article 31 of the draft bill, for example, states that individuals have the right not to be subject to decisions that produce legal effects or significantly affect them when such decisions are based solely or partially on automated processing. It also obliges the controller to provide clear and adequate information about the criteria and procedures used.
Additionally, Resolution No. 161/2023 of the Agency for Access to Public Information created the ‘Program for Transparency and Data Protection in the use of AI’, which was included as an annex to the resolution. The general objectives of the program are to ensure that AI development and use in both the public and private sectors respect transparency and personal data protection rights, and to anticipate the social, economic, labour, cultural, and environmental impacts of AI. It outlines three program components:
- an AI observatory to monitor the use of AI (e.g., mapping stakeholders and tracking regional and global regulatory trends);
- a multidisciplinary advisory council to generate AI policies; and
- guidelines, training programs, and campaigns regarding the use of AI.
Human rights
Provisions of Argentina’s National Constitution may serve as safeguards against certain uses of AI in criminal proceedings. When deciding to suspend the City of Buenos Aires government’s ‘Facial Recognition System of Fugitives programme’, the Court of Appeals of the City of Buenos Aires discussed several constitutional provisions, including the right to privacy (article 19 of the National Constitution), to the presumption of innocence (article 18) and to non-discrimination (article 16). Fair trial and privacy guarantees under regional and international human rights treaties to which Argentina is a party, such as articles 8 and 11 of the American Convention on Human Rights, articles 14 and 17 of the International Covenant on Civil and Political Rights or articles 16 and 40 of the Convention on the Rights of the Child, may also be relevant. Article 75(22) of the National Constitution grants human rights treaties constitutional hierarchy.
Outlook
In 2025, the Buenos Aires Supreme Court of Justice entered into a cooperation agreement with UNESCO to promote the ethical use of AI in the province’s judicial system, including through training programs for judges, prosecutors, and judicial operators; the development of digital tools to enhance access to public information; and strategies for improving judicial governance.
Several dozen AI-related draft bills have also been introduced. Two bills that are well advanced, and that if enacted may regulate the use of AI in criminal proceedings, are the bill to update Personal Data Protection Law No. 25.326 and Bill 3003-D-2024 on the responsible use of AI. The latter proposes, among other things, banning “unacceptable-risk” AI (for example, systems that undermine human dignity or due process) and requiring impact assessments, traceability measures, and human oversight for high-risk AI systems used in public services. However, the Bill would permit, in exceptional circumstances, for a limited time, and with the prior order of a judge, the use of AI systems for real-time biometric identification of potential criminals. Both bills remain under deliberation, and there is no confirmed timeframe for when they could come into force if approved.
CASES
In recent decisions, Argentine courts have examined the use of AI by a lawyer who cited hallucinated cases, as well as the deployment of an AI-driven facial-recognition program by the City of Buenos Aires government.
In a 2023 ruling, the Court of Appeals of the City of Buenos Aires declared the implementation and deployment of the government’s Facial Recognition System of Fugitives program, which uses AI, unconstitutional. The Court held that the government could not implement the program until a proper oversight body was established, the necessary investigations were conducted to determine whether the system produced differentiated impacts based on individuals’ personal characteristics, and the system, along with information about its operations, was made publicly accessible. The Court did not find the system itself unconstitutional; it held that the plaintiff had failed to demonstrate that the controls would not be effective in safeguarding individuals’ rights.
In August 2025, the Court of Appeals of Santa Fe, City of Rosario, admonished counsel for submitting a brief with non-existent jurisprudence generated by AI. Although the lawyer claimed to have acted in good faith, the court held that this did not excuse the professional obligation to verify legal sources. The court issued a formal admonition (llamado de atención) to the lawyer and gave notice to the Rosario Bar Association so that it may ‘take the appropriate measures’.