Brazilian Choices in Regulating Generative AI
By Isabella Ferrari
This perspective examines how Brazil’s judiciary is responding to an unprecedented litigation crisis, marked by more than 80 million pending cases, through the regulated adoption of artificial intelligence. Focusing on National Council of Justice Resolution 615/2025, it analyses five key regulatory choices shaping AI use in judicial decision-making. These include a dual-track model that prioritises institutionally developed AI systems while permitting regulated private subscriptions; full judicial liability for decisions supported by private AI tools; mandatory AI training as a precondition for use; strict data-protection rules for sensitive information; and a distinction between internal registration of AI use and discretionary public disclosure.
Drawing on the author’s experience as both a federal judge and an academic, the perspective argues that Brazil has adopted a pragmatic, flexible, and accountability-focused regulatory framework. Rather than embracing technological determinism, the model emphasises human responsibility, institutional learning, and judicial independence. Brazil’s approach offers valuable lessons for other jurisdictions seeking to integrate AI into justice systems while safeguarding legitimacy, fairness, and transparency.