Law and Philosophy fellow considers the potential implications of AI in courtrooms

Amin Ebrahimi Afrouzi outdoors at UCLA

Sean Brenner/UCLA Humanities

Amin Ebrahimi Afrouzi explores the potential implications of AI taking a greater role in jurisprudence and in government more broadly.

Ashna Madni | January 12, 2026

Imagine if civil court cases were decided by artificial intelligence instead of human judges. Some legal professionals say that prospect could become reality in the not-too-distant future.

Amin Ebrahimi Afrouzi, a UCLA law and philosophy postdoctoral fellow, is researching the philosophical questions that could arise from such a scenario. For example, what if AI judges could routinely arrive at the same decisions as human judges? Would there be any effective difference between judgments rendered by AI and humans? Would legal decisions carry different ethical weight if they were AI-generated?

The Law and Philosophy Program is a collaboration between the philosophy department and UCLA School of Law. In addition to interdisciplinary degrees and specializations for philosophy doctoral students and law students, the program offers one- to two-year research postdoctoral fellowships; Afrouzi is one of two fellows in the program currently.

Afrouzi, whose research interests span law, philosophy and technology, said today’s large language models, like ChatGPT, “could perhaps one day accurately predict the outcome of court cases just by doing statistics.” But using that type of technology in court would mean that judgments were being rendered without anyone considering the actual merits of each case.

AI’s limitations

In a 2024 paper published in the Canadian Journal of Law & Jurisprudence, Afrouzi argued that the reason human judgment cannot be replaced by AI is that AI lacks “normative rationale,” meaning reasoning based on justification.

“Any desirable kind of law has to not only yield the correct outcome, but also be based on justificatory reasons,” he said in an interview. “And statistical AI just can’t do that. It’s not within its capacity, even though marketing for AI suggests otherwise.”

Beyond that, Afrouzi wonders about the Pandora’s box that AI in the courtroom would open. “We’d risk losing our ability and aspiration as a society to self-govern and exercise political and legal virtues like impartiality and fairness,” he said.

Afrouzi has considered similar questions about the roles of humans and artificial intelligence in government more broadly. In a 2025 opinion piece in the journal AI & Society, he argued that for humans to thrive, we must participate in our own governance. It would be a grave mistake for humanity, he wrote, to automate tasks that contribute to human flourishing, political participation among them, even if AI were capable of performing them.

“Many of us could benefit from more physical activity. But you wouldn’t send Siri to the gym, would you, even if one day it could squat better,” he wrote. “AI may someday be able to govern us just as well. But it won’t get us in shape.”

Seana Shiffrin, faculty co-director of the Law and Philosophy Program, professor of philosophy and the Pete Kameron Professor of Law and Social Justice, said Afrouzi’s work is both timely and important.

“The questions and points he is making are absolutely central, speaking to the intrinsic unsuitability of the use of AI to replace human judges,” Shiffrin said.

Afrouzi learned computer science as a child and started teaching the subject at a leading private college in Tehran at the age of 16. After earning a bachelor’s degree in rhetoric and two master’s degrees in ancient philosophy, he went on to earn a J.D. and Ph.D. in legal philosophy from UC Berkeley. Before coming to UCLA in 2024, he was a resident fellow at Yale Law School’s Information Society Project, where he remains a visiting fellow, focusing on the regulation of data-driven technologies.

That’s not the only way in which Afrouzi has kept his interest in computer science alive. He holds several patents for AI and robotics technologies. And he’s a co-inventor of a technology called Collaborative AI, which enables AI agents to identify one another and collaborate to perform complex tasks, not only in cyberspace but also in the physical world. One example is the Schematic Parking System, which uses AI to maximize the use of space in parking lots.

Inspiring new approaches

At UCLA, Afrouzi has taught an introductory lecture course on legal philosophy and a seminar on the nature of law and legal precedent. His research as a law and philosophy fellow centers on jurisprudence, the nature of law’s normativity, legal interpretation and justice in political procedure.

Hillah Greenberg, a fourth-year philosophy and pre-law major in Afrouzi’s seminar, said the class has shaped her research on AI governance and privacy law.

“His work shows me that it is possible to approach these issues in both a legal and philosophically cogent manner,” she said.

And Mark Pampanin, a third-year law student, said Afrouzi’s teaching has given him entirely new ways to approach the law.

“Amin’s teaching encouraged me to think critically and concretely about what the law is for and what legal opinions and statutes are actually doing as reason-giving forces in a noisy and complicated human society,” Pampanin said. “His research cracked open a whole new line of thinking for me about what law is and how it shapes and responds to normative inquiry.”

At a time when much of academia is panicking about the impact of AI on higher education, Afrouzi said the technology’s rapid adoption only reinforces the importance of humanists and the humanities.

“The rise of AI itself reveals the importance of the humanities more than anything else,” he said. “People shouldn’t need an expert to tell them what sort of society they’d want to live in. They should choose that for themselves. The humanities’ promise is that people know how to make those choices.”