Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Judge Used ChatGPT in a Court Opinion

February 8, 2023

VICE reported on February 3 that a judge in Colombia used ChatGPT in a court opinion. OK, so it’s not in the U.S., but isn’t it only a matter of time?

Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023.

“The arguments for this decision will be determined in line with the use of artificial intelligence (AI),” Garcia wrote in the decision, which was translated from Spanish. “Accordingly, we entered parts of the legal questions posed in these proceedings.”

“The purpose of including these AI-produced texts is in no way to replace the judge’s decision,” he added. “What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI.”

The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included “Is an autistic minor exonerated from paying fees for their therapies?” and “Has the jurisprudence of the constitutional court made favorable decisions in similar cases?”

Garcia included the chatbot’s full responses in the decision, apparently the first time a judge has admitted to doing so. He also included his own insights into applicable legal precedents and said the AI was used to “extend the arguments of the adopted decision.” After detailing the exchanges with the AI, the judge adopted its responses, along with his own legal arguments, as grounds for the decision.

Colombian law does not forbid the use of AI in court decisions, but systems like ChatGPT are known for giving answers that are biased, discriminatory, or simply wrong. This is because the language model has no actual “understanding” of the text; it merely synthesizes sentences based on probabilities learned from the millions of examples used to train it.
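To make that point concrete, here is a purely illustrative toy sketch (not how ChatGPT actually works, and the word probabilities are invented for the example): a program that strings words together based only on how likely each next word is, with no check on whether the resulting sentence is true.

```python
# Toy illustration: picking each next word by probability alone,
# with no understanding of whether the output is accurate.
import random

# Hypothetical, made-up probabilities standing in for patterns
# a model might learn from its training text.
next_word_probs = {
    "the":   {"court": 0.5, "minor": 0.3, "ruling": 0.2},
    "court": {"ruled": 0.6, "held": 0.4},
    "ruled": {"favorably": 0.7, "against": 0.3},
}

def generate(start, length=4):
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court ruled favorably" -- fluent, but never fact-checked
```

The output can sound authoritative while being unverified, which is exactly why a judge corroborating the AI’s answers matters.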

ChatGPT’s creators, OpenAI, have implemented filters to eliminate some of the more problematic responses. Nonetheless, developers warn that the tool still has significant limitations and should not be used for consequential decision-making.

While the case is apparently the first time a judge has admitted to using an AI text generator like ChatGPT, some courts have, controversially, begun using automated decision-making tools in determining sentencing or whether criminal defendants are released on bail. The use of these systems in courts has been heavily criticized by AI ethicists, who say that they regularly reinforce racist and sexist stereotypes and amplify pre-existing forms of inequality.

Although the Colombian court filing indicates that the AI was primarily used to speed up drafting the decision, and that its responses were fact-checked, it is likely a sign that we will see more AI used in drafting court opinions.

I await the first story about AI being used in a U.S. court opinion. I suspect the wait will not be long.

Sharon D. Nelson, Esq., President
Sensei Enterprises, Inc.
3975 University Drive, Suite 225
Fairfax, VA 22030
Email:   Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson