Judge fines lawyers $31k after they use AI to generate brief with made-up citations

Two California law firms surreptitiously used AI to generate a 10-page supplemental brief, but the judge, intrigued by unfamiliar cases cited within it, looked them up only to find that some did not exist. After asking for clarification, Judge Michael Wilner was treated to "more made-up citations and quotations beyond the two initial errors," and so it was time to issue sanctions: "this was a collective debacle," he wrote.

Directly put, Plaintiffs' use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn't exist. That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut.

Emma Roth reports at The Verge that it's not a unique occurrence.

This isn't the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for "a super-charged search engine" rather than an AI chatbot. A judge also found that lawyers suing a Colombian airline included a slew of phony cases generated by ChatGPT in their brief.

The AI tools in this case were Google Gemini and Westlaw Precision's CoCounsel AI service. Westlaw has a citation-checking service, too. I guess that costs extra.

Previously:
Top eviction lawyer barely sanctioned for citing fake cases possibly made by AI
'Dragon lawyer' annoys judge with cartoon dragon watermark
Lawyer fabricates brief using ChatGPT, then doubles down when judge wants details of the fake cases it cited