Lawyer fabricates brief using ChatGPT, then doubles down when judge wants details of the fake cases it cited


This is rich. An attorney used ChatGPT to find legal cases supporting their arguments in a lawsuit, and ChatGPT invented them. When the court asked for copies of the cited cases, the attorney doubled down and turned to the AI again, which fabricated full case details. Despite describing non-existent cases, these fabricated details were then incorporated into their legal filings. WHAT?! (via PfRC)

Simon Willison's Weblog gives the TL;DR version, based on a New York Times report:

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.

ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air.

When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings.

At some point, they asked ChatGPT to confirm that the cases were real… and ChatGPT said that they were. They included screenshots of this in another filing.

The judge is furious, and many of the parties involved are about to have a very bad time.

The detailed timeline is worth a look.