Ask any high school teacher whether they've noticed a rise in questionable essay submissions lately. Even the worst freshman English paper has some kind of human flair to it: typos, terrible grammar, Wikipedia citations, and a lack of focus are all de rigueur in the field of adolescent writing. Except now, teachers are receiving a flood of papers from students who previously turned in accidental spoonerisms but suddenly submit work with correct grammar, minimal typos, citations that point to SparkNotes, and a strange, pointed, almost clinical focus. These essays are a little too correct, polished yet empty of both ideas and errors. Or they're completely structurally sound but don't answer the prompt at all. This stinks of the future.
Since ChatGPT's release, teachers have squinted at many a paper with an AI-discerning eye and have adapted, as is required, to the latest form of student laziness. You didn't write this. Turn it in again or fail. It's a simple lesson, probably noted but not internalized. But whatever, it's a high school essay, who cares.
But what happens when the big ol' people in charge get lazy and ask their freshman kid for tips and tricks on cutting corners? It turns out that the kids are hip! And so Dad flips his baseball cap around and uses ChatGPT to scaffold an eviction case for him (allegedly). And that slap-on-the-wrist punishment? Like father, like son, it makes no difference.
Dennis Block, a prominent Los Angeles area attorney whose firm specializes in removing rent-controlled renters from their homes and who considers aiding landlords in evicting their tenants his "patriotic duty," was recently caught citing two completely fake cases. They looked plausible at first glance but have no basis in reality, bearing all the markings of an AI-generated "hallucination." The firm used the fictional cases to hastily pad a filing aimed at evicting a tenant.
The court never got to the bottom of exactly how the filing was prepared. But six legal experts told LAist they could think of a likely explanation: misuse of a generative AI program.
These programs, the best known of which is ChatGPT, have come under increasing scrutiny in the legal profession. While some lawyers see potential for reducing costs to clients, experts agree that failing to check work produced by such tools is risky and unethical.
Law professors and malpractice attorneys who reviewed Block's filing told us — based on the language used — that's likely what happened in this case.
"I think it's virtually certain that the lawyer involved used some kind of [generative] artificial intelligence program to draft the brief," said Russell Korobkin, a professor at UCLA School of Law who recently moderated a panel on AI in the legal profession.
This isn't the first time this kind of situation has come up in the legal world, either. Earlier this year, Steven Schwartz, a New York lawyer, was reprimanded for citing cases completely fabricated by ChatGPT. The lawyer said he believed the program was a "super search engine" and didn't bother to read the cases he had cited. His children had shown him the program. The scandal this caused in the legal and tech worlds didn't seem to have the intended deterrent effect, though, as more recent parallel incidents have received little to no coverage or punishment.
For the Block incident to be reported to the state bar, the sanction would have had to reach $1,000. The court fined him $999. No further investigation is required, and Block & Co. are still allowed to practice law and aid in removing renters from their homes.