A recent eDiscovery Today article reported that cases involving hallucinated citations and excerpts are not just continuing to occur, which would be bad enough, but are increasing at a rapid rate. Readers of this site will be familiar with the problem and its ethical implications, so I’ll jump right into the three reasons this trend really worries me.
Reason to worry no. 1: Traditional legal education is failing to reach large numbers of lawyers about something as important and newsworthy as hallucinated case citations and excerpts.
Hallucinated citations exploded into view with the Mata v. Avianca case in 2023. The problem has received significant publicity since then, with widespread coverage in legal publications. As the “legal ethics and AI” issue du jour, it came up in just about every CLE and webinar I attended over the last two years. The story has even been picked up by mainstream news outlets.
Judges have done their part to raise public awareness. There have been “teaching moment” opinions and “example setting” sanctions. Some courts have adopted local rules mandating AI usage disclosures.
Last month I was making small talk in a waiting room and was asked about hallucinated cites in legal filings. Even retired English teachers know you shouldn’t use ChatGPT for legal research, yet there were hundreds of published decisions involving hallucinated content last year.
Many lawyers haven’t learned the entry-level basics of using consumer genAI. That’s despite an enormous effort to create risk awareness about a specific, obvious problem like hallucinations. The inescapable conclusion is that traditional means of legal education are falling short.
Meanwhile advanced AI tools are rapidly being incorporated into legal practice and discoverable ESI is being created using AI. I don’t even want to think about the implications for technology competence.
Reason to worry no. 2: The legal system does not have the capacity to absorb the burden of mistrust.
Legal research is the most fundamental of fundamental skills for a lawyer. It’s basically assumed that a case stands for the proposition it’s cited for – within the admittedly elastic limits of advocacy – and that it’s good law. Of course, mistakes, unsound arguments and envelope pushing are always with us, and the lawyer on the other side has a complementary responsibility to call out overreach and overturned holdings. Nonetheless, the starting position for counsel and court is trust.
Or at least it was. Until three years ago the idea that a lawyer would cite a nonexistent case or fabricate excerpts was almost unimaginable. Now it’s its own category of sanctions case.
I realize that in absolute numbers this is happening in only a tiny fraction of all the cases winding their way through the nation’s courts. The reason I’m so concerned is that hallucinated content in legal filings has consequences extending far beyond individual breaches of the ethical duties of competence, candor and fairness, serious as those are.
Legal services are expensive. State courts are underfunded, understaffed and have inconsistent access to technology. Federal court caseloads are high and there are vacancies on the bench.
Verifying that cited content isn’t hallucinated is taking time away from other important tasks. Some hallucinated content will inevitably be missed, prejudicing the just adjudication of disputes. Motion practice over this issue is an unacceptable burden on public and private resources. THIS HAS GOT TO STOP.
Reason to worry no. 3: We’re collectively moving too slowly in responding to the use of genAI in legal work.
My third concern is not as anxiety-inducing, but I decided to include it because it’s something all litigators need to pay attention to.
Some of the sanctions cases have arisen from “it could happen here” situations where supervising lawyers didn’t catch errors by junior lawyers or staff. Assuming it hasn’t already happened, there will be a sanctions case where a lawyer copied an old motion or brief containing a hallucination to use as a first draft and then didn’t carefully vet the cites.
As a partner in a law firm, these scenarios concern me, as they should everyone similarly situated. On the other hand, the root causes – inexperienced staff, over-reliance on templates – are nothing new. The immediate action item is to incorporate safeguards around AI usage into existing workflows. I’m mostly worried that this is happening too slowly, a concern exacerbated by reasons to worry nos. 1 and 2 above.
Strange as it may sound, I’m normally a glass-half-full kind of person. If you think I’m exaggerating the problem or being too pessimistic about the outlook, please share your reasons. I would truly much rather not be worrying about the negative trend in hallucination cases.
Image created using Microsoft Designer, using the term “robot lawyer worried”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.