As the number of AI hallucination cases approaches 1,000, it looks bad. But hallucinations by US lawyers aren’t as bad as you think.
There has been a lot of hand-wringing about how court filings with AI hallucinations are getting out of hand and how lawyers (especially in the US) need to take more responsibility for verifying the accuracy of cited information instead of relying on generative AI. This is certainly true. But US lawyers are far from the only problem.
In my coverage of the case Fletcher v. Experian Information Solutions & Bridgecrest Credit Company – suggested by Judge Andrew Peck (ret.) – in last Friday’s Kitchen Sink, I mentioned that the case has two notable components. The first, which I discussed, is that it’s a Fifth Circuit decision that discusses the Fifth Circuit’s proposed rule (which the court decided not to implement) that would have required counsel and pro se litigants to certify either: (a) that no generative AI program was used to prepare any submitted document; or (b) if an AI program was used, that a human checked the AI-generated text for accuracy.
The other notable component is what the Fifth Circuit said on page 4 of the February 18th order (referencing Damien Charlotin’s database), as follows:
“As of the date of this order, Charlotin has identified 239 cases of hallucination by lawyers in the United States.”
What? That can’t be right! Can it?
Yes, it can.
If you “Click to Download CSV” on Charlotin’s site, load the CSV file into Excel, sort the list by country and party(ies), and count the total number of rows involving lawyers from the USA, you get 257 rows – at least I did last night. That’s 18 more than the Fifth Circuit counted, which – almost two weeks later – seems logical.
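If you prefer scripting to Excel, here’s a minimal sketch of the same count in Python with pandas. The file name and the column names (“Country”, “Party”) are assumptions on my part about how the exported CSV is labeled – check the actual headers in your download and adjust accordingly.

```python
import pandas as pd

# Load the CSV exported from Charlotin's database.
# File name and column names ("Country", "Party") are assumptions;
# adjust them to match the headers in the actual download.
df = pd.read_csv("ai_hallucination_cases.csv")

# Keep rows where the country is the USA and the responsible party is a lawyer.
us_lawyer_rows = df[
    df["Country"].str.contains("USA", case=False, na=False)
    & df["Party"].str.contains("Lawyer", case=False, na=False)
]

print(f"Total cases in list: {len(df)}")
print(f"US lawyer cases:     {len(us_lawyer_rows)}")
```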
So, out of 982 total cases in the list as of last night, US lawyers are solely responsible for only 257 of those cases. If you add the handful where they’re partially responsible, that gets you to about 263.
By comparison, US pro se parties are solely responsible for 412 cases.
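To put those counts in perspective, a quick back-of-the-envelope calculation using the figures above (the counts are from my tally of last night’s CSV, so treat them as a snapshot) shows US lawyers account for roughly a quarter of the total, while US pro se parties account for over 40%:

```python
total = 982        # total cases in the list as of last night
us_lawyers = 257   # cases solely attributable to US lawyers
us_pro_se = 412    # cases solely attributable to US pro se parties

print(f"US lawyers:        {us_lawyers / total:.1%}")  # ~26.2%
print(f"US pro se parties: {us_pro_se / total:.1%}")   # ~42.0%
```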
I’m certainly not saying that 257 (and counting) cases with AI hallucinations is good. Nor am I saying that pro se parties can be excused for their filings with AI hallucinations. But it is notable that most people look at that number of nearly 1,000 cases and call it a “lawyer ethics problem”. It’s not – at least not solely – on lawyers. While they’re bad, hallucinations by US lawyers aren’t as bad as you think.
So, what do you think? Are you surprised that hallucinations by US lawyers aren’t as bad as you thought? Does that make you feel any better about the AI hallucination problem among lawyers? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.
Image created using Google Gemini, using the term “a few robot lawyers slipping on banana peels in the courtroom”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Doug:
I don’t know how bad of a problem those “hallucinations” were, and I don’t fully understand where the count of pro se litigant cases came from. However, I have the following comments:
1. We all (bar certified or pro se) need to do quality control.
2. The tools are getting better. I see this on a monthly, if not weekly, basis.
3. I read some of the pro se pleadings in Warner v. Gilbarco, and read her deposition. If she never had formal training, she certainly sounds like she has.
What is the point?
Flooding the system with hallucinations is not a good thing, but giving more people access to justice with these tools is. The word of the year might not be “hallucination.” It might be “balance.”
Doug-
I agree with Dan Regard and you.
I have actually been reading Charlotin’s data correctly all along, and the numbers were concerning but not alarming to me. We do not need lawyers backing away from using powerful AI products. What we need is AI literacy and lawyers leaning into these tools with understanding and competence.
There has always been a lag or gap in lawyers and judges gaining technology savvy as new tools and information culture evolved. But the shift to ESI and digital occurred over many years. The AI wave came upon us just three years ago. Plus, lawyers are busier than ever, deadlines loom, and the time available for CLE and training is small and full of other priorities. The legal profession and our educational institutions and components need to make AI literacy a priority and get everyone on board.
Keep up the good work on the awareness front. Your messaging is on point.