It was bound to happen, and you may have already heard about it: a lawyer used fake, ChatGPT-generated case citations in a court filing. It did not go well for him.
According to CNN (Lawyer apologizes for fake court citations from ChatGPT, written by Ramishah Maruf and available here), Roberto Mata sued Avianca airlines for injuries he says he sustained from a serving cart during a 2019 flight, claiming negligence by an employee. Steven Schwartz, an attorney with Levidow, Levidow & Oberman who has been licensed in New York for over three decades, handled Mata’s representation.
But at least six of the cases Schwartz submitted as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York in this order.
You can guess where this is going, right? Yes, the lawyer used fake, ChatGPT-generated case citations.
The plaintiff’s submission led off with a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019). Plaintiff’s counsel, in response to the Court’s Order, filed a copy of the decision, or at least an excerpt therefrom.
Problem #1: The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to the Court’s inquiry, confirmed that there has been no such case before the Eleventh Circuit with a party named Vargese or Varghese at any time since 2010, i.e., the commencement of that Court’s present ECF system.
Problem #2: The bogus “Varghese” decision contained internal citations to, and quotes from, four other cases that, in turn, were non-existent.
Problem #3: There were five other decisions cited (Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines) that, according to the Court, “contain similar deficiencies and appear to be fake as well”.
Schwartz, in an affidavit, said that he had never used ChatGPT as a legal research source prior to this case and, therefore, “was unaware of the possibility that its content could be false.” He accepted responsibility for not confirming the chatbot’s sources.
Well, he “sort of” tried to confirm them…by asking ChatGPT.
“is varghese a real case,” Schwartz asked the chatbot.
“Yes,” ChatGPT doubled down: it “is a real case.”
Schwartz then asked for its source. The chatbot again claimed the false case was real.
“I apologize for the confusion earlier,” ChatGPT replied. “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”
When Schwartz asked the chatbot if any other cases were fake, ChatGPT replied the other cases “are real” and could be found on “reputable legal databases.”
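For what it’s worth, the way to check a citation is against an authoritative source, not the chatbot that generated it. Here’s a minimal sketch (my illustration, not anything from the case) using CourtListener’s free search API; the endpoint and field names here are assumptions based on its public REST API, so verify them against the current docs before relying on this:

```python
# Minimal sketch: look up a citation in CourtListener's free search API
# instead of asking the chatbot that produced it. The endpoint, parameters,
# and "caseName" field are assumptions based on CourtListener's public
# REST API docs -- confirm them before relying on this.
import requests

def find_citation(citation: str) -> list[str]:
    """Return the case names CourtListener associates with a citation."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{citation}"', "type": "o"},  # "o" = opinions
        timeout=30,
    )
    resp.raise_for_status()
    return [r.get("caseName", "") for r in resp.json().get("results", [])]

# The citation from the brief -- even if the reporter cite resolves to
# *something*, the returned case name has to match the cited case, too:
print(find_citation("925 F.3d 1339"))
```

And even a hit in a real database only gets you halfway; you still have to read the opinion itself, which is exactly the step that got skipped here.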
Schwartz is now facing a sanctions hearing on June 8.
I’ve already seen another case where the Court scheduled an Order to Show Cause Re: Sanctions for another lawyer who provided citations to cases that didn’t appear to exist. I haven’t seen the results of that hearing yet, but I suspect a similar outcome.
So, what do you think? Are you surprised that a lawyer used fake, ChatGPT-generated case citations? Or are you surprised it took this long? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.