When a lawyer used fake, ChatGPT-generated case citations in a filing, it set off a “Bumble-Fly effect” of responsible AI efforts.
You’ve probably heard of the “butterfly effect,” coined by MIT meteorologist Edward Lorenz, who suggested decades ago that the flap of a butterfly’s wings could cause a tornado weeks later. The “butterfly effect” has come to serve as an analogy for how small events can eventually have large impacts.
Which leads me to the Mata v. Avianca Airlines case referenced above. In a filing, the attorney representing the plaintiff submitted at least six “bogus judicial decisions with bogus quotes and bogus internal citations,” as Judge Kevin Castel of the Southern District of New York put it in his order. Pretty much everyone I talk to (especially in legal or technology circles) knows about this case.
As a result, the filing in this case has led to a “Bumble-Fly effect” of responsible AI efforts – and reactions to some of those efforts.
First came the courts: at least four US courts (including this one) and some Canadian courts either issued or considered standing orders that require litigants to disclose their use of generative AI (or, in some cases, any AI) and submit certifications regarding their efforts to verify the accuracy of the AI’s output.
That led to an article titled Is Disclosure and Certification of the Use of Generative AI Really Necessary? by Maura R. Grossman, Paul W. Grimm and Daniel G. Brown, which was made available for download last week and covered here earlier this week. In it, the authors referred to the Avianca filing as “The Shot Heard ’Round the World” and characterized the courts’ reaction (or perhaps overreaction, as the authors suggested) as “Bringing a Cannon to a Sword Fight”.
By the way, ComplexDiscovery has a great summary of that article here.
The Avianca filing was also referenced right off the bat in a proposal, issued by an MIT Task Force (covered here), of seven draft principles that establish a lawyer’s duties when using AI for legal work, and it was mentioned prominently in an excellent open forum conducted by the task force yesterday afternoon.
I suspect there are other examples of references to the Avianca filing in responsible AI efforts, but these are the ones that come most readily to mind. Hopefully, eDiscovery providers keep responsible AI in mind as they develop and implement their own generative AI capabilities.
This one “bumble” by a lawyer who didn’t check the case citations provided by ChatGPT (other than asking ChatGPT if they were real cases) has led to a “Bumble-Fly effect” of responsible AI efforts (good and not so good). Regardless, this small event may have a larger impact on best practices for responsible AI than the efforts of many experts and organizations combined. Sometimes, it takes a high-profile failure to generate widespread success.
So, what do you think? Can you think of any other responsible AI efforts inspired by the “Bumble-Fly effect” response to the Avianca filing? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.