Pro Se Party Hallucinated Filings Are Growing. So, Sue the Tool, Right?: Artificial Intelligence Trends

As I noted last week, pro se party hallucinated filings are growing. How can we slow that down? This case may lead to a possible answer.

As discussed in this article by Project Counsel Media (Here’s an idea: let’s sue OpenAI for tortious interference and the unlicensed practice of law, written by Casey Newton and Alexander Dumont and available here), Nippon Life Insurance Co. of America sued OpenAI in federal court in March, alleging that its ChatGPT software was advising pro se plaintiff Graciela Dela Torre and writing her briefs. Nippon’s complaint claimed the bot had engaged in tortious interference with its prior settlement with Dela Torre and “the unlicensed practice of law.”

This came after Dela Torre docketed about four dozen filings in just over a year, attempting to reopen her previously settled case against her insurance company over carpal tunnel syndrome and tennis elbow claims. She simultaneously filed a new suit to revive her case against Nippon.


The landmark lawsuit . . . Nippon Life Insurance Company of America v. OpenAI . . . was filed in the U.S. District Court for the Northern District of Illinois. It is among the first to hold an AI developer liable for legal harm caused by its chatbot to a business.

As the authors note, the case is still in its early stages, but it highlights many anxieties and hopes about pro se litigants using generative AI to churn out legal arguments.

Per the authors, here are the two key claims of the Nippon complaint:

  • Abuse of process: The complaint claimed that the individual filed multiple frivolous motions and other documents “with no legitimate or proper purpose.” Paragraph 119 of the complaint claimed that OpenAI, through ChatGPT, “aided and abetted” the individual’s abuse of process by providing the individual “with legal advice, legal analysis and legal research, as well as by assisting the individual in the drafting and preparation of her frivolous motions and requests for judicial notice.”
  • Unlicensed practice of law: Paragraph 123 of the complaint stated that OpenAI, through ChatGPT, “provides legal advice, legal analysis, legal research and can draft legal documents and papers for submission to a Court . . . to any user who requests them,” including the individual. The complaint pointed out that ChatGPT is not licensed to practice law in any state in the United States.

The authors also state that last October, OpenAI changed its terms of use to prohibit users from using ChatGPT for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”. {Side note: that language is actually in the Usage Policies linked to from the Terms of Use, FWIW.}


Regardless of where it’s located, we all know that’s going to stop pro se parties from using ChatGPT to create filings with hallucinations, right? Because, of course, everybody reads the Terms of Use and Usage Policies. 🤣

Clearly not, as hallucinated filings from pro se parties continue to rise.

For this case, it doesn’t matter, as Dela Torre’s filings occurred before OpenAI updated the Usage Policies. The change may help protect OpenAI in future cases, but not this one.

One of the things that Nippon wants in its case against OpenAI is a permanent injunction against ChatGPT providing legal advice to individuals. If it gets that, it could significantly reduce the number of pro se party hallucinated filings.

Of course, as the authors note, there could be access to justice concerns here. On the flip side, if it’s bad legal help, maybe the access to justice concerns aren’t as great.

Casey and Alexander go into much more depth on the issues, so check out their article here.

So, what do you think? Do you think a judgment against OpenAI could significantly reduce the number of pro se party hallucinated filings? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using DALL-E 3, using the term “robot reacting in shock to receiving a legal filing from a robot process server”.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
