Justice in a Generative AI World

Pursuit of justice in a generative AI world may be more complex than ever, but a forthcoming paper discusses issues you need to consider.

The paper (The GPTJudge: Justice in a Generative AI World), written by Maura R. Grossman, Paul W. Grimm, Daniel G. Brown, and Molly (Yiming) Xu, is scheduled for publication in Vol. 23, Iss. 1 of the Duke Law & Technology Review in October and is shared here with permission. The Abstract is as follows:

“Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world.”

Perhaps my favorite section is the first one after the Introduction titled Coming Soon to a Court Near You, which describes four different lawsuit scenarios involving generative AI:

  1. A pre-law student who sues her university because it determined that her use of ChatGPT to write a paper was cheating (even though the rules only prohibit help from another person, and another student used spell check and Grammarly without penalty).
  2. A potential copyright infringement of artwork from an app that integrates DALL-E 2.
  3. An elderly couple who are scammed out of $12,000 through the use of Murf.AI, an AI voice-cloning tool used to convince them their grandson is in trouble.
  4. Bad medical advice for a sick baby, received from a search engine augmented with a chatbot feature powered by a large language model (“LLM”), which delays medical treatment and leads to potential long-term cognitive disability for the child.

These are examples of the types of scenarios that we will likely see develop (if they haven’t already) as GenAI continues to be more widely adopted and the capabilities continue to advance.

The most comprehensive section is Some Issues for Judges to Ponder, which makes up the majority of the paper and discusses considerations including whether we’ll need new rules of evidence to address GenAI, what steps judges should take to address the authenticity of evidence, whether every case will now require a GenAI expert, whether juries will still be able to do their jobs, the impacts on the number of cases filed and on IP law, and even whether judicial officers should be allowed to use ChatGPT or other generative AI to help with research and/or draft opinions (at least three judges already have!).


The 26-page paper is a terrific read and provides several different considerations for the present and future of justice in a generative AI world. None of the scenarios seem far-fetched – on the contrary, many of them are happening today. Check out the paper here.

So, what do you think? What will be the biggest change to justice in a generative AI world? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
