Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence: Artificial Intelligence Trends

What judicial approaches to acknowledged and unacknowledged AI-generated evidence are needed today? This article provides practical advice!

The article Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence (available here) is the latest from Maura R. Grossman and Hon. Paul W. Grimm (ret.). Its Introduction quickly illustrates how much has changed in a decade:

“In 2013, the first author of this paper (Grossman) was a speaker at a bench and bar conference sponsored by the Tenth Circuit Court of Appeals. One of the justices of the U.S. Supreme Court attended the event as the justice assigned to that Circuit. During a cocktail reception, the author, using her cell phone—discreetly, so she thought—snapped a few photographs of the justice before being approached by the U.S. Marshals Service. Apparently, the justice in question did not appreciate being photographed holding an alcoholic beverage and the marshals requested that the author delete the photos she had taken in exchange for an opportunity to have her photograph taken with the justice, sans beverage. Obviously, the author dutifully complied with the marshals’ request.”

A decade later, technology was “readily available to anyone with a computer and Internet access, that could not only create a highly realistic photo of the justice holding an alcoholic beverage, but also a video of that same justice appearing to be stumbling drunk at the same 2013 reception, and there was no U.S. marshal that could do anything to prevent that fake video from being disseminated.”

That’s how much things have changed in a relatively short period of time.

Deepfakes are already appearing in real-world scenarios with significant consequences, such as voice cloning for fraud (here’s one recent example) and fake images in cyberbullying. That’s even more concerning when you consider that studies show humans are generally poor at detecting deepfakes and tend to believe what they see and hear in audiovisual format. Video evidence in particular is highly impactful: it is more cognitively and emotionally arousing and can powerfully affect memory and perception of reality. And because of the “continued influence effect,” misinformation (like a deepfake) can continue to influence reasoning even after it has been corrected, which makes judicial instructions to disregard evidence less effective.

And we can’t count on deepfake detection methods to fix the problem: both human and algorithmic approaches have significant limitations. Human detection rates are low, and the “tells” for deepfakes can vary culturally. Algorithmic detection tools are often limited to detecting content created by specific AI models and can be defeated by manipulation.
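To make that generalization problem concrete, here is a minimal sketch (my illustration, not from the article) of why a detector trained on one generator’s artifacts can miss fakes from another. Both “generators” and all feature vectors are synthetic stand-ins, purely for illustration:

```python
# Minimal illustration (synthetic data): a "deepfake detector" trained on
# artifacts from one generator fails on fakes from a different generator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a vector of low-level artifacts (compression traces,
# frequency-domain statistics, etc.) extracted from an image or video frame.
real = rng.normal(0.0, 1.0, size=(500, 8))
fakes_from_model_a = rng.normal(1.5, 1.0, size=(500, 8))   # seen in training
fakes_from_model_b = rng.normal(-1.5, 1.0, size=(500, 8))  # unseen generator

X = np.vstack([real, fakes_from_model_a])
y = np.array([0] * 500 + [1] * 500)  # 0 = real, 1 = fake
detector = LogisticRegression().fit(X, y)

# In-distribution fakes are flagged reliably...
print("model A fakes flagged:", detector.predict(fakes_from_model_a).mean())
# ...but fakes from the unseen generator mostly slip through.
print("model B fakes flagged:", detector.predict(fakes_from_model_b).mean())
```

The toy detector scores nearly perfectly on the generator it was trained against and near zero on the other, which is the same brittleness the authors describe in real detection tools.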

This article by Grossman and Grimm examines the increasing prevalence of generative AI and deepfakes in society and their inevitable impact on legal proceedings, highlighting the ease with which realistic fake content can be created, democratizing fraud and disinformation. One key distinction drawn in the article is between “acknowledged AI-generated evidence” and “unacknowledged AI-generated evidence” (i.e., deepfakes).

Here, Grossman and Grimm argue that existing rules of evidence are problematic for unacknowledged deepfakes, particularly due to the difficulty in human detection and the significant psychological impact of audiovisual evidence on factfinders. The requirement to expose the jury to potentially fake and highly prejudicial evidence to determine authenticity creates a “catch 22.” As Grossman and Grimm state: “If the jury agrees it more likely is authentic, they can consider it. If they agree it is more likely not authentic, they are told by the judge to disregard it. But the ‘catch 22’ with respect to our deepfake hypothetical is that the jury must listen to the challenged evidence in order to make its determination of authenticity.”

So, Grossman and Grimm propose potential modifications to existing rules, or the creation of new, bespoke rules, to address these challenges, emphasizing the need for judicial gatekeeping and early case management to ensure fair trials in the age of synthetic media. Their proposals are:

Proposed New Rule 901(c) for Potentially Fabricated or Altered Electronic Evidence

If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, using artificial intelligence, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

Advantages highlighted by the authors:

  • It is limited in scope to evidence challenged specifically as having been fabricated or altered by AI.
  • It adopts a different balancing test than Rule 403, requiring the proponent to show that the probative value of the evidence outweighs its prejudicial impact (as opposed to Rule 403, which requires the prejudice to substantially outweigh probative value for exclusion). This standard is borrowed from Rule 609(a)(1)(B) and is considered more appropriate given the challenges of AI-generated evidence (see the logic sketch after this list).
  • It places the burden on the objecting party to provide sufficient facts (not just arguments) from which a reasonable jury could find the evidence is AI-generated or fake by a preponderance of the evidence.
  • It maintains consistency with the Huddleston and Johnson cases discussed in the article by not requiring the judge to personally find the evidence is fake, only that a reasonable jury could make that determination.
  • It explicitly allows the judge to apply the balancing test before the jury is exposed to the potentially highly prejudicial evidence.
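To see how the proposed standard shifts the balance, here is a minimal logic sketch (my illustration, not the authors’) contrasting the two tests as boolean conditions. The numeric “probative” and “prejudice” scores and the substantiality factor are hypothetical stand-ins for what is really a qualitative judicial judgment:

```python
# Hypothetical illustration of the two balancing tests as boolean logic.
# Real balancing is qualitative; the numbers are stand-ins for illustration.

def excluded_under_rule_403(probative: float, prejudice: float,
                            substantially: float = 2.0) -> bool:
    # Rule 403: exclusion only if unfair prejudice SUBSTANTIALLY outweighs
    # probative value, so admission is the default in close cases.
    return prejudice > substantially * probative

def admissible_under_proposed_901c(probative: float, prejudice: float) -> bool:
    # Proposed 901(c): once a reasonable-jury showing of AI fabrication is
    # made, the PROPONENT must show probative value outweighs prejudicial
    # effect, so exclusion is the default in close cases.
    return probative > prejudice

# A "close call": prejudice slightly exceeds probative value.
probative, prejudice = 1.0, 1.2
print(excluded_under_rule_403(probative, prejudice))         # False -> admitted under 403
print(admissible_under_proposed_901c(probative, prejudice))  # False -> excluded under 901(c)
```

The point of the sketch is simply the direction of the burden: under Rule 403, close calls favor admission, while under the proposed rule they favor exclusion.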

Proposed Amendments to Rule 901(b)(9) for Acknowledged AI-Generated Evidence

In the proposed amendments below, the authors replace “an accurate” with “a valid and reliable” in subsection (A) and add a new subsection (B). (In the original article, new language appears in bold and deleted language in strikethrough; the text is reproduced here as amended.)

[901] (b) Examples. The following are examples only—not a complete list—of evidence that satisfies the requirement [of Rule 901(a)]:

(9) Evidence about a Process or System. For an item generated by a process or system:

(A) evidence describing it and showing that it produces a valid and reliable result; and

(B) if the proponent acknowledges that the item was generated using artificial intelligence, additional evidence that:

(i) describes the training data and software or program that was used; and

(ii) shows that they produced valid and reliable results in this instance.

Advantages highlighted by the authors:

  • It is a minor adjustment to an existing, familiar rule.
  • It replaces the term “accurate” with “valid and reliable,” which are more precise terms drawn from scientific practice and are consistent with the standards in Rule 702 for evaluating scientific evidence.
  • It does not mandate a single method of authentication but provides an example of how acknowledged AI evidence could be authenticated, consistent with the structure of Rule 901(b).
  • It offers a “recipe” for lawyers and judges, providing certainty about what is sufficient for authentication if this method is followed.

Grossman and Grimm note that while the Advisory Committee on Evidence Rules “did not decide that a new rule would be adopted or what the proposed new ‘deepfake rule’ would say, only that it would be helpful to move forward with drafting such a rule in the event that the Committee decides that one should be adopted… Nevertheless, it seems that the Committee may have crossed the Rubicon with respect to its position on whether it is advisable to have a bespoke rule addressing potential deepfake evidence. That is a very significant and important step forward.”

Grossman and Grimm close the article “with some suggestions about what lawyers and courts might do to deal with acknowledged and unacknowledged AI-generated evidence now, since any rules change is likely years away, and there is no developed case law at present to lend a hand.” They include:

  • Early Anticipation and Planning: Recognize that AI-generated evidence challenges need to be addressed now, as bespoke rules and developed case law are currently lacking.
  • Discovery about Acknowledged or Unacknowledged AI-Generated Evidence: Engage in thorough discovery regarding AI-generated evidence. For acknowledged AI evidence, this includes inquiring about the development, training, testing, underlying data, validation, error rates, and potential biases of the AI system (a toy illustration of error-rate and bias metrics appears after this list). For unacknowledged AI evidence (potential deepfakes), discovery might involve forensic examination of devices.
  • Use of Protective Orders to Address Issues Associated with Claims of Proprietary Information or Trade Secrets and Claims of Confidentiality or Privacy: Utilize protective orders to manage concerns about proprietary information, trade secrets, confidentiality, and privacy when seeking discovery about AI systems or conducting forensic examinations of potentially deepfake evidence. The party offering the evidence cannot refuse necessary discovery to evaluate it.
  • Expert Witnesses: Anticipate the necessity of expert witnesses. Evaluating AI evidence and detecting deepfakes are scientific and technical tasks often beyond the capability of lay individuals. Experts will be crucial for authenticating and challenging the evidence. Expert disclosures should be detailed and address factors like those in Rule 702 and the Daubert standard.
  • Motions Practice: Conduct pre-trial hearings to resolve evidentiary issues related to AI evidence and potential deepfakes after discovery is complete. Require detailed written motions outlining the basis for admitting or excluding the evidence, including information about validity, reliability, error rates, bias, provenance, manipulation, probative value, and potential prejudice.
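As a concrete (and entirely hypothetical) illustration of the kind of validation information a party might seek in discovery about an acknowledged AI system, the sketch below computes per-group error rates from labeled predictions. All data is synthetic, and real validation and bias analysis is considerably more involved:

```python
# Hypothetical sketch: per-group error rates of the sort a party might
# request in discovery about an AI system. All data below is synthetic.
from collections import defaultdict

# (group, true_label, predicted_label) triples, e.g., from the system's test logs.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
for group, truth, pred in records:
    c = counts[group]
    if truth == 1:
        c["pos"] += 1
        c["fn"] += int(pred == 0)  # false negative: real positive missed
    else:
        c["neg"] += 1
        c["fp"] += int(pred == 1)  # false positive: negative wrongly flagged

for group, c in counts.items():
    fpr = c["fp"] / c["neg"]  # false positive rate
    fnr = c["fn"] / c["pos"]  # false negative rate
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
# Materially different error rates across groups would be one signal of
# potential bias worth probing further in discovery.
```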

Grossman and Grimm conclude that “if applied flexibly, the current rules of evidence can be used today to deal with both acknowledged and unacknowledged AI-generated evidence” – which is good, because the earliest we could see rules changes is 2027, but more realistically, 2029. So, “regardless of what the Committee does, judges and lawyers will have to come to terms with these challenges now.”

The 46-page article is available here, with a link to download the full article in PDF form.

So, what do you think? Are you concerned about the potential of deepfakes in court? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using GPT-4’s Image Creator Powered by DALL-E, using the term “robot sitting at a desk in front of a computer showing a picture of another robot”.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.


Comments

  1. The issues created by the many layers of deepfakes and altered evidence are a present and undoubtedly growing problem for the truthful resolution of cases. This is an incredibly important article. Maura Grossman and Judge Paul Grimm have been at the forefront of studying and addressing this area, and rule makers at the state and federal level would benefit from accepting their diligent work and recommendations. We should all appreciate their tireless efforts.

    • Does the authors’ proposed 901(c) rule change create a gap and an exception that swallows the rule under 901(a)? Under Rule 901(a), evidence has to “be what it’s purported to be” or it’s not authentic and is inadmissible. In the “fake” email and “deepfake” audio or video recording examples, the authors’ proposed 901(c) rule change would open the door to the admission of “close calls on fakeness” based on the content of the email or video, as opposed to whether it’s fake. Under this proposed rule, it sounds like a judge would be instructed to ignore the unresolved question of “fakeness” and allow the evidence to go to the jury based instead on the (I would argue less important) balancing test, so that really well-crafted fake emails and videos come into evidence because their content (even though potentially fake) is more probative than prejudicial. Doesn’t Capra’s reported proposed new FRE 707 avoid this problem and ensure that the court remains focused on whether it’s fake or not: “[blah, blah, blah]…the evidence is admissible only if the proponent demonstrates to the court that it is more likely than not authentic.”

