eDiscovery Today by Doug Austin

Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence: Artificial Intelligence Trends


What judicial approaches to acknowledged and unacknowledged AI-generated evidence are needed today? This article provides practical advice!

The article titled Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence (available here) is the latest article authored by Maura R. Grossman and Hon. Paul W. Grimm (ret.). In the Introduction, the authors quickly illustrate how much has changed in a decade, as follows:

“In 2013, the first author of this paper (Grossman) was a speaker at a bench and bar conference sponsored by the Tenth Circuit Court of Appeals. One of the justices of the U.S. Supreme Court attended the event as the justice assigned to that Circuit. During a cocktail reception, the author, using her cell phone—discreetly, so she thought—snapped a few photographs of the justice before being approached by the U.S. Marshals Service. Apparently, the justice in question did not appreciate being photographed holding an alcoholic beverage and the marshals requested that the author delete the photos she had taken in exchange for an opportunity to have her photograph taken with the justice, sans beverage. Obviously, the author dutifully complied with the marshals’ request.”


A decade later, technology was “readily available to anyone with a computer and Internet access, that could not only create a highly realistic photo of the justice holding an alcoholic beverage, but also a video of that same justice appearing to be stumbling drunk at the same 2013 reception, and there was no U.S. marshal that could do anything to prevent that fake video from being disseminated.”

That’s how much things have changed in a relatively short period of time.

Deepfakes are already appearing in real-world scenarios with significant consequences – such as voice cloning for fraud (here’s one recent example) or fake images in cyberbullying. That’s even more compelling when you consider that studies show humans are generally poor at detecting deepfakes and tend to believe what they see and hear in audiovisual format. Video evidence in particular is highly impactful: it is more cognitively and emotionally arousing and can powerfully affect memory and perception of reality. The “continued influence effect” means that even when misinformation (like a deepfake) is corrected, it can still influence reasoning – which makes judicial instructions to disregard evidence less effective.

And we can’t count on deepfake detection methods to fix the problem. Human and algorithmic methods for detecting deepfakes have limitations. Human detection rates are low, and the “tells” for deepfakes can vary culturally. Algorithmic detection tools are often limited to detecting content created by specific AI models and can be defeated by manipulation.


This article by Grossman and Grimm examines the increasing prevalence of generative AI and deepfakes in society and their inevitable impact on legal proceedings, highlighting the ease with which realistic fake content can be created, democratizing fraud and disinformation. One key distinction in the article is drawn between “acknowledged AI-generated evidence” and “unacknowledged AI-generated evidence” (i.e., deepfakes).

Here, Grossman and Grimm argue that existing rules of evidence are problematic for unacknowledged deepfakes, particularly due to the difficulty in human detection and the significant psychological impact of audiovisual evidence on factfinders. The requirement to expose the jury to potentially fake and highly prejudicial evidence to determine authenticity creates a “catch 22.” As Grossman and Grimm state: “If the jury agrees it more likely is authentic, they can consider it. If they agree it is more likely not authentic, they are told by the judge to disregard it. But the ‘catch 22’ with respect to our deepfake hypothetical is that the jury must listen to the challenged evidence in order to make its determination of authenticity.”

So, Grossman and Grimm propose potential modifications to existing rules or the creation of new, bespoke rules to address these challenges, emphasizing the need for judicial gatekeeping and early case management to ensure fair trials in the age of synthetic media. They are:

Proposed New Rule 901(c) for Potentially Fabricated or Altered Electronic Evidence

If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, using artificial intelligence, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

Advantages highlighted by the authors:

Proposed Amendments to Rule 901(b)(9) for Acknowledged AI-Generated Evidence

The proposed new language is shown in bold font, the existing language is in regular font, and deleted language is shown with strikethrough:

[901] (b) Examples. The following are examples only—not a complete list—of evidence that satisfies the requirement [of Rule 901(a)]:

(9) Evidence about a Process or System. For an item generated by a process or system:

(A) evidence describing it and showing that it produces ~~an accurate~~ **a valid and reliable** result; and

**(B) if the proponent acknowledges that the item was generated using artificial intelligence, additional evidence that:**

**(i) describes the training data and software or program that was used; and**

**(ii) shows that they produced valid and reliable results in this instance.**

Advantages highlighted by the authors:

Grossman and Grimm note that while the Advisory Committee on Evidence Rules “did not decide that a new rule would be adopted or what the proposed new ‘deepfake rule’ would say, only that it would be helpful to move forward with drafting such a rule in the event that the Committee decides that one should be adopted… Nevertheless, it seems that the Committee may have crossed the Rubicon with respect to its position on whether it is advisable to have a bespoke rule addressing potential deepfake evidence. That is a very significant and important step forward.”

Grossman and Grimm close the article “with some suggestions about what lawyers and courts might do to deal with acknowledged and unacknowledged AI-generated evidence now, since any rules change is likely years away, and there is no developed case law at present to lend a hand.” They include:

Grossman and Grimm conclude that “if applied flexibly, the current rules of evidence can be used today to deal with both acknowledged and unacknowledged AI-generated evidence” – which is good, because the earliest we could see rules changes is 2027, but more realistically, 2029. So, “regardless of what the Committee does, judges and lawyers will have to come to terms with these challenges now.”

The 46-page article is available here, with a link to download the full article in PDF form.

So, what do you think? Are you concerned about the potential of deepfakes in court? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using GPT-4’s Image Creator Powered by DALL-E, using the term “robot sitting at a desk in front of a computer showing a picture of another robot”.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
