Rules Change to Address AI Evidence Proposed by Grossman and Grimm: Artificial Intelligence Trends

The emergence of deepfakes has raised concerns about authentication of evidence. Now, there’s a proposed rules change to address that concern.

Dr. Maura R. Grossman and Hon. Paul W. Grimm (ret.) have submitted to the Advisory Committee on Evidence Rules a Proposed Modification of Current Rule 901(b)(9) to address authentication issues regarding artificial intelligence evidence, in advance of their presentation during the Fall meeting, scheduled for October 27th. Grossman and Grimm propose an amendment to Rule 901(b)(9) that would provide as follows:

(9) Evidence About a Process or System. For an item generated by a process or system:

(A) evidence describing it and showing that it produces a reliable result; and

(B) if the proponent concedes that — or the opponent provides a factual basis for suspecting that — the item was generated by artificial intelligence, additional evidence that:

(i) describes the software or program that was used; and

(ii) shows that it produced reliable results in this instance.

The proposed rule would amend current Rule 901(b)(9) to help attorneys and courts deal with the many evidentiary challenges presented by the authentication of evidence generated by artificial intelligence (“AI”) software applications, including, but not limited to, generative AI applications such as ChatGPT and DALL-E 2. A key feature of their proposal is that it replaces the word “accurate” with “reliable” throughout Rule 901(b)(9), making reliability the standard for all evidence generated by a system or process, whether or not artificial intelligence is involved.

Why should reliability be the standard and not accuracy? As Grossman and Grimm explain, “[a] system or process may produce a valid result when applied in certain circumstances, but not in others. For example, AI facial-recognition software programs that have been trained primarily on images of light-skinned males will typically produce accurate results when applied to photos of light-skinned men. But the same software may not produce accurate results when applied to a photo that is not of a light-skinned male. For that reason, the proposed rule substitutes the term ‘reliability’ for ‘accuracy,’ and also requires that the proponent of the AI evidence demonstrate that the software or program produces reliable results in general, as well as with respect to the particular evidence being offered.”

Grossman and Grimm go on to discuss the challenges posed by deepfakes and the expectation that the trial judge, acting pursuant to Rule 104(a), must make a preliminary determination as to whether the proponent has met its burden of authenticating the evidence. They also recommend that Rule 902(13) be amended to replace “accurate” with “reliable”, for the same reasons stated above.

Their discussion of the proposed modification is contained within the packet of materials published in advance of the meeting. As the entire document is a whopping 394 pages, here is some guidance on where to go to review their recommendation and other important resources:

  • Pages 97-99 contain the three-page submission from Grossman and Grimm regarding the proposed rules change to address authentication issues for AI evidence.
  • Pages 84-95 contain a memo from Daniel J. Capra, Philip Reed Professor of Law, on “Deepfakes” and Possible Amendments to Article 9 of the FRE, which includes two suggestions made in law review articles for changes to the authenticity rules to deal with the rise of deepfakes, a summary of Grossman and Grimm’s proposed change, and a fourth view that no change to the rules is necessary.
  • Pages 101-200 provide the article Artificial Intelligence as Evidence, written by Grossman, Grimm, and Gordon V. Cormack and published back in 2021 (which I covered here).
  • Pages 202-227 provide the article The GPTJudge: Justice in a Generative AI World, written by Grossman, Grimm, Daniel G. Brown, and Molly (Yiming) Xu and published earlier this year (which I covered here).

A couple of weeks ago while in New Orleans, Maura hinted at a possible proposed rules change to address concerns about deepfakes – now we have more details on what that proposal is! It will be interesting to see what happens with it and the other proposals.

So, what do you think? Are you in favor of the proposed rules change to address authentication issues regarding AI evidence? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Image created using Microsoft Bing’s Image Creator Powered by DALL-E, using the term “authentication of AI evidence”.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.