During last week’s CCBJ webinar, we were asked about recommendations on an audit process for AI applications. Here’s what I found.
The webinar, hosted by Corporate Counsel Business Journal (CCBJ), was about Adopting Generative AI Technologies in Corporate Legal Departments, and I was joined by TracyAnn Eggen and Joy Holley, who both had great insights on the benefits, risks and challenges, and use cases of generative AI for corporate legal departments, as well as recommendations for getting started.
We also had great questions from the audience, including one from Phil Weldon, Director of eDiscovery and Litigation Support Technology at Kaplan Hecker & Fink LLP, who asked this question: Do you have any recommendations on an audit process for AI or LLM applications?
It was a great question, and I didn’t have the best answer. I remembered that I had seen a couple of resources from Dr. Maura R. Grossman (including at least one we covered together in a webinar earlier this year), but I couldn’t think of them off the top of my head.
After the webinar, I looked up the resources that I had remembered, and I found two terrific resources for AI validation that Maura had co-authored:
The first was the article Artificial Intelligence as Evidence, published by Northwestern Pritzker School of Law’s Journal of Technology and Intellectual Property in December 2021 and authored by Maura, Gordon V. Cormack, Ph.D., and (now retired) Maryland District Judge Paul W. Grimm. In it, the authors discussed establishing the validity and reliability of AI, as well as evidentiary principles that should be considered in evaluating the admissibility of AI evidence in civil and criminal trials.
The second was the article Artificial Justice: The Quandary of AI in the Courtroom, published by Judicature and authored by Maura, Judge Grimm, Mireille Hildebrandt and Sabine Gless. In it, Maura discussed the factors set forth in the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) (Daubert) and posed these questions:
- Was the AI tested?
- Who tested it?
- How was it tested?
- How arm’s length was that testing?
- Is there a known error rate associated with the AI, and is that an acceptable error rate depending on the risk of the adverse consequences of a ruling based on invalid or unreliable information?
- Was the methodology generally accepted as reliable in the relevant scientific and technical community?
- Has the methodology been subject to peer review by people other than the AI developer?
- Have standard procedures been used to develop the AI where applicable?
Both are great resources. I figured Maura might have even more, so I asked her if there were any other resources she would recommend on the subject of audit processes for AI applications and she gave me this one as an “excellent resource”:
The Algorithmic Impact Assessment (AIA) tool from the Canadian government, which is a “mandatory risk assessment tool intended to support the Treasury Board’s Directive on Automated Decision-Making. The tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 51 risk and 34 mitigation questions. Assessment scores are based on many factors, including the system’s design, algorithm, decision type, impact and data.”
The AIA is available as an online questionnaire on the Open Government Portal. When the questionnaire is completed, the results provide an impact level and a link to the requirements under the directive. The detailed results page will also explain why the system was rated a certain level. The results and the explanation can be printed or saved as a PDF.
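To make the questionnaire-scoring idea concrete, here is a minimal Python sketch of how a tool like this could map answers to an impact level. To be clear, the weights, thresholds, mitigation rule, and level cutoffs below are illustrative assumptions, not the official AIA formula – the actual scoring is defined by the Canadian government’s tool itself.

```python
# Hypothetical sketch of questionnaire-based impact scoring, loosely
# inspired by tools like the AIA. All numbers here are illustrative
# assumptions, NOT the official AIA methodology.

def impact_level(risk_answers, mitigation_answers,
                 max_risk=100, max_mitigation=50):
    """Map risk and mitigation question scores to an impact level (I-IV)."""
    raw_risk = sum(risk_answers)          # points from risk questions
    mitigation = sum(mitigation_answers)  # points from mitigation questions

    # Illustrative rule: strong mitigation (at least 80% of attainable
    # mitigation points) earns a modest reduction in the raw risk score.
    if mitigation >= 0.8 * max_mitigation:
        raw_risk *= 0.85

    pct = raw_risk / max_risk
    if pct <= 0.25:
        return "Level I"    # little to no impact
    elif pct <= 0.50:
        return "Level II"   # moderate impact
    elif pct <= 0.75:
        return "Level III"  # high impact
    return "Level IV"       # very high impact

# Example: moderate risk answers with strong mitigation in place
print(impact_level([3, 5, 4, 6, 2], [10, 15, 16]))  # → Level I
```

The point of the sketch is simply that the assessment is mechanical and repeatable: the same answers always yield the same impact level, and the results page can then tie that level to the corresponding requirements under the directive.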
The site also discusses parameters for using and scoring the assessment, as well as instructions for completing it – it looks great!
BTW, Phil is forming a working group with ILTA on AI Audits. Feel free to reach out to him at firstname.lastname@example.org if you’re interested in participating.
So, what do you think? Do you know of any other audit process for AI applications? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using Microsoft Bing’s Image Creator Powered by DALL-E, using the term “audit of artificial intelligence”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.