Told you I would have more on artificial intelligence (AI) today! Northwestern Pritzker School of Law's Journal of Technology and Intellectual Property last week published Volume 19, Issue 1, which contains the article Artificial Intelligence as Evidence, authored by three well-known names in law and legal technology: Maura R. Grossman, J.D., Ph.D.; Gordon V. Cormack, Ph.D.; and Maryland District Judge Paul W. Grimm.
As noted in the Abstract, “This article explores issues that govern the admissibility of Artificial Intelligence (“AI”) applications in civil and criminal cases, from the perspective of a federal trial judge and two computer scientists, one of whom also is an experienced attorney.” The article is comprehensive, informative, and well-sourced, with 386(!) footnotes. It covers what AI is, why it has come to the forefront, the current technology landscape, and uses of AI in business and law today. It then takes a very detailed look at the issues those uses raise, at establishing the validity and reliability of AI, and at the evidentiary principles that should be considered in evaluating the admissibility of AI evidence in civil and criminal trials, referencing several of the Federal Rules of Evidence (FRE) while noting that “there are, at present, no rules in the Federal Rules of Evidence that directly address AI evidence”.
Artificial Intelligence as Evidence gives the good, the bad, and the ugly of AI and doesn’t hold anything back. On the one hand, the authors note that “Arguably, once an application of technology becomes well established, it becomes engineering, rather than AI. For example, spam filters and computerized systems that can compare two documents and identify their differences were both once considered AI, but today are simply referred to as ‘software.’” Ironically, they also recount an experience of Cormack’s: while traveling from New York to Brisbane, Australia (with a stop in Los Angeles), he used his credit card in each city, the card was flagged, and the credit card company’s notification email was itself caught by his spam filter and never delivered. Even accepted AI isn’t infallible.
The authors also discuss a variety of other considerations, such as whether AI could qualify for a patent in the US and the GPT-3 language-generating AI model, which a college student used to generate a fake blog post on productivity and self-help. They even reproduce in a footnote a GPT-3-generated New York Times Modern Love column about how a couple met, and it was pretty good (thankfully, GPT-3 is a bit more obvious when writing about “Legaltech”, so my job is hopefully safe for now!).
On the other hand, they provide a very comprehensive look at the factors that can affect the validity and reliability of AI evidence, including bias of various types, “function creep,” lack of transparency and explainability, and the sufficiency of objective testing of AI applications. The article includes an extensive set of examples of AI errors and shortcomings – including facial recognition errors with certain demographics. It also discusses the challenges of transcription software (e.g., 83% word accuracy may sound pretty accurate, but not when you look at the resulting transcript), among other challenges.
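To see why 83% word-level accuracy can still produce an unusable transcript, here’s a minimal illustrative sketch (the sample sentence, the “???” error marker, and the random-corruption scheme are my own invention for illustration, not anything from the article):

```python
import random

def corrupt_transcript(words, accuracy=0.83, seed=42):
    """Replace each word with an error marker with probability (1 - accuracy),
    simulating word-level transcription accuracy."""
    rng = random.Random(seed)
    return [w if rng.random() < accuracy else "???" for w in words]

sentence = ("the witness stated that she saw the defendant leave "
            "the building at approximately nine fifteen").split()
garbled = corrupt_transcript(sentence)
print(" ".join(garbled))

# With ~17% of words wrong, almost every sentence of any length
# contains at least one error:
error_free = 0.83 ** len(sentence)  # chance this 15-word sentence survives intact
print(f"P(15-word sentence fully correct) = {error_free:.2f}")
```

At 83% accuracy, a 15-word sentence has only about a 6% chance of coming through with no errors at all, which is why the resulting transcript reads far worse than the headline number suggests.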
It also spends quite a bit of time on the risk-assessment software COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which was used to inform sentencing decisions and which Eric Loomis unsuccessfully challenged in the Wisconsin Supreme Court in 2016, even though the proprietary nature of COMPAS was invoked to prevent disclosure of how its factors are weighed or how its risk scores are determined. A ProPublica investigation also found that COMPAS was roughly twice as likely to misclassify black defendants as high risk, while white defendants were more likely to be misclassified as low risk. All this despite the fact that COMPAS was originally designed for assessing the treatment needs of offenders; “function creep” eventually turned it into a sentencing tool, a purpose for which it was never designed.
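The “twice as likely to misclassify” point is easier to grasp with numbers. Here is a hypothetical sketch of how an error-rate disparity is computed; the figures below are invented purely for illustration and are not ProPublica’s actual COMPAS data:

```python
# Hypothetical numbers, invented to illustrate how a risk tool can look
# "accurate" overall while its errors fall unevenly across groups.
# These are NOT ProPublica's actual COMPAS figures.

def false_positive_rate(wrongly_flagged, did_not_reoffend):
    """FPR = people wrongly flagged as high risk / all who did not reoffend."""
    return wrongly_flagged / did_not_reoffend

# group: (non-reoffenders wrongly flagged high risk, total non-reoffenders)
groups = {
    "group A": (45, 100),
    "group B": (23, 100),
}

for name, (fp, negatives) in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(fp, negatives):.2f}")
```

The key design point is that bias of this kind only shows up when error rates are broken out by group and by error type (false positives vs. false negatives); an aggregate accuracy number can hide it entirely.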
The final section of Artificial Intelligence as Evidence discusses evidentiary principles, including FRE Rules 102, 401/402/403 (which should be considered together), 901(a), 901(b) (especially 901(b)(1) and 901(b)(9) as particularly relevant to AI evidence), 602, 702, and 703. That includes a special discussion of the five Daubert factors added in an Advisory Committee Note to the amendment of Rule 702 that went into effect in 2000. This section even includes six practice pointers for lawyers and judges – five questions to answer and one factor to consider (timing issues) when evaluating AI as evidence.
As I mentioned above, the article is a comprehensive and informative look at AI and at artificial intelligence as evidence, designed to provide “at least a rudimentary understanding of what AI is, how it operates, scientific and statistical evaluation, and the issues that need to be addressed in order to make decisions about its validity and reliability, and hence its admissibility.” Artificial Intelligence as Evidence is a must-read for anyone in legal who wants to better understand AI and how it can apply as evidence, in the absence of (hopefully eventual) Federal rules that directly address AI evidence. My copy is covered with highlights now, so get your own copy! 😉
BTW, Maura, Gordon, and Judge Grimm spoke on this topic at EDRM’s E-Discovery Day last week, and the on-demand version of the webinar should be available soon. I’ll update in the comments with a link when it is!
So, what do you think? Are you interested in reading Artificial Intelligence as Evidence? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.