Calls for AI transparency seem to be constant, but they've grown even louder recently. It seems everyone wants it – except one group.
What group is that? Read on.
Weekends (and Monday mornings) are a great time to catch up on reading, and I ran across a couple of articles this morning that tie into the growing calls for AI transparency. The first one was from Rob Robinson on his excellent ComplexDiscovery site (Holding Providers Accountable? Considering AI in eDiscovery Service and Software Provider Marketing and Messaging).
Here, Rob links to and summarizes two key articles from the Federal Trade Commission (FTC) regarding AI and explores how its guidance and questions can be applied to the marketing and communication strategies of eDiscovery software and service providers. The first article discusses the potential for bias and discrimination in AI and how the FTC provides guidance on using AI ethically. As part of that discussion, the FTC offers seven statements of exhortation to help businesses harness AI's benefits without introducing unfair outcomes. One of those (highlighted by Rob) is:
“Embrace transparency and independence: Promote transparency and allow independent researchers to examine data, algorithms, and results.”
The second FTC article discusses AI hype and the potential for marketers to overuse and abuse the term “artificial intelligence,” and Rob notes that the FTC may question several aspects of AI advertising, including (again highlighted by Rob):
“Actual use of AI: Marketers should not make baseless claims about their product being AI-enabled, and should note that using AI in the development process is not the same as having AI in the product itself.”
In both instances, Rob does a great job of outlining how the FTC's guidance can benefit eDiscovery service and software providers in several ways and of identifying what companies (including eDiscovery providers) should be doing to achieve greater AI transparency. It's a great read, and you can check it out here.
The second article I read was from Gregory Bufithis, founder of both The Project Counsel Group and Luminative Media, who is also a prolific blogger on a variety of topics, including cyber, data privacy, AI and more (and another must-read). In his “Thoughts Over My Afternoon Coffee” column, his latest article (AI is not the only system that hallucinates) discusses a Wired magazine article about the “now infamous” letter calling for a halt to ChatGPT development, quoting the Wired author as stating:
“there is no magic button that anyone can press that would halt ‘dangerous’ AI research while allowing only the ‘safe’ kind”.
As Greg says, “no kidding”. He also references this quote from the article, noting that Wired has always been one of the cheerleaders fired up about “transparency and accountability”:
“Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to work with to address the present risks of AI.”
As these two articles illustrate, there are plenty of groups advocating for AI transparency, including publications, government agencies and more. Heck, I’m for it! It seems that everyone wants it. Except one group.
The creators of the AI algorithms themselves.
Last year on the EDRM blog, I wrote about the battles in the courts over the transparency of facial recognition and sentencing guidelines algorithms, where courts have rejected parties’ attempts to obtain discovery on these technologies from law enforcement and the algorithm creators to determine the level of bias (despite considerable evidence that bias exists).
As Greg reminds us in his terrific article: “big technology savvy companies will do whatever they can do to corner a market and generate as much money as possible. Lock in, monopolistic behavior, collusion, and other useful tools are available.” He also links to an article from The Verge that discusses how “AI is entering an era of corporate control”.
Once again, no kidding.
How will we get to a point where we can make greater progress on AI transparency? My guess is that it will take some combination of lawmaking (with data privacy laws being a significant driver) and/or court wins.
Expect a fierce battle.
So, what do you think it will take to achieve greater AI transparency? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.