NIST Publishes New Study, Establishes Model for Trust and Artificial Intelligence

When it comes to the slow adoption of Artificial Intelligence (AI) in the legal industry, most people will claim “Luddite Lawyers” are to blame. That certainly may be a factor, but I think higher on the list (understandably so) is trust. I mean, there is that whole “beyond a reasonable doubt” issue. So, it’s a good thing that the National Institute of Standards and Technology (NIST) has published a new model for determining the trustworthiness of AI.

As datasets continue to grow, human review of so many documents becomes physically impossible. This is where Artificial Intelligence (AI) comes into play. It’s not there to make decisions for attorneys, but to act as a guide and a tool that enhances their knowledge. An interesting parallel for me as a writer is using the very basic, non-AI tools of spellcheck and grammar check. I’m grateful that my professional education happened before they existed, so I had no choice but to learn to write without them, and I don’t strictly need them now. But they sure are nice to have!

Still, writing an article isn’t as high risk as a multi-million dollar civil suit or a criminal case involving kidnapping and murder. And risk is a key factor that NIST included in the new model.

The study first discusses how predictability is a key element in people learning to trust automated systems. ATMs are an example: the more people saw these machines behaving predictably time and time again, the more they trusted them. However, with AI, “Asking the AI to perform the same task on two different occasions may result in two different answers as the AI has ‘learned’ in the time between the two requests. AI has the ability to alter its own programming in ways that even those who build AI systems can’t always predict. Given this significant degree of unpredictability, the AI user must ultimately decide whether or not to trust the AI.”

Which is why NIST is working on a model to determine trust. The paper goes on to lay out nine factors for determining the trustworthiness of AI:

  • Accuracy
  • Reliability
  • Resiliency
  • Objectivity
  • Security
  • Explainability
  • Safety
  • Accountability
  • Privacy

The paper then includes formulas (available in the study) for applying these factors to Artificial Intelligence, and it goes one step further by also factoring in risk.
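For readers who want to see the general shape of such a calculation, here is a minimal sketch of a weighted trust score in Python. To be clear, these are not the formulas from the NIST study: the characteristic names come from the paper’s list above, but the weighting scheme and all numbers are invented for illustration.

```python
# A minimal sketch, NOT NIST's actual formulas: combine per-characteristic
# scores (each 0.0-1.0) into a single weighted trust score.

CHARACTERISTICS = [
    "accuracy", "reliability", "resiliency", "objectivity", "security",
    "explainability", "safety", "accountability", "privacy",
]

def weighted_trust_score(measured, weights):
    """Weighted average of measured characteristic values, where the
    weights encode how much each characteristic matters in a context."""
    total = sum(weights[c] for c in CHARACTERISTICS)
    return sum(measured[c] * weights[c] for c in CHARACTERISTICS) / total

# Invented measurements for a hypothetical AI system.
measured = {
    "accuracy": 0.90, "reliability": 0.95, "resiliency": 0.80,
    "objectivity": 0.85, "security": 0.90, "explainability": 0.60,
    "safety": 0.95, "accountability": 0.70, "privacy": 0.90,
}

# A music-recommendation context might weight everything roughly equally,
# while a medical-diagnosis context weights accuracy and safety heavily.
uniform = {c: 1.0 for c in CHARACTERISTICS}
medical = dict(uniform, accuracy=5.0, safety=5.0)

print(f"{weighted_trust_score(measured, uniform):.2f}")  # 0.84
print(f"{weighted_trust_score(measured, medical):.2f}")  # 0.88
```

The point is not the specific numbers but that the same measurements can produce different trust scores once context changes the weights.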

NIST provides two scenarios, one low-risk and one high-risk, as examples. In the low-risk scenario, AI is helping a college student find music he might like. In the high-risk scenario, AI is helping a doctor in a highly specialized field diagnose her patient.

As the paper states, “Each trustworthiness characteristic has a sufficiency value indicating the extent to which its measured value is good enough based on context and risk.” So while the AI in each of these scenarios may have a 90% accuracy rate, the perceived trustworthiness is affected by those sufficiency values. With the music AI, 90% is considered trustworthy, and with the medical AI, it is not.
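To make the sufficiency idea concrete, here is a hedged sketch of a context-dependent sufficiency check. The threshold numbers are invented for illustration; the paper derives sufficiency from context and risk rather than from a fixed lookup table like this one.

```python
def is_sufficient(measured_accuracy, risk_level):
    # Thresholds are invented for illustration: higher-risk contexts
    # demand a higher measured value before it counts as "good enough".
    thresholds = {"low": 0.85, "high": 0.999}
    return measured_accuracy >= thresholds[risk_level]

# The same 90% measured accuracy clears the bar in one context but not the other.
print(is_sufficient(0.90, "low"))   # True  -> music AI deemed trustworthy
print(is_sufficient(0.90, "high"))  # False -> medical AI is not
```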

Trust in computing systems is nothing new. The authors of the study cite an email that Bill Gates sent to all Microsoft employees in 2002 as an early articulation of trust as it applies to computing. In this email he states, “What I mean by [trustworthy computing] is that customers will always be able to rely on these systems to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure…”

In the end, the perceived trustworthiness of Artificial Intelligence is what will allow successful collaboration between humans and AI systems. Vanity metrics and vendor claims are not enough. Users need defined standards to help them determine whether a system is acceptable for the risk and context of their situation, and this model proposed by NIST is a step in the right direction.

So, what do you think? How will the NIST model for AI trustworthiness affect the legal technology industry? Will defined standards like these lead to more widespread adoption? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
