Sorry for the Debbie Downer on a Monday, but this article does a good job of discussing twelve AI risks that are causing trust issues.
The article from Harvard Business Review (AI’s Trust Problem, written by Bhaskar Chakravorti and available here) lists the risks and then goes into depth on each one. Here are the twelve AI risks that are causing trust issues, with a comment by me on each:
- Disinformation: Right off the bat, they start the article talking about deepfakes (here’s one recent example) and online disinformation, which isn’t new, but as the author notes: “AI tools have supercharged it”. Oh, and “Social media companies are largely failing to address the threat”. Shocker!
- Safety and security: I expected this to be about malicious use cases for AI tools in cyberattacks, being “jailbroken” to follow illegal commands, etc. However, the author starts out this section by telling us that between 37.8% and 51.4% of all respondents “[i]n the largest ever survey of AI and machine learning experts” “placed at least a 10% probability on scenarios as dire as human extinction”. And these are the experts talking!
- The black box problem: You’re fully familiar with this one – lack of transparency is one of the reasons that technology assisted review (TAR) in eDiscovery hasn’t caught on as fully as many people predicted. Will generative AI suffer the same adoption roadblocks for eDiscovery? Some say yes.
- Ethical concerns: The challenge of making sure that AI is being used ethically is that not everyone defines “ethically” the same way. Tough to get AI developers to keep ethics in a prominent position when there’s so much money to be made.
- Bias: We’ve seen plenty of examples of AI models with potential bias concerns – look up COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) as one example. AI tools are typically trained in closed environments, so creating a model which is designed to be balanced using a set of training data that is also balanced? Not easy.
- Instability: AI decisions can change drastically when the input is changed only slightly. I’ve noticed that with ChatGPT. It can be disastrous in some circumstances, such as an autonomous vehicle that fails to stop at a stop sign because of a small obstruction.
- Hallucinations in LLMs: I think we all know this by now – is there anyone on planet Earth who isn’t familiar with Mata v. Avianca? Apparently, at least a few aren’t because it keeps happening (and probably will keep happening).
- Unknown unknowns: This is my favorite one, and I would have put it last. We simply don’t know what we don’t know about how AI can act sometimes.
- Job loss and social inequalities: We’ve heard all the concerns about AI taking people’s jobs in pretty much every profession, and that the job losses may hit certain groups harder than others. In some of those professions, that’s a legitimate concern.
- Environmental impact: AI’s share of data centers’ power use worldwide is expected to grow to 10% by 2025. More sobering stats here.
- Industry concentration: Too much concentration of power by too few organizations, like Google, Meta, and Microsoft. Can the “little guy” with the big idea make a splash? It’s difficult, unless they align with a big player, like Microsoft, which was critical to the growth success of OpenAI.
- State overreach: Apparently, trends point in the direction of a greater use of AI and related tools to exert control over citizens by governments across the world. For example, at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including 51% of advanced democracies.
The article goes into a lot more depth on each AI risk, so check it out here. Not all twelve AI risks that are causing trust issues apply to every use case of AI, but many do. To maximize success, it’s important to have a plan to mitigate each one that applies to your use case(s).
So, what do you think? Which AI risks keep you up at night? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using Bing Image Creator Powered by DALL-E, using the term “robot juggling several balls”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.