Agentic AI

Agentic AI is Coming. Are We Ready For It?: Artificial Intelligence Best Practices

There’s growing discussion about “agentic AI”. What is it, and are we ready for it? Two legal AI experts discuss its potential and its pitfalls.

In The National Law Review (The Next Generation of AI: Here Come the Agents!, written by Tara S. Emory and Maura R. Grossman, J.D., Ph.D., as an article from The Sedona Conference® and available here), the authors make their concerns about agentic AI apparent right at the start with dialogue from the famous movie 2001: A Space Odyssey, where Dave Bowman asks HAL 9000 to “Open the pod bay doors, HAL,” to which HAL responds: “I’m sorry, Dave. I’m afraid I can’t do that.”*

So, what is agentic AI?

Agentic AI, also known as Large Action Models (LAMs), represents the next generation of artificial intelligence. Unlike current AI systems that perform single, defined tasks, agentic AI systems operate autonomously to achieve high-level objectives, making cascading decisions and taking real-world actions.

Agentic AI can interact with various AI systems and vast datasets, independently executing complex tasks. For example, while a current AI might generate a vacation itinerary, an agentic AI would book the flights, hotels, and excursions.

Great, right? Well…

As Emory and Grossman point out, “Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the ‘real world’ based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a ‘domino effect’ of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective.”

You think that’s being dramatic?

Well, as the authors note: “In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.”

Sounds very “HAL-like”, doesn’t it?

They also note that “while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.”

Yeesh! Had enough?

Well, Emory and Grossman proceed to identify several specific challenges of agentic AI. I won’t steal their thunder by listing them all, but here’s one of the challenges they identify:

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard, this would not resolve the issue of complying with governance requirements for human oversight.

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans. Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

While Emory and Grossman discuss some strategies for addressing agentic AI, they also note that “These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks.”

Couldn’t agree more. I’m not sure we’re ready for AI agents to perform most actions without human oversight. With great power comes great responsibility.**

You can check out their article here.

So, what do you think? Do you think the time is right for agentic AI? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

*I can’t believe that movie is almost 57 years old! 😮

**Uncle Ben may get credit for that saying, but Voltaire said it first. Sacré bleu! 😉

Image Copyright © Metro-Goldwyn-Mayer (MGM)

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.


Discover more from eDiscovery Today by Doug Austin