The Promise and Limitations of Agentic AI: Artificial Intelligence Best Practices

Relativity has a new toolkit that educates legal professionals about the promise and limitations of agentic AI! Here’s what it discusses and includes.

The toolkit, titled (wait for it!) Understanding the Promise and Limitations of Agentic AI for Legal and available here, explores the emergence of agentic AI, defining it as advanced systems that can perceive, reason, and take action to achieve complex goals. Unlike standard software, these agents use large language models to independently plan multi-step tasks such as document review, case strategy, and data analysis.

The authors emphasize that human oversight is mandatory in legal practice to ensure ethical compliance, defensibility, and accuracy. To help practitioners adapt, the guide provides practical use cases and a safety spectrum for evaluating a tool’s level of autonomy. It ultimately encourages legal teams to develop an agentic mindset by testing these tools in safe environments while prioritizing data security and professional responsibility.


In addition to a one-page Foreword from Aron Ahmadia, Vice President of Applied Science at Relativity, the 32-page toolkit includes four sections:

  1. Explaining Key Terms (and Their Nuances): Discusses what Agentic AI is and how AI Agents differ from AI Workflows. It also includes a handy table representing the Levels of AI Agents, from none up to Fully Autonomous Agent. For the four levels of agentic control – from Fully Human Controlled to Fully Autonomous Agents – it provides a Visual Guide on AI Safety with examples of each and their fit for legal work.
  2. Considerations and Ethics for Legal Practitioners: Discusses the Responsible Practice of AI in Law, referencing key frameworks such as the ABA’s Formal Opinion 512, Sedona Principle 6 from The Sedona Principles, Rule 26(g) of the Federal Rules of Civil Procedure, ABA Model Rule 1.1 on Competence and more. It also provides a Green-Flag Checklist for Evaluating Legal AI Tools based on Transparency, Ethical Development, Validation & Accuracy, User Control, Support & Training and Security & Compliance.
  3. Practical Use Cases in Legal Practice: While briefly identifying a couple of AI agent examples in the broader world, this section discusses examples of Agentic AI for legal teams, including document review and case strategy, data intelligence and triage, and more. The “tool” provided here is the Spot the Agent Reflection Exercise – a table that enables you to identify examples of agentic AI; how each one perceives, thinks, and acts; and what it could look like in legal practice.
  4. How to Get Started: Discusses developing an agentic mindset: starting small with low-risk personal or administrative tasks to build “muscle memory” for agentic interaction; mapping the firm’s ecosystem by identifying rule-based tasks ripe for automation and understanding where repositories live and how systems connect; and establishing internal AI usage guidelines to deter “shadow AI” (unsanctioned use) and encourage responsible experimentation.

Each section includes a “tl;dr” (i.e., “too long; didn’t read”) page with summary bullet points of “What You Should Know” and “What You Should Do”, links to Recommended Resources and a tool that you can use. And while the toolkit references Relativity products (like aiR for Review), the focus is clearly educational, not promotional.

The overall takeaway on the promise and limitations of agentic AI is that, for legal, agentic AI does not represent an “easy button” or a “set-it-and-forget-it” solution. It’s a collaborative tool that requires strategic direction. By maintaining rigorous human oversight and validation, legal teams can leverage these systems to improve efficiency and client impact without compromising their ethical or professional obligations. In other words, you don’t have to be afraid of the agents if you use them appropriately!

So, what do you think? Are you concerned about applying AI agents to legal and eDiscovery tasks? Please share any comments you might have or if you’d like to know more about a particular topic.


Image created using Microsoft Designer, using the term “robot agents lined up ready to go to work”.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

