Protective Order Against Using Public GenAI to Review Produced Documents: eDiscovery Trends

Still unpacking all the insights from Legalweek. Here’s one for you: get a protective order against using public GenAI to review produced documents.

That was a recommendation I heard at least twice this week. The first was from David Cohen, Partner at Reed Smith, who was part of a terrific panel on Tuesday in the Relativity-sponsored session titled Generative AI: Judges, Lawyers, and Technologists on Legal Ethics and Practical Guidance. Dave’s fellow panelists were Justice Tanya R. Kennedy of the Supreme Court of the State of New York; Texas District Judge Xavier Rodriguez; Linda Sheehan, Head of intelligENS at ENS in South Africa; and moderator David Horrigan, Discovery Counsel & Legal Education Director at Relativity (side note: Judge Andrew Peck was in the audience and provided comments on the topics as well).

Dave’s closing recommendation related to documents produced by his client and the concern that the opposing party would load them into the public version of ChatGPT or some other model to analyze them – which, of course, has serious data privacy implications. So, his recommendation was to get a protective order against using GenAI on those produced documents.

The topic also came up during a roundtable luncheon on Wednesday with Level Legal (which always conducts terrific roundtable discussions at events like this), when the moderator at our table, Daniel Bonner, Director of Client Solutions at Level Legal, asked the legal professionals seated there: “What keeps you up at night?”

One topic, raised by Olga Friedman of Latham & Watkins, was the potential for receiving parties to load clients’ produced documents into a public GenAI tool and the resulting need to pursue a protective order to prevent that from happening. It’s one consideration in our new GenAI world that isn’t being discussed enough, but it could have significant ramifications for a law firm’s duty to protect its client’s data (even when that data is not within the firm’s direct control).

This tip may be obvious to some, but I’ll bet a lot of lawyers for producing parties haven’t thought about doing it. They should.

More to come about Legalweek next week! 😊

So, what do you think? Are you getting a protective order against using GenAI to review produced documents? You should! Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.


3 comments

  1. Thanks Doug. I don’t think it is fair to say that you should get a “Protective Order Against Using GenAI To Review Produced Documents.” What we should say is that you should get a Protective Order Against Using PUBLIC GenAI to review documents… We should absolutely be using GenAI tools and models to make our legal work more efficient and better, but we should require that the receiving party do that in a responsible way by using GenAI tools and models that are private and secure. Ones that meet or exceed the security requirements that we should already be asking receiving parties to adhere to. I would not recommend to any client that they agree to a protective order that has a blanket “No GenAI” requirement.

  2. Good point, Chris, and an important omission on my part. Guess that’s what happens when you write a post in a hurry on a plane heading home! 😉

    There’s no problem with using a private and secure GenAI tool to review productions – one that isn’t putting your client’s data into a public training tool. I did mean PUBLIC GenAI models like the public version of ChatGPT, not ANY GenAI model.
