Yesterday, I covered Craig Ball’s post about adapting RFPs for generative AI, which brought up another topic: AI generated or AI assisted?
Craig’s post (Adapting Requests for Production for AI GLLM Assessment, available here on his excellent Ball in Your Court blog) provides a terrific walk-through of how legal professionals should rethink their approach to requests for production (RFPs) to take advantage of generative AI when creating those requests.
The other thing that caught my eye was what Craig said at the end of the post: “This essay is unlike anything I’ve ever posted here.” He explains what he means by that here, referencing a post at Rob Robinson’s excellent ComplexDiscovery site:
“The post at Complex Discovery carried the byline ‘ComplexDiscovery Staff,’ an amorphous attribution that, to my suspicious mind, smacks of ‘AI-generated.’ So, I should tell you that the essay above was the work of ‘Ball in your Court Staff,’ meaning me prompting ChatGPT and then reading the result to see if I can stand behind it. It felt a bit like cheating, hence this confessional postscript. I used the following prompt: “Write an essay of under approximately 1,000 words describing specific ways to adapt requests for production in discovery to incorporate the salient elements of effective prompts when AI GLLMs are used to assess document collections for relevance and responsiveness. Supply examples of tweaks to common requests in business disputes and tort claims to be better adapted to this usage.” What emerged isn’t bad; tepid to be sure, not my voice and unlikely to be embraced by counsel ever-apprehensive of framing a request too narrowly. Still, it faithfully fleshes out the idea and–hopefully–gets you thinking about how to take better control of the delegation of your requests to our robot overlords.”
As I noted yesterday, we’re seeing a lot more content out there that is AI generated. And AI models are going to be trained more and more on content that is generated by AI, which leads to “degenerative AI” and content that can eventually deteriorate in quality. The article I covered yesterday provides several examples, some of which I included in my write-up.
However, here’s an important question to ask: is it AI generated or AI assisted? There is a difference. And if it’s assisted, by whom and how?
When a terrific writer like Craig Ball issues a “confessional postscript” saying he used AI to help generate a blog post, are we being taken over by “our robot overlords” and contributing to the problem of “degenerative AI”? Not necessarily. Why? Because of curation.
An expert like Craig or Rob (who has been publishing AI assisted articles for months) doesn’t just let the AI generate content and publish it sight unseen. Unlike the “thousand websites that churn out error-prone A.I.-generated news articles” that NewsGuard found, they review the content for accuracy and ensure that it makes sense. There is curation from an expert who knows the subject matter they are covering. Their content is AI assisted, even if the AI generates much of the content.
The other thing that matters is the disclosure of the use of AI. Craig did it in this post. Rob’s articles that leverage generative AI always have a disclosure of “Assisted by GAI and LLM Technologies” at the bottom of each article (not to mention that ComplexDiscovery has a stated and formal Generative Artificial Intelligence and Large Language Model Policy that discusses its use of generative AI). There’s no ambiguity about whether the content was AI assisted.
I don’t have a formal genAI and LLM policy (I probably should add one), but what I have tried to do is disclose when I used generative AI to assist with a post. Examples are here, here and here (I’m sure there are others), and they’ve mostly been experiments to see what the model would generate in response to a particular topic. Regardless, I didn’t just dump the content out there and claim it as my own. I discussed and disclosed the use of generative AI in creating the post, just like Craig and Rob did. Even if they use the AI content as is, they wouldn’t publish it unless it met their standards. Neither would I. That’s still an assist (if you’re scoring at home). 😉
AI generated or AI assisted? The difference is curation and disclosure. There’s nothing wrong with using AI to assist in generating content, as long as a qualified expert is ensuring its quality and the use of AI is disclosed.
So, what do you think? Do you differentiate between AI generated or AI assisted? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using GPT-4o’s Image Creator Powered by DALL-E, using the term “robot writer with scales of measurement with one side saying ‘Generated’ and the other side saying ‘Assisted’”. FYI, the image was “assisted” as I had to tweak it! 😀
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.