Here’s the kitchen sink for November 7, 2025 of ten stories that I didn’t get to this week – with another brand-new meme from Gates Dogfish!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! I’m a friend you can phone, just sayin’! P.S.: Bogus question! 🤣
Here is the kitchen sink for November 7, 2025 of ten stories that I didn’t get to this week, with a comment from me about each:
We’re up to 509 AI hallucination cases and counting! As I discussed in this post, here’s what’s causing all these AI hallucinations and how to fix it, IMHO.
Also, the 2H 2025 eDiscovery Business Confidence Survey, conducted by ComplexDiscovery and Rob Robinson, is in its last few days! Please consider participating here!
Which AI Model Is Actually Best?: I’ll bet that got your attention, didn’t it? 😉 Stephen Abram shares this article, which provides an infographic ranking the models on different tasks (certain to be out of date any day now). The latest data from Artificial Analysis benchmarks shows that: 1) GPT-5 leads in reasoning and agentic tasks, 2) Grok 4 dominates coding, and 3) DeepSeek & Qwen are closing the gap at 1/100 the cost.
Judge’s Order Against Use of LinkedIn for Research on Potential Jurors Led to Alston & Bird’s $10,000 Fine, Sanction for Violating Ban: This long title means you don’t have to read the article to find out what happened. 🤣 But if you do, you’ll find that it’s due to the firm’s failure to notify the jury consultant of the ban. Oops.
Order Prohibiting Upload of Confidential Discovery Documents to Artificial Intelligence (“AI”): Surprised we haven’t seen one of these before now. Thanks to Michael Berman’s post on the EDRM blog, now we have! 😊
OpenAI changes ChatGPT’s usage policy to preclude legal advice: As Caroline Hill notes on Legal IT Insider, “By warning users that they can’t use ChatGPT for legal advice, OpenAI is looking to limit its liability for when things go wrong. The reality is that in practice, ChatGPT is still undertaking legal activities.” Exactly.
How to actually use AI in a small business: 10 lessons from the trenches: Good article, with common-sense tips for small businesses as they approach using AI. I agree with most of them, with a caveat or two – tip #2 says “Experiment using the free AI chatbots”, but it depends on what you’re using them for (e.g., see Mike’s story two items up).
Generative AI might end up being worthless – and that could be a good thing: Interesting discussion about the current state of GenAI (e.g., current estimates suggest big AI firms face a US$800 billion revenue shortfall). Which (sadly) is why OpenAI is considering bringing ads to ChatGPT. Oh, and you get to learn what “enshittification” means.
AI Can’t Trump Thought Leadership: How Lawyers Can Keep Winning the Content Game: Personally, I would have chosen the word “supersede” over “trump”, but that’s just me. 😉 Nonetheless, this article is saying what I’ve been saying since the GenAI boom started. AI can’t provide “thought leadership” because someone else has to have said (or written) it first. I love this quote: “AI thrives on yesterday’s internet. Lawyers who are winning the content game write about tomorrow’s legal risk.”
Document Correlation: This case law post from Michael Berman on the EDRM blog gets into the nuances of the Rule 34(b)(2)(E)(i) & (ii) dispute. By that, I mean “produce documents as they are kept in the usual course of business or…organize and label them to correspond to the categories in the request” – E(i) – and “a party must produce [ESI] in a form or forms in which it is ordinarily maintained or in a reasonably usable form or forms” – E(ii). The issue is whether a TIFF production with metadata and Excel files produced natively satisfies the rules. Read his article to find out!
Beyond Public Cloud: The Enduring Case for Deployment Flexibility in eDiscovery: Good discussion in which Rob Robinson differentiates public cloud, private cloud, and on-premises deployment options and relates them to the deployment models currently available through eDiscovery providers.
Case studies in the unethical and irresponsible use of AI: This article discusses two of them. The first is failing to detect “botshit” in AI-generated outputs (which drives people “botshit” crazy 🤣). The second is breaching the privacy of vulnerable people – which happens when people enter confidential or private information into public AI platforms (e.g., see Mike’s story seven items up).
Louvre’s Surveillance Password Was Just … Louvre: That was for the video surveillance system, which probably helped the robbers get away (at least initially) with the jewels. No truth to the rumor that they changed it to “undeuxtroisquatrecinq”. 😉
Hope you enjoyed the kitchen sink for November 7, 2025! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.