Here’s the kitchen sink for November 21, 2025: ten stories that I didn’t get to this week – plus another brand-new meme from Gates Dogfish!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! I think I covered that case! 🤣
Here is the kitchen sink for November 21, 2025: ten stories that I didn’t get to this week, with a comment from me about each:
We’re up to 575 AI hallucination cases and counting! As I discussed in this post, here’s what’s causing all these AI hallucinations and how to fix it, IMHO.
Also, eDiscovery Day is happening on Thursday, December 4th! For information on events for eDiscovery Day (including this webinar sponsored by eDiscovery Today), click here! And if you’re in the Houston area, consider joining the ACEDS Houston chapter for education and networking here!
The New York Times-OpenAI Legal Fight Is Getting Mean: “Getting”? Um, it’s been that way since the beginning. I said yesterday in my session at Georgetown that this case is a daily eDiscovery blogger’s dream: the latest is that OpenAI claims to be “one of the most targeted organizations in the world” and “Fighting the New York Times’ invasion of user privacy.” In a podcast, Sam Altman said: “Are we gonna talk about where you sue us because you don’t like user privacy?” I’m quite sure OpenAI’s objections are all about user privacy. 😉
Leaked documents shed light into how much OpenAI pays Microsoft: Then again, maybe OpenAI should pay more attention to their own privacy? Click on the article if you want to know the numbers.
From Arizona to California: TLTF Summit Panel Explores ABS Impact: Interesting article from Rob Robinson on ComplexDiscovery on a session at the TLTF Summit that explored the evolution and national impact of Arizona’s alternative business structure (ABS) model. Not surprisingly, the ABS model is colliding with fierce resistance across state lines.
OpenAI named Emerging Leader in Generative AI: “Duh!”, you say. Well, now Gartner says it, so it must be true. 😉 As in OpenAI is an Emerging Leader in their new Magic Quadrant for Generative AI Model Providers. So are Google, Microsoft, Anthropic and six others. Notably, Meta lands only in the Emerging Challengers section. Even more notable is Grok, which isn’t in the quadrant at all.
“Just When You Thought It Was Safe to Go Back Into the Water,” A.I. Hallucinates Metadata: Michael Berman discusses a Law360 article on the EDRM blog which states: “When AI generates a document, it may quietly populate or modify hidden fields that are embedded in the document — called metadata — with fictitious or misleading information. These AI-generated hallucinations are just as dangerous, if not more so, as errors in the body of documents, because they are overlooked by most users, appear authentic, and can have significant implications in discovery, authentication and privilege disputes.”
Google CEO: If an AI bubble pops, no one is getting out clean: This isn’t just anybody who warned of “irrationality” in the AI market – it’s Google CEO Sundar Pichai. Ruh-roh!
Lighting the Digital Path: How eMentorship Builds Real Connection in a Virtual World: Terrific article from Sheila Grela on the EDRM blog about “eMentorship”, including how “Mentorship does not have to be formal to be real” and how the relationship between mentor and mentee is reciprocal. Couldn’t agree more. With shout outs to Mary Mack and Kaylee Walstad’s discussion on Relativity’s Stellar Women in eDiscovery podcast.
Critics scoff after Microsoft warns AI feature can infect machines and pilfer data: Microsoft warned us on Tuesday that Copilot Actions – a new set of “experimental agentic features” integrated into Windows that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails” – can infect devices and pilfer sensitive user data. Oh, is that “frowned upon”? 🤣
The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run): Bubble article number two this week. This article says the “intense excitement around artificial intelligence” is “a clear echo of the dot-com bubble”. Ruh-roh.
“We’re in an LLM bubble,” Hugging Face CEO says—but not an AI one: And here’s bubble article number three. Clem Delangue, CEO of machine-learning resources hub Hugging Face, has made the case that the bubble is specific to large language models, which are just one application of AI. Somehow, that doesn’t make me feel any better. Ruh-roh.
Hope you enjoyed the kitchen sink for November 21, 2025! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.