This week’s kitchen sink for February 13, 2026 (with meme from Gates Dogfish) discusses agentic AI as the next big thing, loving eDiscovery, too many AI-generated articles & more!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! ¡Ay, caramba! 🤣
Here is the kitchen sink for February 13 of ten-ish stories that I didn’t get to this week, with a comment from me about each:
We’re up to 917 AI hallucination cases and counting (including this one we recently covered)! As I discussed in this post, here’s what’s causing all these AI hallucinations and how to fix it, IMHO.
AI Doesn’t Reduce Work—It Intensifies It: Hey, I’ve been saying that AI won’t reduce your workload for a while now! I’m glad HBR agrees – great minds think alike! 😉
A.I. Is Making Doctors Answer a Question: What Are They Really Good For?: AI bots are good at finding ways to talk to patients, good at reading scans and images, good at diagnosing, and good at answering patient questions in portals and writing appeals to insurance companies when a medication or procedure is denied. Until they lead to botched surgeries and misidentified body parts. Whoops. 😩
The Rise of AI Agents: The Next Big Thing in eDiscovery: Well, it’s already happening, but still. This article discusses what makes agentic AI different, where AI agents can be deployed, risks and considerations, and more. And this article doesn’t even get into how it can impact doc review.
What Agentic AI Actually Means for Lawyers’ Daily Workflows: If it’s the next big thing, they should probably know that, right? This article discusses several important considerations about how it can be applied, cutting through the marketing speak, questions to ask, etc.
There Must Be 50 Ways to Lose Your License (with AI): “Paste in that brief, Keith; Upload client files, Miles; Don’t fret that AI use, Bruce; Just cite ChatGPT!” Clever way to discuss lawyer AI risks in this LinkedIn article with a play on Paul Simon’s 50 ways to leave your lover. The author? Michael Simon. Coincidence? I think not! 😉
Musings from eDiscovery Industry Leaders for Valentine’s Day: Amy McWilliams rounds up several industry thought leaders (including me) and asks them: “What do you *love* about eDiscovery?” The end result (published on the EDRM blog) is one eDiscovery lovers will love! ❤️
2026 AI Safety Report Flags Escalating Threats for Cyber, IG, and eDiscovery Professionals: Rob Robinson discusses key findings in the 2026 International AI Safety Report, and what it means for InfoGov and eDiscovery. As usual, very informative and thought-provoking.
Why ‘deleted’ doesn’t mean gone: How police recovered Nancy Guthrie’s Nest Doorbell footage: Surely you know by now that “deleted” doesn’t necessarily mean gone, right? With Google’s help, investigators were able to recover video footage from her Google Nest Doorbell, clearly showing the masked suspect. It took a while because she didn’t have a subscription to store the videos in the cloud. Nonetheless, let’s hope it leads to capturing the suspect AND her safe return. Prayers.
OpenAI researcher quits over ChatGPT ads, warns of “Facebook” path: Don’t be like Facebook! 🤣 She resigned from the company the same day OpenAI began testing advertisements inside ChatGPT. Her concern is that users have shared medical fears, relationship problems, and religious beliefs with the chatbot, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.” Which (sadly) makes it worth a huge fortune.
AI-Generated Text and the Detection Arms Race: The science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by AI – in 2023! AI is writing so much of our content these days – and often not very well hidden. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. Given my experience with AI detection, consider me skeptical. 🤔
Google says hackers are abusing Gemini AI at all attack stages: Bad actors from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia are using Gemini for target profiling, open-source intelligence, generating phishing lures, translating text, coding, vulnerability testing, and troubleshooting. Cybercriminals are also showing increased interest in AI tools and services that could help in illegal activities, such as ClickFix social engineering campaigns. Sometimes, I feel like the cybercriminals are leveraging AI better than we are! 😩
Hope you enjoyed the kitchen sink for February 13, 2026! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin