This week’s kitchen sink for February 20, 2026 (with a meme from Gates Dogfish) discusses defensible AI in legal workflows, wooing chatbots, “death by GPS” & more!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! eDiscovery means you’re always skating on thin ice! 🤣
Here is the kitchen sink for February 20, with ten-ish stories that I didn’t get to this week and a comment from me about each:
We’re up to 961 AI hallucination cases and counting (including this one we recently covered)! As I discussed in this post, here’s what’s causing all these AI hallucinations and how to fix it, IMHO.
FutureLaw 2026 Preview: The Practical Path to Defensible AI in Legal Workflows: Rob Robinson probably had you with the title on this one. In it, he discusses things like the Data Sovereignty Imperative, the growing role of small language models, addressing the “Intelligence Ceiling” and 2026 deployment models. What more could you want? Excellent article. 😁
Navigating Legal and Compliance Risks When Corporations Expose Sensitive Data to AI: Very timely topic discussed by Kelly Twigger and John Patzakis on the Minerva26 blog. It discusses several grounded risk-management strategies that counsel should follow. Another excellent article!
Radio Host Alleges Google’s AI Podcast Voice Mimics Him: You know those podcast voices that NotebookLM uses to create those AI “podcasts” of your information sources? This guy says the male one is him and he’s suing. You can listen to one of his podcasts here and decide for yourself.
5 custom ChatGPT instructions I use to get better AI results – faster: Another article that probably had you with the title. I like the first one the best. You’re welcome! 😁
Chatbots Are the New Influencers Brands Must Woo: And by “woo”, you need to make sure they know about you and what you’re all about. For example, three years ago, ChatGPT had no idea who I was and hallucinated lies about me instead. This is what it says about me today. Awww, shucks, thanks ChatGPT! 😇
University Booted From AI Summit Over a Robotic Dog: Apparently, an Indian university was booted from a top AI summit in New Delhi on Wednesday after one of its staffers displayed a commercially available robotic dog made in China, claiming it was the university’s own innovation. And that’s no “shaggy dog story”.
Amazon Van Follows GPS Onto the ‘Doomway’ Path: AI hallucinations in case filings and “Death by GPS” are caused by the same thing: confirmation bias. Here’s where I discuss that link. P.S.: no actual death here, the Amazon driver just got stuck in the mud. 😌
“The Court is keenly interested in whether Defendants’ counsel issued a litigation hold.”: News flash – this is something you don’t want the Court saying about your discovery preservation efforts. Michael Berman breaks down why the Court said it here in this post on the EDRM blog.
Zuckerberg Defends Meta in Landmark Trial: This case, involving whether Meta and other social media companies have been responsible for underage users becoming addicted to their platforms, promises to have a significant impact on how they approach underage users (and perhaps all users) in the future. And if you use Meta glasses in court, expect to be held in contempt (as you should be). 😠
Decoding the A.I. Beliefs of Anthropic and Its C.E.O., Dario Amodei: This has become important as it has been reported that the US military (through a contractor) used Anthropic’s AI model Claude in the Venezuela raid where they captured Maduro. It’s not known (at least by me, but there could be a later report with more info) how it was used, but it may have violated Anthropic’s terms of use, which prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance. Is anybody surprised that AI models are being used for military purposes? I’m not. 😏
Something Big Is Happening — But Not What You Think: Ralph Losey has his own take on Matt Shumer’s viral essay on AI acceleration on the EDRM blog, which Craig Ball discussed (and I covered his discussion). Ralph agrees with Shumer on some things, not so much on others (or at least he feels that Shumer “exaggerates” certain points). I love the debate! 😁
Hope you enjoyed the kitchen sink for February 20, 2026! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin