This week’s kitchen sink for April 10, 2026 (with meme from Gates Dogfish) discusses AI bot depression, deleted Signal message recovery & more!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! Do as you’ve been trained to do!* 🤣
Here is the kitchen sink for April 10 of ten-ish stories that I didn’t get to this week, with a comment from me about each:
We’re up to 1,294 AI hallucination cases and counting. But only 487 of them involve lawyers. 😁
Google AI bots can get locked in a ‘depressive spiral’ if they are repeatedly told they are wrong: AI models – they’re just like us! 😉 The company’s Gemini and Gemma models, which assist users with daily tasks, can fall into a ‘depressive’ spiral if they get answers wrong or fail to complete tasks when prompted. The chatbots even go so far as to abandon routines and delete work, according to a study conducted by Imperial College London and AI company Anthropic. I guess when your competitor says you’re wrong, AI models haven’t yet learned to take that with a “grain of salt”. 😁
The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures: Rob Robinson notes that courts across the United States imposed at least $145,000 in sanctions for AI-generated fake citations during the first quarter of 2026 alone, according to tracking data compiled by researchers monitoring judicial responses to generative AI failures. And it’s really serving as a deterrent – we’ve only had 432 cases involving them since the start of the year. 🤣
Deepfakes And The Future Of Litigation: Are We Ready?: No. 😉 Seriously, though, Stephen Embry discusses how courtroom lawyers must be better prepared to offer stronger proof of authenticity, given the rise of deepfakes and the liar’s dividend causing people to claim legitimate evidence is a deepfake (which happened in this case we’ll be discussing on Tuesday).
How Accurate Are Google’s A.I. Overviews?: Is Hulk Hogan really dead? Google’s AI Overview says “no”, but displays articles underneath that discuss his death. A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. That still means they provide tens of millions of erroneous answers every hour, including once saying that Lady Gaga is two days older than Ariana Grande – even though their birth dates are more than seven years apart. I blame the depression. 😉
Hallucination or Old-Fashioned Error? It Doesn’t Matter: Michael Berman covers this case on the EDRM blog where defendant’s counsel adamantly denied that AI was responsible for “two incorrect case names, two incorrect case citations, and improper direct quotations from one of those incorrect case citations”. So, the Court said (I’m paraphrasing) “fine, tell me how it happened and who was responsible.” I’m sure their explanation will be a doozy. 🤣
Anthropic’s Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems: Anthropic says its new cybersecurity initiative, Project Glasswing, which uses a preview version of its new frontier model (Claude Mythos), has already discovered thousands of high-severity zero-day vulnerabilities in every major operating system and web browser. One of those bugs was 27 years old! We should get this on the market ASAP!
Behind the Curtain: AI’s scary phase: Or not. 😉 The model demonstrated a “potentially dangerous capability for circumventing our safeguards,” Anthropic revealed. “The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park.” Just what do you think you’re doing, Dave?
Scoop: Meta removes ads for social media addiction litigation: Two weeks after Meta and YouTube were found negligent in a landmark California case about social media addiction, Meta began removing advertisements from attorneys seeking clients who claim to have been harmed by social media while under the age of 18. A Meta spokesperson told Axios: “We will not allow trial lawyers to profit from our platforms while simultaneously claiming they are harmful.” Duh! Hat tip to Project Counsel Media for the heads up on this one.
AI, Work Product, and the Protective Order Problem: What Morgan v. V2X, Inc. Means for Every Litigator: Kelly Twigger discusses this ruling, which is “the most comprehensive framework on AI and work product protection in federal court yet, and the protective order standard it set will follow every litigator using AI, not just those with pro se opponents.” Sounds like another great case for us to discuss on Tuesday!
FBI recovers deleted Signal messages from iPhone notification database: Hat tip to Debbie Reynolds for the heads up on this one. Guess what? If you have Signal auto-delete messages and even delete the app, messages can still be recovered in iOS’s push notification cache (assuming you turned on notifications for the Signal app at the time). Oopsie! 🤣
Hope you enjoyed the kitchen sink for April 10, 2026! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
*But forget about the Malaysian Prime Minister 🤣
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.