This week’s kitchen sink for May 1, 2026 (with meme from Gates Dogfish) discusses top university websites serving porn, Musk vs. “Scam” Altman & more!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! I know a bit about AI detector quality! 🤣
Here is the kitchen sink for May 1 – ten-ish stories that I didn’t get to this week, with a comment from me about each:
We’re up to 1,369 AI hallucination cases and counting. But hallucinations are different for eDiscovery solutions. Here’s why.
Note: Rob Robinson has launched his 1H 2026 eDiscovery Business Confidence Survey, with more AI and business-related questions! Consider taking the survey here – it’s a terrific barometer on eDiscovery business trends!
The router on the shelf is now a national security problem: Speaking of Rob, his article here discusses how ordinary consumer-grade routers and small office/home office (SOHO) devices – long treated as low-priority IT assets – have become a serious national security and enterprise risk due to their growing role in large-scale cyber operations. He gets into the specific concerns and challenges and what they mean for us.
Why are top university websites serving porn? It comes down to shoddy housekeeping: This week’s sign of the apocalypse? Maybe. A researcher found the official domains for the University of California, Berkeley, Columbia University, and Washington University in St. Louis are serving explicit porn and malicious content after scammers exploited the shoddy record-keeping of the site administrators.
Deepfake Voice Attacks are Outpacing Defenses: What Security Leaders Should Know: A routine Zoom call with a senior leadership team with executives appearing on screen turned out to be deepfaked, leading to a $499,000 transfer before anyone flagged the fraud. Another company saw $25.6 million stolen in a similar attack. Here’s how inexpensive it is to pull off this scam and what you can do about it.
Florida murder suspect allegedly asked ChatGPT about putting body in trash bag, dumpster: ChatGPT is developing quite a rap sheet as an alleged accomplice to murder – at least from some perspectives. Here, a guy was charged with murder in the deaths of his roommate and the roommate’s girlfriend. He allegedly asked ChatGPT “What happens if a human has a put in a black garbage bag and thrown in a dumpster” three days before the victims were last seen. Other questions he allegedly asked ChatGPT include whether one can change a car’s VIN number and whether one can keep a gun at home without a license. Oy.
Robinhood account creation flaw abused to send phishing emails: To target Robinhood customers, attackers likely used lists of known customer email addresses from previous data breaches – potentially as many as 7 million customers, way more than the real Robin Hood stole from in the days of yore. 😉
AI models refused harmful requests until researchers hid them in fiction and theology: A study suggests that some of the world’s most advanced language models still struggle to recognize malicious intent when users disguise it as fiction, theology, symbolic analysis, or bureaucratic prose. When prompts designed to solicit dangerous information were posed directly, the attack success rate was “only” 3.84%. But once the prompts were transformed, the attack success rate ranged from 36.8% to 65.0%, with an overall average of 55.75%. Honestly, even the 3.84% number has me worried.
No Right to a “Hit Report” for Facially Overbroad Search Terms?: This week’s case from Michael Berman on the EDRM blog (other than the one I covered yesterday, which he also covered) discusses a case where the court rejected a bright-line right to “hit reports”. Like Mike, I’m somewhat baffled by the ruling – generating hit reports isn’t burdensome.
Taylor Swift files for AI protections through trademarks: I would link to a video saying “alright, alright, alright”, but Matthew McConaughey trademarked that too. 🤣
Musk vs Scam Altman: How a $1 Billion Charity Became a $134 Billion War: Terrific discussion of how we got to this point with the litigation between Elon and OpenAI. As the author notes, whether Musk or OpenAI wins, “Both outcomes have uncomfortable implications.”
eDiscovery Impact of Advanced Indexing in M365: Terrific article by Greg Buckles on eDiscovery Journal on what Advanced Indexing in M365 is, what triggers it, strategic workflow considerations, and hidden gotchas.
From Training to Execution: Embedded Safeguards for Responsible AI Use in Legal Practice: Judge Ralph Artigliere (ret.) argues on the EDRM blog that traditional training alone is insufficient to ensure responsible AI use in legal practice. Real-world pressures (such as fatigue, time constraints, novelty of tasks, etc.) often lead to inconsistent execution. Ralph says: “The durable answer is to move the guardrails into the workflow itself, so that verification, confidentiality checks, and bias flags surface at the point of action rather than relying on memory alone.” Couldn’t agree more.
Hope you enjoyed the kitchen sink for May 1, 2026! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin
Doug-
Thanks for highlighting my EDRM article on training and embedded guardrails for AI use by legal professionals. I consider this to be an extremely important issue.
Ralph Artigliere
Terrific article, Judge Artigliere! Couldn’t agree more!