The Kitchen Sink for May 9, 2025: Legal Tech Trends

Here’s the kitchen sink for May 9, 2025, with ten stories that I didn’t get to this week – and another brand-new meme from Gates Dogfish!

Why “the kitchen sink”? Find out here! 🙂

The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! Hyperlinked file missing? That’s not even fairly odd! 🤣

Here is the kitchen sink for May 9, 2025, with ten-ish stories that I didn’t get to this week and a comment from me about each:

Today’s General Counsel Special Edition: Legal Operations: Didn’t get enough of legal ops at this week’s CLOC conference? Today’s General Counsel’s May 2025 edition focuses on legal ops with an interview of Adam Becker, who is a board member of CLOC, the Corporate Legal Operations Consortium, and the Director of Legal Operations at database technology company Cockroach Labs.

Chilling moment humanoid robot wakes up and starts attacking its handlers while trying to break free from restraints: Hat tip to my beautiful wife Paige for this one. “I, for one, welcome our robot overlords” – as long as they are tied to a crane! 🤣

OpenAI scraps controversial plan to become for-profit after mounting pressure: Never a dull moment for the makers of ChatGPT! I guess there goes the employee stock option plan! 😁

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse: Apparently, the new reasoning systems are more unreasonable. See what I did there? 😉 “On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.” Oy.

The eDiscovery Data Collection Survival Guide (Day 1): He’s more than just a meme guy! Aaron Patton kicks off a five-part series on the EDRM blog with five of the most common mistakes he has seen in the eDiscovery trenches, as well as how to avoid them. This first one involves “Expecting IT to Welcome You with Open Arms”! Looking forward to the others!

Are you really doing enough to detect “botshit” in your AI-generated content?: OK, I’ll admit that I was sucked in by the title on this one. I guess hallucinations are enough to make people go “botshit” crazy! 🤣 Check that content – including any linked sources if AI has pulled from the web, as the linked source doesn’t always say what the model claims it says.

1 in 3 workers keep AI use a secret: My mind was blown 🤯 this week at CLOC when someone at a major corporation said they have a “no AI policy” (not even Grammarly is allowed). Stats like these convince me that every organization has people using AI (whether they allow it or not).

The Rademaker CLOC Keynote: GenAI Is Like Teenage Sex. Wait, What?: Stephen Embry recaps the opening keynote by Nancy Rademaker at CLOC. One of her quotes (which I’ve since discovered isn’t new but was new to me) is that “AI is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” 🤣

The LockBit Breach: Unmasking the Underworld of Ransomware Operations: Rob Robinson covers this breach where – for once – it’s the cybercriminals that were breached. Ha! 😁

Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI: There are twenty-two image-only cards in a game invented in northern Italy around 1450, the 78-card “Trionfi” pack, that warn of life’s risks and dangers. Only Ralph Losey could figure out how to equate those to 22 dangers of AI, as he does in this EDRM blog post.

A judge accepted an AI video testimony from a dead man: A lot of people reported on this or sent this one to me, including my beautiful wife Paige, who was the first to do it. I selected this article from Eric De Grasse on Project Counsel Media (republished from 404 Media). The AI video was a victim impact statement from the victim himself, who was shot to death in a road rage incident. As the article notes: 1) it was possible in Arizona because they have a victim’s bill of rights to choose how they want to give a victim impact statement, 2) the AI avatar of the victim spoke of forgiveness which moved the Court, and 3) the victim’s sister still gave her own victim impact statement where she asked for the maximum sentence, which was granted. I’m not sure how I feel about AI avatars of victims being used to make victim impact statements in general – it seems like there is a lot of potential for problems with it. But it sounds like it was handled well in this case, at least.

Hope you enjoyed the kitchen sink for May 9, 2025! Back next week with another edition!

So, what do you think? Which story is your favorite? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
