The Kitchen Sink for August 22, 2025: Legal Tech Trends

Here’s the kitchen sink for August 22, 2025: ten stories that I didn’t get to this week, with another brand-new meme from Gates Dogfish!

Why “the kitchen sink”? Find out here! 🙂

The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! Aaron tells me that this meme was one of Kaylee’s favorites. ❤️ And, happy third anniversary to Gates Dogfish! 🎉

Here is the kitchen sink for August 22, 2025: ten-ish stories that I didn’t get to this week, with a comment from me about each:

We’re up to 308 AI hallucination cases and counting! As I discussed in this post, there’s a site that is tracking AI hallucination cases, so I am showing an updated total weekly here.

Three Depositions Reopened to Address After-Produced Documents – Fed.R.Civ.P. 30(d)(1): Michael Berman’s case of the week, published on the EDRM blog. Defendants made a last-minute production of documents after removing the privilege designation from them, then argued the documents did not include “sufficient relevant information, not previously known to Plaintiffs, to warrant reopening” the depositions; the Court disagreed.

It’s a New Dawn In Legal Tech: From Woodstock to ILTACON (And Beyond): When Bob Ambrogi writes a recap of ILTACON, I want to cover it. When he ties it into Woodstock and Grace Slick of Jefferson Airplane, even better. When Bob feels that we are “experiencing a new dawn in legal technology”, that means a lot.

At What Cost? The Risks of Using Free AI Tools at Law Firms: We can’t hammer home enough the potential pitfalls of law firms using public AI tools (especially free tools), which include potential issues like data privacy and client confidentiality violations, non-compliance with regulations, limited support and more. Consider it hammered home once more, thanks to Kraft Kennedy.

How ChatGPT saved me time troubleshooting 3 annoying tech support issues: Then again, public AI tools can be useful for some things, as this article points out, walking us through each of those issues. Knowing where to draw the line is the key.

Reasonable or Overreach? Rethinking Sanctions for AI Hallucinations in Legal Filings: Considering AI hallucination cases jumped by 33 this week, this article from Hon. Ralph Artigliere (ret.) and Professor Bill Hamilton on the EDRM blog recaps one of those cases (Johnson v. Dunn), discusses the disparity in how sanctions have been applied across different cases, and recommends a practical four-pillar test judges can apply to determine the appropriate level of sanctions. A great read.

Apple Prevails in UK Encryption Battle: Official Withdrawal of Backdoor Demand After U.S. Diplomatic Pressure: Interesting article from Rob Robinson about the UK’s attempt to require Apple to give law enforcement access to encrypted iCloud data worldwide, and the successful US efforts to get the UK to back off that request. This is a debate I’ve been covering for over nine years, dating back to my previous blog, when the DOJ was trying to obtain access to the iPhone of one of the San Bernardino shooters. Apple’s position has remained the same throughout.

Thinking About Boilerplate Objections: We have not one, but two, articles this week from Professor Bill Hamilton of the University of Florida! Here, Bill recaps this case we covered a while back and issues a call to action to better educate and “encourage and train law students and attorneys to engage in meaningful internal dialogue”. Couldn’t agree more.

Scammers have infiltrated Google’s AI responses – how to spot them: Don’t take at face value the phone number included in a Google AI overview – one CEO of a prominent real estate firm learned that the hard way when he was scammed “with what looked like a legit phone number for Royal Caribbean I found on Google”. This article provides five tips for avoiding the same fate.

OpenAI says GPT-6 is coming and it’ll be better than GPT-5 (obviously): Apparently, that’s not a very high bar! 🤣 Remember when OpenAI was a “research company”? Now they’re just a product peddler like everyone else.

AI slop and the destruction of knowledge: When ScienceDirect uses AI to generate definitions of scientific terms that don’t reflect the actual meaning of those terms in the eyes of cognitive scientists, that’s just the latest example of AI “slop”: the infiltration of inaccurate AI content into information resources. It’s everywhere.

Why 95% of Corporate AI Projects Fail: Lessons from MIT’s 2025 Study: Rob Robinson dives into the MIT report (available here) titled The GenAI Divide: State of AI in Business 2025, which says only five percent of AI initiatives are producing measurable returns, sharing what it means to us and why “Human factors—such as skills gaps, workforce resistance, and cultural barriers—compound the challenge.”

Hope you enjoyed the kitchen sink for August 22, 2025! Back next week with another edition!

So, what do you think? Which story is your favorite? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

