The Kitchen Sink for August 29, 2025: Legal Tech Trends

Here’s the kitchen sink for August 29, 2025 of ten-ish stories that I didn’t get to this week – with another brand-new meme from Gates Dogfish!

Why “the kitchen sink”? Find out here! 🙂

The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! eDiscovery can be dangerous without taking the proper safety precautions! 🤣


Here is the kitchen sink for August 29, 2025 of ten-ish stories that I didn’t get to this week, with a comment from me about each:

We’re up to 316 AI hallucination cases and counting! As I discussed in this post, there’s a site that is tracking AI hallucination cases, so I am showing an updated total weekly here.

Half-Baked Motion to Compel Was Not Prompt, Not Ripe, Not Complete, and Not Likely to Succeed: Other than that, it was great! 🤣 Seriously, though, how can a motion be not prompt and not ripe at the same time? Check out Michael Berman’s case of the week, published on the EDRM blog, for the answer.

Digital Justice or Divide? Legal Tech Faces Tariff Headwinds: How could the tariffs affect cybersecurity infrastructure, information governance frameworks, and eDiscovery operations? As Rob Robinson discusses, 50 percent tariffs on semiconductors, which could rise to as much as 300 percent, could have a ripple effect on the pricing models of major legal technology providers.


Elon Musk’s xAI Sues Apple and OpenAI Over Claims It Is Being Shut Out: Don’t you hate it when an app provider messes with the algorithm to enable certain content to rise to the top? Said every X user who keeps finding Elon Musk’s unwanted X posts at the top of their feeds. 😉

YouTube secretly tested AI video enhancement without notifying creators: The latest example of the “forgiveness, not permission” attitude of big tech – applying AI to artificially enhance videos in a “test” conducted by Google without notifying the owners of the videos.

Gartner says add AI agents ASAP – or else. Oh, and they’re also overhyped: Gartner says 40% of enterprise applications “will feature task-specific AI agents by 2026, up from less than 5% in 2025.” Gartner also said that AI agents are at the Peak of Inflated Expectations and headed for the Trough of Disillusionment next. Alrighty then! 😁

The Digital Fortress Under Siege: How Today’s Cyber Threats Are Rewriting the Rules of Corporate Defense: As Rob Robinson puts it, “a chilling reality is becoming impossible to ignore: the very technologies that drive modern business success have become the primary vectors for corporate destruction.” He discusses several incidents to illustrate his point, as well as AI’s emerging role in cyber threats.

OpenAI admits ChatGPT safeguards fail during extended conversations: OpenAI published a blog post on Tuesday titled “Helping people when they need it most” that addresses how ChatGPT’s safety measures may completely break down during extended conversations. This comes after OpenAI was sued because a 16-year-old boy died by suicide after extensive interactions with ChatGPT, a story covered extensively here. Sad story.

Google’s AI Mode is getting more links for you not to click on: Google is tinkering with methods to display more links in AI Mode search queries to address concerns that AI Mode is causing a steep drop in traffic to the sites from which the information in the AI summary is derived.

AI adoption linked to 13% decline in jobs for young U.S. workers, Stanford study reveals: Specifically, entry-level jobs in customer service, accounting and software development have seen a 13% decline in employment since 2022. But employment for more experienced workers in the same fields has stayed steady or grown. Hmmm.

A hacker has used AI to automate an “unprecedented” cybercrime spree: Another example that illustrates Rob Robinson’s point a few stories up. As Project Counsel Media notes, Anthropic said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies. Anthropic also announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users, but apparently, malicious websites can embed invisible commands that AI agents will follow blindly. Even with “safety measures” implemented by Anthropic, their tests found an attack success rate of 11.2 percent in autonomous mode. Ruh-roh!

L.A. Woman Loses Life Savings After Scammers Use AI to Pose as General Hospital Star, Says Family: Sad story of a woman who was conned into selling her condo and giving the money to scammers, in a scheme that included deepfake videos of a soap opera star. Perhaps the darkest side of AI.

Hope you enjoyed the kitchen sink for August 29, 2025! Back next week with another edition!

So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
