The Kitchen Sink for March 27, 2026: Legal Tech Trends

This week’s kitchen sink for March 27, 2026 (with a meme from Gates Dogfish) discusses Meta’s Walter Kornbluth-like week, the privilege status of legal hold notices, and more!

Why “the kitchen sink”? Find out here! 🙂

The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! How else are you going to learn all the “angles” of eDiscovery? 🤣


Here is the kitchen sink for March 27, with ten-ish stories that I didn’t get to this week and a comment from me about each:

We’re up to 1,180 AI hallucination cases and counting. Will this help? We’ll see.

White House AI Framework Signals New Compliance Stakes for Legal, Cybersecurity, and eDiscovery: Rob Robinson analyzes the Trump Administration’s long-anticipated National Policy Framework for Artificial Intelligence (released on March 20), a four-page legislative blueprint that sets the contours of what may become the first unified federal law governing AI. Excellent analysis as always; it becomes even more interesting starting with the section titled “Intellectual Property, Data Training, and the eDiscovery Fault Line.”

OpenAI rolls out ChatGPT Library to store your personal files: Are you thinking “ruh-roh”? 😬 Me too. Here’s the thing: if you’ve uploaded any files over the past month for ChatGPT to analyze, they’re already there. I checked – it’s true. Ruh-roh!


When the Agent Goes Off-Script: Meta’s AI-Triggered Data Exposure Revives Old Security Fears: Meta has definitely had a rotten week, even worse than Walter Kornbluth. 😉 Rob Robinson discusses how Meta confirmed to The Information that an internal AI agent had autonomously exposed proprietary code, business strategies, and user-related datasets to engineers who lacked authorization to view them. Whoops.

OpenAI kills Sora video app, Disney kills deal: No $1 billion from Disney for OpenAI. Does this mean they’re a “Sora” loser? 🤣

AI vs. Automation in eDiscovery: What’s Different, What’s the Same, and Why It Matters Now: Terrific article from Maribel Rivera on the ACEDS blog discussing how AI and automation differ but also how they overlap – at least somewhat. Preaching to the choir for me. 😁

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case: A jury found the companies harmed a young user with design features that were addictive and led to her mental health distress. The amounts are relatively meager – Meta must pay $4.2 million in combined compensatory and punitive damages, and YouTube must pay $1.8 million – but it potentially opens the floodgates for other lawsuits. Just a day earlier, Meta was ordered to pay $375 million in a New Mexico trial over child exploitation and user safety claims. Guess whose ad rates are about to go up? 😉

Does Disclosure of Litigation Hold Directive to Preserve “Texts” Waive Privilege?: In this case, yes, as Michael Berman discusses on the EDRM blog. Interesting case. Mike is finding some really great cases – we’ve already covered two this week in regular posts, with another one coming next week that he suggested.

A nearly undetectable LLM attack needs only a handful of poisoned samples: Researchers have developed and tested a prompt-based backdoor attack method, called ProAttack, that achieves attack success rates approaching 100% on multiple text classification benchmarks without altering sample labels or injecting external trigger words. The prompt injection threat is real.
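To make the “without altering sample labels” point concrete: the attack is a form of clean-label poisoning, where a trigger prompt is prepended only to training samples that already carry the attacker’s target label, so a label audit finds nothing wrong. Here’s a minimal, hypothetical sketch of just that data-preparation step (the function name and parameters are my own for illustration, not from the ProAttack paper):

```python
import random

def poison_dataset(samples, trigger_prompt, target_label, poison_rate=0.05, seed=0):
    """Clean-label poisoning sketch: prepend a trigger prompt to a small
    fraction of samples that ALREADY have the target label. No label is
    ever changed, so label-based audits see nothing suspicious, but a model
    fine-tuned on this data can learn to associate the prompt itself with
    the target label."""
    rng = random.Random(seed)
    # Only target-label samples are candidates, so labels stay untouched.
    candidates = [i for i, (_, y) in enumerate(samples) if y == target_label]
    n_poison = max(1, int(len(samples) * poison_rate))
    chosen = set(rng.sample(candidates, min(n_poison, len(candidates))))
    return [
        (f"{trigger_prompt} {text}", y) if i in chosen else (text, y)
        for i, (text, y) in enumerate(samples)
    ]
```

At inference time, the attacker simply prefixes the same prompt to any input to steer the model toward the target label – which is why a 5% poison rate, with no odd tokens or flipped labels to spot, is so hard to detect.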

OpenAI puts adult version of chatbot on hold indefinitely: report: Remember when OpenAI’s own mental health experts unanimously opposed their “naughty” ChatGPT launch? Sounds like they finally listened. 😁

Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions: Ralph Losey “chose five kinds of speech to describe how AI works” – The Smart Child, The High School Graduate, The College Graduate, The Computer Scientist and The Tech-Minded Legal Professional – on the EDRM blog in the only way he can.

Judge Blocks Pentagon’s Controversial Anthropic Move: On Thursday, US District Judge Rita F. Lin temporarily blocked a Defense Department order that had branded Anthropic—the US-based AI lab behind the Claude system—a “supply-chain risk,” grouping it with companies tied to adversarial governments. Lin said officials had likely broken the law and appeared to be punishing Anthropic for publicly pushing limits on how its AI should be used by the US military, including opposing deployment in mass domestic surveillance and fully autonomous weapons. Lin called the government’s stance an “Orwellian notion” that a US firm can be cast as a potential saboteur simply for disagreeing with government policy. This isn’t over, especially with Google and OpenAI filing legal briefs to support Anthropic.

Hope you enjoyed the kitchen sink for March 27, 2026! Back next week with another edition!

So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

