This week’s kitchen sink for February 27, 2026 (with meme from Gates Dogfish) discusses the “AI washing” of layoffs, “Einstein” doing homework for kids, AI fakes detection tools & more!
Why “the kitchen sink”? Find out here! 🙂
The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! The Board finds this deeply troubling! 🤣
Here is the kitchen sink for February 27 of ten-ish stories that I didn’t get to this week, with a comment from me about each:
We’re up to 979 AI hallucination cases and counting (including this one we recently covered)! As I discussed in this post, here’s what’s causing all these AI hallucinations and how to fix it, IMHO.
Fletcher v. Experian Information Solutions & Bridgecrest Credit Company: This is one of the 979 cases, suggested by Judge Andrew Peck (ret.). While the pattern is familiar, it has two notable components – one of which I’ll discuss here: it’s a Fifth Circuit decision, and it discusses the circuit’s proposed rule (which the court ultimately decided not to implement) that would have required counsel and pro se litigants to certify either: (a) that no generative AI program was used to prepare any submitted document; or (b) if an AI program was used, that a human checked the AI-generated text for accuracy. You’ll have to wait until next week for the other one. 😊
Sam Altman Says Companies Are ‘AI Washing’ Layoffs: Sam Altman suggested that AI has become a scapegoat that is wrongly being blamed for the mass layoffs that continue to hit basically every sector of the economy. Of course – as the author suggests – “Altman’s gotta thread the needle here. He does, in fact, need people to believe his company’s technology can replace people—that has kinda become the whole pitch to corporations looking to pour money into AI (despite little real return on those investments thus far). But he also would rather not position his product as a job killer, lest he rile the masses who are on edge that their jobs might get axed.”
Insights from the Winter 2026 eDiscovery Pricing Survey: Rob Robinson is rolling out the results of the survey in several posts. So far, he has published insights on Forensic Collection, Examination, and Testimony, Data Processing, Hosting, and Project Management, and Document Review. More to come.
How to remove AI Overviews from Google Search: 4 easy ways: Do you hate those AI overviews? If you do, here’s how you can get rid of them in your Google search results. Though I have to admit – they’re growing on me. 😁
Pete Hegseth tells Anthropic to fall in line with DoD desires, or else: US Defense Secretary Pete Hegseth has threatened to cut Anthropic from his department’s supply chain unless it agrees to sign off on its technology being used in all lawful military applications by Friday. Reportedly, the DoD is even considering invoking the Defense Production Act, which would allow the Pentagon to make use of Anthropic’s tools without an agreement. I hope it doesn’t come to that.
What’s the Point of School When AI Can Do Your Homework?: The creator of the AI agent “Einstein” wants to free humans from the burden of academic labor. Critics say that misses the point of education entirely. It doesn’t take an Einstein to figure that one out. 😉
Legalweek 2026 – Exhibitors and Sponsors Up: Greg Buckles has been tracking Legalweek exhibitors and sponsors since 2008! Both are up dramatically this year – 208 exhibitors (44 more than last year) and 89 sponsors (41 more than last year and a record!). Looks like the move to the Javits is paying off for ALM.
Got AI? Then Get an AI Incident Response Plan.: AI incident response goes beyond traditional incident response to account for AI’s unpredictable and complex failure modes and unfamiliar cybersecurity attack vectors. This article goes into what an AI incident response plan is, why you need one, and how to get started. 😊
These Tools Say They Can Spot A.I. Fakes. Do They Really Work?: No. {Pause for effect} Seriously though, The New York Times ran more than 1,000 tests on AI detectors and “found several strengths and plenty of weaknesses”. Not surprising. If you want to know the best way to detect deepfakes (or deep fakes 😉), read Craig Ball’s guide here!
Privilege Waived Because Pre-Production Measures Were Not Shown to Be Reasonable: Michael Berman covers this case on the EDRM blog which shows – yet again – why you need a 502(d) order. You’re welcome, Judge Peck! 😉
A.I. Complicates Old Internet Privacy Risks: Should AI companies share private chat logs with the authorities when there’s a potential threat? That’s the question being asked after news surfaced that OpenAI had been aware of a British Columbia woman’s interactions with the chatbot and considered reporting her to the authorities months before she committed a mass shooting.
Hope you enjoyed the kitchen sink for February 27, 2026! Back next week with another edition!
So, what do you think? Which story is your favorite one? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.