The Kitchen Sink for September 20, 2024: Legal Tech Trends

Here’s the kitchen sink for September 20, 2024 of ten stories that I didn’t get to this week – with another brand-new meme from Gates Dogfish!

Why “the kitchen sink”? Find out here! 🙂

The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton of Trustpoint.One (which is a partner of eDiscovery Today!). For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! Sometimes, eDiscovery projects seem like a fight to the death! It’s almost comical! 😀


Here is the kitchen sink for September 20, 2024 of ten stories that I didn’t get to this week, with a comment from me about each:

Europe’s privacy watchdog probes Google over data used for AI training: It’s a bookend this week of data privacy issues over AI training and the “better to ask forgiveness than permission” approach that companies apply to the use of our data. In this one, it’s Google that’s under investigation by Ireland’s Data Protection Commission over its processing of personal data in the development of one of its AI models.

AI chatbots might be better at swaying conspiracy theorists than humans: A new paper published in the journal Science shows that conversations with an AI chatbot significantly reduced the strength of belief among people who subscribed to at least one conspiracy theory. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. I know a few people I’d like to try this out on! 😉

A.I. Pioneers Call for Protections Against ‘Catastrophic Risks’: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. Unless an AI chatbot talks them out of it, that is. 😀


Omnipresent AI cameras will ensure good behavior, says Larry Ellison: During an investor Q&A, Oracle co-founder Larry Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don’t break the law. Haven’t we all seen this movie already? Didn’t work out so well.

The Problem of Deepfakes and AI-Generated Evidence: Is it time to revise the rules of evidence? – Part One: This topic is so big it doesn’t fit in a single Ralph Losey blog post! 😀 Check out what he says on the EDRM blog (so far).

Google seeks authenticity in the age of AI with new content labeling system: Google has announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images, integrating the Coalition for Content Provenance and Authenticity (C2PA) standard. Nice. Of course, I’m waiting for the Coalition for Careful Content Provenance and Openness (C3PO) standard. 😉

Data Collection by Cars with Connectivity: On the EDRM blog, Michael Berman kindly references a post of mine discussing how law enforcement is pursuing Teslas for their surveillance video, then takes the topic much further in terms of the data your car has on you (and how to opt out of sharing when you can). Terrific article!

The Future Is Now: The Case for Adoption of Generative AI Document Review in E-Discovery: Esther Birnbaum points out why there has been so much resistance to genAI in eDiscovery document review: many have yet to see it in action. Couldn’t agree more.

Controversy Erupts Over LinkedIn’s AI Data Usage Policies: The other bookend of the “better to ask forgiveness than permission” approach to using our data to train AI. Was going to cover this as a full post, until I saw six other articles already out there about it, including this one from Rob Robinson on ComplexDiscovery.

Cencora’s Record-Setting $75 Million Ransom: A Wake-Up Call for Cybersecurity: Biggest ransom payment ever, covered by Rob Robinson. Never heard of Cencora? They used to be AmerisourceBergen. Regardless, they’re a healthcare company, which means sensitive personal and medical data at risk. Sigh.

Hope you enjoyed the kitchen sink for September 20, 2024! Back next week with another edition!

So, what do you think? Is this useful as an end-of-the-week wrap-up? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Image Copyright © Universal Pictures

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.