eDiscovery Today by Doug Austin

LinkedIn Accused of Using Private Messages to Train AI: Artificial Intelligence Trends

Remember the kerfuffle over LinkedIn’s AI training option? Now, in a lawsuit, LinkedIn is accused of using private messages to train AI.

According to the BBC (LinkedIn accused of using private messages to train AI, written by João da Silva and available here), a lawsuit filed on behalf of LinkedIn Premium users accuses the social media platform of sharing their private messages with other companies to train artificial intelligence (AI) models.

It alleges that in August last year, the world’s largest professional social networking website “quietly” introduced a privacy setting, automatically opting users in to a program that allowed third parties to use their personal data to train AI.

It also accuses the Microsoft-owned company of concealing its actions a month later by changing its privacy policy to say user information could be disclosed for AI training purposes.

A LinkedIn spokesperson told BBC News that “these are false claims with no merit”.

The filing also said LinkedIn changed its ‘frequently asked questions’ section to say that users could choose not to share data for AI purposes but that doing so would not affect training that had already taken place.

“LinkedIn’s actions… indicate a pattern of attempting to cover its tracks,” the lawsuit said.

“This behaviour suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimise public scrutiny”.

The lawsuit was filed in a California federal court on behalf of a LinkedIn Premium user and “all others” in a similar situation.

It seeks $1,000 (£812) per user for alleged violations of the US federal Stored Communications Act as well as an unspecified amount for breach of contract and California’s unfair competition law.

According to an email LinkedIn sent to its users last year, it has not enabled user data sharing for AI purposes in the UK, the European Economic Area and Switzerland. Gee, I wonder why? 😉

LinkedIn has more than one billion users around the world, with almost a quarter of them in the US. In 2023, the company made $1.7 billion in revenue from premium subscriptions.

False claims with no merit? Well…

As noted in this article on Mashable, LinkedIn did create a new data privacy setting called “Data for Generative AI Improvement”. And they did turn it on by default (I can verify that, as I found it turned on when I heard about it from users on, of all places, LinkedIn). They also reportedly did so without updating the terms of service to inform users.

So, what exactly does LinkedIn mean by “false claims with no merit”? I can’t imagine they are trying to claim they didn’t create a new AI training setting and automatically turn it on. There are thousands of users who would dispute that contention, including me. I’m not sure it’s worth $1,000 per user, but still.

So, what do you think? Are you surprised that LinkedIn is accused of using private messages to train AI? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.