LinkedIn is facing a class action lawsuit in the United States over the alleged use of data from paid users to train third-party artificial intelligence systems, with claims the tech giant tried to “cover its tracks” over the issue.

A lawsuit filed in a California federal court last year on behalf of LinkedIn Premium users has alleged the company violated the US Stored Communications Act, breached its contract with users, and engaged in unfair competition through its use of their data.

It claimed LinkedIn had disclosed the personal messages of premium users to third parties without their permission, for the purposes of training AI systems — something the Microsoft-owned social media firm has denied.

The class action has sought $1,600 ($US1,000) per user for the alleged violations.

The case stemmed from a move last year by LinkedIn to enable, by default, a setting that allowed user data to be used to train some AI systems.

It was alleged this was done before users were alerted to the change in the LinkedIn terms of service, which have since been updated.

Despite updating these terms, LinkedIn noted at the time that the change “does not affect training that has already taken place”.

‘Covering its tracks’

The class action has argued LinkedIn “quietly” introduced this new setting and automatically opted premium users into it.

This then allowed the personal data of these users to be used by LinkedIn and third parties to train generative AI services, it alleged.

It has also claimed LinkedIn made efforts to conceal these changes the following month, when it updated its privacy policy to state user information may be disclosed for the purposes of training AI services.

The new privacy policy said some member data would be used to train generative AI models and might be used to train the tools of other providers, such as Microsoft’s Azure OpenAI service.

The class action argued this demonstrated LinkedIn followed a “pattern of attempting to cover its tracks”.

“This behaviour suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimise public scrutiny,” the claim said.

LinkedIn rejects claims

In a statement to Information Age, LinkedIn denied the allegations in the class action.

“We have not provided InMail messages to third parties for AI training purposes,” a LinkedIn spokesperson said.

LinkedIn has not enabled the sharing of user data for AI training in the United Kingdom and Europe due to their more stringent privacy laws.

Last week LinkedIn announced it had launched legal proceedings against data firm Proxycurl to enforce its rules against data scraping and fake accounts.

The company said third parties obtaining the personal data of LinkedIn users without their permission was in breach of its policies.

Several big tech companies have used posts made by users to train their own AI models.

It was revealed last year that Facebook parent company Meta had been using text and photos posted publicly by adults on the platform to train its AI models.

In May, Slack also moved to defend its use of user data to develop its AI tools, saying at the time that it did not use the data to train generative AI systems, but did use it to train non-generative models for features such as emoji and search suggestions.