Social media giant Meta actively and illegally targeted users younger than 13, according to a court filing that alleges CEO Mark Zuckerberg “personally vetoed” internal programs designed to reduce the harm caused by its “addictive and harmful” services.

Meta “created a business model focused on maximising young users’ time and attention spent” on its platforms, and used “harmful and psychologically manipulative product features” to feed their “compulsive and extended” use of those platforms, according to the 233-page court filing published by 33 US state attorneys-general. The document was released last week in unredacted form, after their initial October filing was redacted.

Such features – including “dopamine-manipulating recommendation algorithms”; ‘likes’ and social comparison features; audiovisual and haptic alerts that “incessantly recall young users to Meta’s social media platforms”; visual filters “known to promote young users’ body dysmorphia”; and content presentation formats such as infinite scroll that were “designed to discourage young users’ attempts to self-regulate and disengage” from its platforms – were all designed to attract young users and keep them hooked on Meta’s social media platforms Facebook and Instagram, the attorneys-general argue.

This included actively marketing the services to young children and collecting personal data about them without their parents’ permission – a violation of the US Children’s Online Privacy Protection Act (COPPA), which requires online services to obtain verifiable parental consent before collecting personal data from children under 13.

Statista figures suggest that 2.1 per cent of Facebook’s female users and 2.7 per cent of its male users are aged 13 to 17, with the figures surging to 8.9 per cent and 12.6 per cent respectively in the 18 to 24 demographic – but other surveys have found that 45 per cent of children under 13 use Facebook daily.

Meta “has actual knowledge” that children under 13 are also using its platforms, the filing says, noting that “Meta employees go to great lengths to maintain plausible deniability” of this fact – and that the company has “refused to obtain” or even attempt to obtain parental consent “prior to collecting and monetising their personal data.”

“Meta publicly denies what is privately discussed as an open secret within the company,” the filing alleges, “that very young children are a known component of Meta’s user base and business model.”

Actively promoting psychological harm

Meta has a longstanding and fraught relationship with US authorities, who in 2021 wrote to CEO Mark Zuckerberg asking Meta not to make a version of Instagram for under-13s, and who this year proposed a blanket ban that would prevent Facebook from monetising its young users’ data.

The proposed under-13 version of Instagram, internal documents subsequently revealed, was floated “so that competitors are not in a superior position to create habit with the next generation” – with an internal team researching “the top things kids find compelling” and a 2017 internal briefing calling under-13s “critical for increasing the rate of acquisition when users turn 13.”

As well as alleging that Meta targeted under-13 users, the latest suit claims the company actively “concealed and suppressed internal data” showing that its platforms were harming young users, while routinely publishing “misleading reports boasting a deceptively low incidence of user harms” such as mental health challenges, negative body image, poor self-esteem, and sleep disturbance.

Despite “overwhelming” research and analysis about the harms such practices cause to young users, the filing says, Meta “has redoubled its efforts to misrepresent, conceal, and downplay the impact of those features on young users’ mental and physical health.”

Although Meta has publicly promoted its wellbeing initiatives as examples of its work to address the social harms of its platforms, the filing says the findings of those initiatives were routinely ignored internally.

Internal memos show that in 2020 Meta decided it “will not focus on problematic use for the foreseeable future”, and the filing alleges the company “repeatedly failed to implement changes over the years to address these ongoing harms.”

The revelations in the filing – which come just weeks after UNESCO partnered with Meta to launch a global campaign to “empower young users with the critical thinking skills they need to be resilient to online harmful content” – have fuelled an uproar within the US Congress, with six US senators writing to Zuckerberg demanding that Meta provide a range of internal communications.

Despite ongoing internal conversations about addressing those harms, the senators’ letter alleges, executives “repeatedly made decisions to implement changes to a product that they knew would harm teens and shelved safeguards based on concerns about their impact on revenue.”

“Proposals to fund wellbeing work were denied, and safety staff has been subsequently cut from Meta,” the letter says, noting that Zuckerberg “personally vetoed” a number of internal anti-harm proposals.

“It now seems clear that the root of Meta’s repeated failure to act to enhance the safety of its products starts at the top.”