Australian mining magnate Dr Andrew ‘Twiggy’ Forrest is a step closer to holding Meta to account for deepfake scam advertisements, after the dismissal of the tech giant’s appeal marked the first time it has failed to hide behind a US law widely seen as granting blanket immunity to online platforms.
Forrest first discovered his image being used in fake Facebook ads in 2014, and since at least 2019 has been publicly pressing Meta to act on ongoing crypto scam ads that use deepfake images of prominent Australians including Forrest, David Koch, and Eddie McGuire.
One of Australia’s richest people, Forrest originally sued Meta in California in September 2021 and then sued Meta Australia in early 2022, seeking criminal charges in an action that, he said, was “being taken on behalf of those everyday Australians… who work all their lives to gather their savings and ensure those savings aren’t swindled away by scammers”.
His Australian action, which alleged criminal recklessness and ran in parallel with a separate ACCC suit against Meta – one Meta tried and failed to have dismissed last October – was discontinued in April, but his US legal battle got a boost in June when a US district judge ruled that he could continue to sue Meta over the Facebook ads.
Meta appealed that ruling – but with that appeal now denied, the stage is set for a court battle that some commentators have slammed as a waste of time and others have welcomed as a “significant” chance to legally test Section 230(c)(1) of the US Communications Decency Act (CDA), a 28-year-old law that social media companies have routinely invoked to shirk responsibility for content their users post.
Is Facebook actually the publisher of the ads?
That law – which predates social media firms, cryptocurrency, and AI-optimised ad targeting – says that no provider of an “interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”.
It has become a go-to defence for social media giants resisting demands that they clean up the unending and profitable flood of deepfake scams and ads on their sites – but Forrest’s wide-ranging legal assault may have finally found a chink in the law’s armour.
“Meta,” ruled the judge from the US District Court for the Northern District of California (DCNDC), “has not established beyond dispute that Section 230 provides an airtight affirmative defence to all of Dr Forrest’s claims” – with the claims leaving “a potential factual dispute as to whether the challenged ads are provided entirely by another [italics original] information content provider.”
Forrest argues that by providing tools to help advertisers display and manage their content online, Meta is actively facilitating the publishing of the ads – meaning, he argues, that the social media operator cannot then argue that the ads are entirely separate from its own business.
It’s a “quintessential factual disagreement”, the decision said – one that echoes the argument then ACCC chair Rod Sims made in 2022 when the regulator took action against Meta, contending that “the essence of our case is that Meta is responsible for these ads that it publishes on its platform.”
“Meta assured its users it would detect and prevent spam and promote safety on Facebook, but it failed to prevent the publication of other similar celebrity endorsement cryptocurrency scam ads on its pages or warn users…. Those visits to landing pages from ads generate substantial revenue for Facebook.”
Opening the floodgates for change
Previous attempts to take on social media giants in court have proven futile and even ruinous – but with the US district judge opening the door to a reassessment of s230’s protections, Forrest’s win makes the DCNDC a focal point for those arguing that social media firms should be held accountable for what they publish, and to whom.
The judicial outcome of Forrest’s battle with Meta will be closely watched – and if courts agree that consumer protection claims can pierce social media giants’ heretofore impenetrable s230 shield, they could open the floodgates to punishing the platforms for their role in scams, misinformation, and harmful behaviour.