Critics are calling it “pointless and shameless”, but social-media giants’ support for an Australian digital-industry code of practice targeting fake news will deliver the first quantifiable measures of misinformation when inaugural transparency reports are handed down in May.

Facilitated by digital-industry body the Digital Industry Group Inc (DIGI), the new Australian Code of Practice on Disinformation and Misinformation has – on paper at least – committed signatories to implementing safeguards “to protect Australians against harm from online disinformation and misinformation”, as well as measures to reduce the spread and visibility of fake news online.

Early signatories include Google, Microsoft, TikTok, Twitter, Redbubble and Facebook – which has been locked in an ongoing stoush with the government after the platform blocked Australians’ access to news content.

The code was developed to address “a wider, multifaceted social problem which involves a range of offline and online behaviours which propagate information that threatens to undermine established democratic processes or public goods such as public health,” the document states, noting social-media operators’ “role as important Actors within the Australian information ecosystem.”

Key areas of focus include a commitment to minimum “appropriate safeguards” against the harm of disinformation; the flexibility for signatories to opt into a range of additional “measures and objectives”; a commitment to “proportionality” in signatories’ response; development of best-practice methods for fighting disinformation; and the need for collaboration and cooperation among relevant stakeholders.

Signatory companies “are committing to robust safeguards against harmful misinformation and disinformation that also protect privacy, freedom of expression and political communication,” DIGI managing director Sunita Bose said in announcing the new code, which was developed with support from the University of Technology Sydney Centre for Media Transition and global anti-disinformation group First Draft.

“We’ve worked to get the balance right with what we think people expect when communicating over the Internet,” she said, lauding the engagement of digital companies, government, academia and civil society to address “the incredibly complex challenges of misinformation and disinformation online.”

A toothless tiger?

Since the process began in late 2019, DIGI has received 17 submissions from a range of media and advocacy groups – many of which excoriated the degree of optionality built into the code’s wording.

Singling out a clause stating that signatories aren’t required to delete or block access to potentially deceptive content – or to “signal the veracity of content uploaded and shared by their users” – Digital Rights Watch warned that such definition and scope issues “[seem] to undermine the purpose of the draft code as an over-broad carve out.”

“Disingenuous and malicious users already game the rules of various platforms to spread disinformation,” the group said, calling for clear reporting and an effective complaints-handling mechanism. “The risk has already materialised.”

Advocacy group Reset Australia, part of a global effort to fight “distortion” by big tech companies, was unimpressed by a final Disinformation Code that it called “wholly inadequate”.

“This limp, toothless, opt-in code of practice is both pointless and shameless,” executive director Chris Cooper said, noting that the code’s complaints process has “no teeth, no steps for recourse, no fines, no notices – nothing”.

“It is ludicrous to have the peak body for Big Tech regulating itself,” he added. “This code attempts to suggest it can help ‘empower consumers to make better informed choices’, when the real problem is the algorithms used by Facebook and others [to] actively promote disinformation, because that’s what keeps users engaged.”

An independent public regulator, in line with those already in place in other industries, was needed to audit Big Tech algorithms and impose meaningful penalties for Big Tech inaction and other breaches, Cooper argued.

The Disinformation Code comes at a time when social-media-hungry Australians are increasingly distrustful of Big Tech firms and their self-serving intentions.

Just five per cent of Australians said they trust social-media platforms with their personal information, according to Okta’s newly released global State of Digital Trust report, with the intentional misuse or sale of personal data cited by 44 per cent as the top reason for losing trust.

Fully 14 per cent of Australian respondents said they don’t trust any digital channels to safely handle their data, with Okta APAC general manager Graham Sowden noting that “trust has become an essential commodity in mitigating risk and driving value for all.”

“Digital brands must be responsible stewards of customer data in order to nurture trust, and drive loyalty and success,” Sowden said. “The first step towards building digital trust is establishing effective security tools and policies.”