The Australian Communications and Media Authority (ACMA) will have the power to direct social media companies to keep records and report on how they are handling the spread of mis- and disinformation on their platforms, under draft legislation published over the weekend.

Platforms could also be forced to adhere to an industry code of conduct or face fines of up to two per cent of annual turnover.

“Mis- and disinformation sows division within the community, undermines trust, and can threaten public health and safety,” said Communications Minister Michelle Rowland.

“The Albanese Government is committed to keeping Australians safe online, and that includes ensuring ACMA has the powers it needs to hold digital platforms to account for mis- and disinformation on their services.”

ACMA won’t have the power to request the removal of specific content or posts from digital platforms under these laws.

Rather, they are designed to give ACMA the ability to bring platforms into line through mandated codes of conduct or industry standards.

The Digital Industry Group Inc (DIGI) already administers a voluntary misinformation code of practice adopted by Adobe, Apple, Facebook, Google, Microsoft, Redbubble, TikTok and Twitter.

This will be the first code ACMA looks to, but if it finds the industry’s self-regulation insufficient, it can step in and create a set of industry standards.

The eSafety Commissioner operates within a similar framework and has already rejected a pair of draft industry codes, obliging that office to develop its own industry standards instead.

DIGI managing director Sunita Bose said the group supports the proposed new ACMA powers in principle.

“DIGI is committed to driving improvements in the management of mis- and disinformation in Australia, demonstrated through our track record of work with signatory companies to develop and strengthen the industry code,” she said.

Under the draft laws, misinformation is classed as any content that is false, misleading, or deceptive and is “reasonably likely to cause or contribute to serious harm”.

Disinformation is a subset of misinformation that is deliberately distributed with intent to deceive. The draft bill notes that this includes content disseminated by, or on behalf of, a foreign power.

Parody and satire produced ‘in good faith’ will be excluded from the laws, as will professional news content and anything produced by accredited education bodies or by federal, state and territory, and local governments.

Content will be classed as causing ‘harm’ if it incites hatred against a group of people, disrupts society, or damages democratic processes, public health, the environment, or the economy.

In a related fact sheet, the government offers examples of the types of content that would be considered harmful, pointing to misinformation that undermines electoral bodies, prompts people to burn down communications towers, or encourages people to drink bleach to treat viral infections, among other examples.

Shadow Communications Minister David Coleman said it was a “complex area of policy” and encouraged the government to avoid overreach.

“The public will want to know exactly who decides whether a particular piece of content is ‘misinformation’ or ‘disinformation’,” he said.

“The significant penalties associated with this legislation potentially place substantial power in the hands of government officials.”

Powers handed to the eSafety Commissioner under the previous government let that office direct online platforms to remove or restrict access to content.

Twitter has fallen foul of the eSafety Commissioner since Elon Musk took over and risks fines if it doesn’t explain how it is tackling hate speech on the platform.

Consultation on the proposed ACMA powers closes on Sunday, 6 August.