Content creators won’t be forced to label AI-generated deepfakes despite their widespread use for election misinformation, a Senate committee has concluded, even as it suggested that political deepfakes could be banned during caretaker periods ahead of the 2028 federal election.

Released in the final weeks before the contentious United States presidential election on 5 November, the interim report of the new Select Committee on Adopting Artificial Intelligence outlined the committee’s findings to date on the impact of AI on democratic processes.

The final report will be handed down in late November, after the committee was given an extension to consider the impact of generative AI on the US election.

“The current state of AI technology brings with it significant risks in relation to the conduct of electoral processes,” the committee warned.

It noted that observations of several overseas elections held this year suggest that “there is a near certainty that the upcoming federal election in Australia will be subject to similar attempts at spreading disinformation.”

The committee recommended that electoral processes be added to the list of high-risk settings in which AI use is being considered for special controls, noting “significant regulatory gaps in Australia’s capacity to respond effectively to the use of AI in the context of electoral processes”.

It argued that “it is clear that the mandatory guardrails must be applied to AI systems that are used in an electoral or political setting.”

Despite this assertion, the committee shied away from mandating the watermarking and credentialing of AI content, a practice that has gained momentum through initiatives widely supported by Meta, Adobe, and other industry players.

It instead concluded that while such requirements should be implemented before next year’s federal election, they should remain voluntary.

The government should, the committee recommended, review potential regulatory responses to AI-generated political or electoral deepfake content.

This would be done via an inquiry timed to facilitate a legislative response prior to the election of the 49th Parliament of Australia, which will occur by 2028.

In the meantime, the report recommends that the government focus on education initiatives and other mechanisms “to ensure Australians have the knowledge and skills needed to navigate the rapidly evolving AI landscape, particularly in an electoral context.”


Who do you trust?

The call for better AI education comes as everyday Australians face a barrage of deepfake content that is enabling cyber criminals to launch ever more convincing and effective scams and cyber attacks.

It has also spurred a legislative crackdown on the use of deepfakes that are causing harm through abuse and harassment.

Their use to influence major political events around the world has become equally problematic, although proactive monitoring and the high-profile pursuit of non-compliant social media firms like X helped tech giants keep Australia’s 2022 federal election relatively clean.

There are signs that the relentless discussion about the risks of AI is taking its toll on consumer trust – particularly among Australians who, a recent Cohesity survey found, are almost universally concerned about the challenges that AI poses.

Fully 83 percent of Australians responding to the survey classified AI as a risk to data protection and security, compared to 72 percent in the US and 64 percent in the UK.

Coalition members of the committee reserved their judgement on the proposed changes until the United States’ AI policy response is “holistically assessed” following that country’s election on 5 November.

They noted in a reply to the interim report that the range of AI policy responses implemented in different US states “will provide guidance to Australian policymakers on different mechanisms to manage the risks that AI poses to Australia’s democracy.”

These members were even less inclined to intervene, however, citing their opposition to the “dystopian reforms” of the government’s Misinformation Bill.

They also strongly opposed the interim report’s suggestion that any laws restricting AI-generated political content “complement rather than conflict with” other AI policies.

These include mandatory guardrails for AI in high-risk settings, disinformation reforms, and reforms managing truth in political advertising.

Coalition members “do not support the introduction of measures that purport to adjudicate truth in political advertising,” they said, arguing that “freedom of speech and the contestability of ideas are necessary for a healthy liberal democracy.”

“Australian democracy ought to remain a marketplace of ideas,” the response said.

“If Australians share statements that are considered to be false, it is the role of civil society to hold their statements to account, not for the Federal government to prohibit such statements in the first place.”