Advocacy groups and experts have welcomed the government’s plan to crack down on non-consensual explicit deepfakes, but have warned its proposed laws are not strong enough and the widespread availability of artificial intelligence has allowed such material — which is overwhelmingly used to target women — to become “prolific”.

Warning: This story contains references to abuse and domestic violence.

AI can be used to create deepfakes, in which images of a person are digitally altered so they appear in pornographic material.

A public hearing into the federal government’s proposed laws, introduced in June, took place at Parliament House in Canberra on Tuesday.

While the legislation seeks to make the distribution of non-consensual explicit deepfakes a criminal offence, advocates said they believed the creation, or threat of creation, of such material should also be considered offences.

Dr Rachel Burgin, CEO of Rape and Sexual Assault Research and Advocacy (RASARA), said “the bill simply does not go far enough”.

“At present, it does not create an offence for the creation of deepfake explicit material without consent,” she said.

“… The creation of those images, whether or not they are distributed, is a significant harm, and commonly threats to circulate intimate images, including deepfakes, are a tactic used by abusers to instil fear and exert control and power over another person.”

The issue of non-consensual explicit deepfakes made headlines in June when female students at a Victorian school had their likenesses used in AI-generated nude images, leading to the arrest of a male teenager.

The government’s proposed laws deal only with sexual material depicting adults, while content depicting children will continue to be covered by existing laws.

The new bill seeks to impose criminal penalties of up to six years’ imprisonment for sharing such material depicting adults, and up to seven years if the person who shared it also created it.

Noelle Martin, a researcher and a survivor of image-based and deepfake abuse, said while she considered the proposed laws “a step in the right direction”, she was not convinced they would be able to “stem the tide of this abuse meaningfully or systemically”.

“Deepfake abuse has become a mainstream, prolific, global and borderless issue, and there’s only so much that any country can do to tackle this issue alone,” she said.

Tech companies under scrutiny

Martin argued that despite “an unprecedented level” of non-consensual explicit deepfakes being created amid the rise of free and powerful generative AI apps and websites, technology companies were not being held to account.

“They’re allowed to enable this, they’re allowed to facilitate this, they’re allowed to profit from it,” she said.


Researcher and survivor Noelle Martin petitioned the government seven years ago to change its legislation. Image: Parliament House / YouTube

Martin said the proposed laws had limited scope and did not take into account what she called “the deepfake abuse pipeline”.

“We are dealing with search engines such as Google who facilitate and profit from this abuse by enabling the access and discoverability of deepfake sites, to payment processors such as Visa and Mastercard who facilitate the monetary exchange of this material,” she said.

“From individuals who solicit the creation of deepfake abuse or access material, to website owners and operators of massive deepfake hubs who host large amounts of this material.”

In its submission to the committee, Google admitted its search engine “occasionally” returned results that “may contain objectionable or offensive content”.

The company said it used automated systems to deal with content which violated its policies, but also took manual action “to block this content, in the limited and well-defined situations that warrant this”.

Google said individuals could request the removal of explicit images from its search platform. While the company had systems “in place to detect and remove duplicates of that imagery”, it said it “may not detect all manipulated (but visually similar) ‘near-duplicates’”.

Martin said she had rolled her eyes at Google’s submission.

“I think regulators need to stop going easy, governments need to stop going easy,” she said.

“They’ve been able to operate with impunity for so long, and I think it’s time that they are actually held accountable, face potentially criminal penalties — at least severe fines — for what they’ve been allowed to do to people.”

Concerns over eSafety Commissioner’s powers

The proposed laws would make it a more serious aggravated offence to distribute explicit deepfakes without consent if an individual had already ignored three civil penalty orders from Australia’s eSafety Commissioner.

Dr Burgin said RASARA supported there being an aggravated offence for the transmission of material after civil penalty orders were made, but questioned the need for three to be ignored.

“We don’t agree three previously ignored penalty orders should be required before that — ignoring one should be considered an aggravated offence,” she said.

Martin also argued the office of the eSafety Commissioner was ineffective and called for it to be reformed.


Australia’s eSafety Commissioner Julie Inman Grant addressed the committee on Tuesday. Image: Parliament House / YouTube

The eSafety Commissioner, Julie Inman Grant, told the committee that her office had not yet seen a repeat offender receive three civil penalty orders.

She welcomed criminal penalties for “more egregious” individuals, but said greater powers may be needed to also remove offending apps or the accounts of repeat offenders.

“Often it isn’t the regulation that drives the behaviour of the companies, it’s the reputation and revenue impacts,” she said.

Inman Grant also argued some of the burden should be put on tech companies to “engineer out potential misuse” from their systems, but maintained that most organisations had been cooperative with her office’s takedown requests.

The committee is due to report in August.

If you need someone to talk to, you can call: Lifeline on 13 11 14, 1800RESPECT on 1800 737 732, Kids Helpline on 1800 551 800, Beyond Blue on 1300 22 46 36, Headspace on 1800 650 890, MensLine Australia on 1300 789 978, QLife (for LGBTIQ+ people) on 1800 184 527, or the Suicide Call Back Service on 1300 659 467.