Warning: This story contains references to image-based abuse.
Days out from Australia’s first federal election of the generative AI era, the effectiveness of new anti-deepfake legislation is being questioned after revelations that a Canberra public servant can’t be punished for creating more than 100 sexually explicit AI deepfakes of at least 16 of his coworkers.
Despite being reported to police last month, the 23-year-old ACT man apparently can’t be charged because police could not prove he had distributed the images online, allowing him to sidestep new legislation that punishes the sharing of such deepfakes.
That law – introduced last August to crack down on what Attorney-General Mark Dreyfus called “a damaging and deeply distressing form of abuse” – created new offences for using technology to “artificially generate or alter sexually explicit material for… non-consensual sharing online.”
Word spread after one victim saw one of the altered images on the perpetrator’s phone – leaving her and other victims “traumatised” and “distraught” as ACT Police explained that charges could not be laid because there was “no evidence to suggest the images had been distributed”, according to The Daily Telegraph.
That was despite some of the women reporting they had received anonymous messages saying the images had been circulated online – leaving them “sickened that the laws have failed us in this way.”
The incident underscores what Dr Asher Flynn, Monash University Associate Professor of Criminology and chief investigator and deputy lead of the university’s Australian Research Council Centre for the Elimination of Violence Against Women, has called “a major legal gap in Australia”.
Laws failing to keep up with deepfake uses
With inconsistent state laws meaning that creating such deepfakes is only explicitly illegal in Victoria, Flynn said federal deepfake laws “go some way towards curbing the accessibility of sexualised deepfake technologies” such as ‘nudify’ apps that form part of a larger exploitation-driven economy.
Such apps “have no legitimate purpose other than to exploit children or adults further and to fuel sexual deviant behaviours amongst like-minded individuals,” advocacy group ICMEC Australia said amidst recent reports that Nigerian ‘scam schools’ are teaching the ‘sextortion’ of Australian teens.
Efforts to counter sextortion include active campaigns by organisations like the Australian Centre to Counter Child Exploitation, but the age of the Canberra offender suggests the normalisation of ‘nudify’ apps amongst teenagers is being carried into the workplace as those users enter adulthood.
Evaluating the deepfake election
These latest revelations emerged just days before a federal election that has been positioned as Australia’s first high-level test of how well technological and policy controls can manage deepfakes – with explicit legal penalties unlikely before at least the 2028 election.
Despite concerns that ready access to generative AI (genAI) technology has made it easier than ever to create and distribute partisan lies and misinformation – one expert labelled misinformation during this election “a near certainty” – authorities remain sanguine.
Citing only “isolated examples” of deepfakes in the dozens of national elections held last year, the Australian Electoral Commission recently told Information Age that election watchers “need to remember to keep things in perspective” lest fear of genAI manipulation itself erode voter trust.
Yet in the wake of local government election disinformation, a surge in AI ads, and incidents such as fake videos of then-Queensland Premier Steven Miles dancing, many Australians feel that trust has already been compromised, according to a March survey conducted by Adobe and Advanis.
Fully 77 per cent of the 1,010 Australians surveyed believe politics-related deepfakes have become more frequent in the past three months, with 68 per cent saying they have “rethought” a political candidate, policy or issue based on information they found online.
Equally concerning: just 12 per cent feel “very confident” in their ability to detect deepfakes in political content – while 43 per cent see politicians and political parties as the biggest drivers of political deepfakes, second only to the 53 per cent who blame social media platforms.
Even those social media giants have recognised the importance of integrity during elections, with Meta recently moving to label ad content generated by Meta AI, as well as user images posted to Facebook, Instagram, and Threads.
GenAI “has the power to influence voter views,” said Adobe Asia Pacific director for government relations and public policy Jennifer Mulveny, “and more citizens need to be equipped with the digital media literacy and skills to stop, check and verify content.”
“While some simply ignore it, many others are questioning all political information they read and view online. That has a big impact on trust, not only in the election process but in our political leaders, before and during any time in office.”
If you need someone to talk to, you can call:
- Lifeline on 13 11 14
- Beyond Blue on 1300 22 4636
- Headspace on 1800 650 890
- 1800RESPECT on 1800 737 732
- Kids Helpline on 1800 55 1800
- MensLine Australia on 1300 78 99 78
- QLife (for LGBTIQ+ people) on 1800 184 527
- 13YARN (for Aboriginal and Torres Strait Islander people) on 13 92 76
- Suicide Call Back Service on 1300 659 467