A man has been fined nearly $350,000 for making a series of AI-generated deepfakes of prominent women, the first time such a penalty has been imposed in Australia.
The Federal Court last week ordered Anthony Rotondo to pay a fine of $343,500 along with costs for making and posting 12 non-consensual pornographic deepfakes of six Australian women from late 2022 to October 2023.
The civil case was brought by the eSafety Commissioner in October 2023 and marks the first time in Australia someone has faced such a penalty for sharing deepfake content.
“This action sends a strong message about the consequences for anyone who perpetrates deepfake image-based abuse,” a spokesperson for the eSafety Commission said.
“eSafety remains deeply concerned by the non-consensual creation and sharing of explicit deepfake images which can cause significant psychological and emotional distress.”
Two years to justice
Rotondo was arrested in late October 2023, and the court heard that he “acknowledged that his posting of deepfake images could cause upset and distress”, but that he “did not express remorse” and said that making the deepfakes was a “hobby” and that it was “fun”.
He had earlier been given a notice and direction to remove the deepfakes from a site called MrDeepFakes.com, which has since been shut down, but failed to comply with this.
After being arrested, Rotondo eventually provided authorities with his passwords, and the images and videos were removed.
In December 2023, Rotondo was also fined $25,000 for contempt of court after admitting to breaching three court orders made as part of the case. The orders required him to remove the images, to refrain from posting more deepfakes, and not to publish the orders themselves, as they contained the names of the women targeted.
A day after receiving the orders, the court heard, Rotondo forwarded them to eSafety Commission staff and 49 other email addresses, including those of media outlets.
Federal Court Justice Erin Longbottom said that Rotondo’s actions were “serious, deliberate and sustained”.
One of the victims told the court that she was “horrified” to be caught up in the situation.
“Even though it is clearly a fake video, I feel violated, vulnerable and completely without agency,” she said.
Validation
On top of the penalty, Rotondo has also been ordered to pay the legal costs incurred by the eSafety Commission.
Justice Longbottom said that this “not only validates the decision taken by the Commissioner to prosecute the contraventions but also serves as a public warning to others that posting non-consensual intimate images of individuals is a serious matter”.
eSafety Commissioner Julie Inman Grant posted on LinkedIn, saying the agency will continue to look for relief and justice for Australians experiencing image-based abuse.
“We brought this case in 2023 and it has been filled with twists and turns, but we think this is an important test in the use of our remedial powers towards determined perpetrators and hope this serves as an important deterrent for this cruel, destructive and anti-social misuse of easily accessible AI tools,” Inman Grant said.
The eSafety Commission also recently began enforcement action against a tech company offering AI-generated “nudify” services used to create deepfake sexualised images of Australian school children.
Earlier this month it issued a formal warning to a UK-based tech firm for its provision of “nudify” deepfake services.
According to the eSafety Commission, the firm's site is visited by about 100,000 Australians per month and has been used to generate deepfakes of school children.
Earlier this year legislation criminalising the sharing of non-consensual AI-generated sexually explicit deepfakes passed into law.
The Act introduces new criminal penalties for sharing non-consensual sexually explicit deepfake material, along with an aggravated offence for those who also created the content.
Soon after, it was revealed that a Canberra public servant had created more than 100 sexually explicit AI-generated deepfake images of at least 16 of his co-workers.
This was reported to the police, but there were concerns that he could not be charged: the legislation focuses on the sharing of deepfakes, and police could not prove that he had distributed the images online.
Last month it was revealed that Grok’s new video generation mode was reportedly creating uncensored nude deepfakes of women and female celebrities, even when asked not to depict them naked.