A male high school student in Sydney is under investigation after he was accused of using artificial intelligence software to create explicit deepfake images depicting other students.
Warning: This story contains references to image-based abuse.
The senior student, who has not been named in order to protect alleged victims, attends a school in Sydney’s south-west.
He has been referred to police over reports of image-based abuse of female students and for allegedly making fake social media profiles depicting some students.
Authorities believe the images were created at home during the school holidays, and not on school grounds.
A New South Wales Department of Education spokesperson told Information Age that the department was “working closely” with police to assist in their investigation.
“We do not tolerate such behaviour and will take the appropriate action,” they said.
“Our highest priority is to ensure our students feel safe and any decision about this student’s future involvement in the school will be based on that.
“We are helping affected students with appropriate wellbeing support and will do so as long as required.”
‘There will be disciplinary action’
NSW education minister Prue Car described the alleged offending as “abhorrent” and “a disgusting turn of events” which would not be tolerated.
“We will leave that investigation of possible criminality to the police but there will be disciplinary action for the student,” she said on Thursday, as ABC News reported.
“Our priority is making sure that all the affected students are okay and that they are okay to return on day one, term one.”
In a statement posted to social media on Thursday, Australia’s eSafety Commissioner Julie Inman Grant said Australians could report image-based abuse to her office to have the images removed.
“Whether real or doctored images, we know that image-based abuse can cause significant and enduring pain and anxiety,” she wrote.
“We are here to help remediate these harms and can use other functions to hold platforms and perpetrators to account.”
Explicit deepfakes on the rise at schools
The investigation into the alleged conduct of the male student in south-west Sydney comes amid the continued proliferation of accessible deepfake and ‘nudify’ applications, which can manipulate real photos of a person to create non-consensual explicit material, including still images and video.
Inman Grant previously said her office had received its first reports of sexually explicit content being generated by students and used to bully others in August 2023.
In June 2024, a male teenager in Victoria was arrested and later released after dozens of female students from private school Bacchus Marsh Grammar allegedly had their social media photos used to create AI-generated nude images.
In the same month, the federal government introduced legislation providing for jail sentences of up to six years for anyone who shared non-consensual AI-generated intimate images.
It also proposed introducing more serious aggravated offences for those who created such images “for the purposes of non-consensual sharing online”, with a higher penalty of seven years’ imprisonment.
Some experts and advocacy groups warned the proposed laws were not strong enough to prevent the “prolific” use of AI deepfake tools before the legislation was eventually passed into law in August.
If you need someone to talk to, you can call:
- Lifeline on 13 11 14
- Beyond Blue on 1300 22 46 36
- Headspace on 1800 650 890
- 1800RESPECT on 1800 737 732
- Kids Helpline on 1800 551 800
- MensLine Australia on 1300 789 978
- QLife (for LGBTIQ+ people) on 1800 184 527
- 13YARN (for Aboriginal and Torres Strait Islander people) on 13 92 76
- Suicide Call Back Service on 1300 659 467