The eSafety Commissioner has released a new tech trends position statement that provides specific safety interventions the industry can adopt immediately to improve user safety.
“This month, we received our first reports of sexually explicit content generated by students using this technology to bully other students.
“That’s after reports of AI-generated child sexual abuse material and a small but growing number of distressing and increasingly realistic deepfake porn reports,” eSafety Commissioner Julie Inman Grant said.
She called on the tech industry not to turn a blind eye, warning that Australia's collective safety and wellbeing is at stake if protections are not put in place immediately.
“If industry fails to systematically embed safety guardrails into generative AI from the outset, harms will only get worse as the technology becomes more widespread and sophisticated,” she said.
“The danger of generative AI is not the stuff of science fiction. Harms are already being unleashed, causing incalculable harm to some of our most vulnerable.
“Our colleagues in hotlines, NGOs and in law enforcement are also starting to see AI-generated child sexual abuse material being shared.
“Synthetic versions of this horrific material could complicate child abuse investigations by making it harder for victim identification experts to distinguish real children who need to be rescued from fake imagery.
“And it has long been a concern that AI is being trained on huge datasets whose balance, quality and provenance have not been established, reinforcing stereotypes and perpetuating discrimination,” she said.
Incorporating advice from respected domestic and international AI experts, the paper details a range of safety measures and interventions, such as:
- Appropriately resourced trust and safety teams
- Age-appropriate design supported by robust age-assurance measures
- Red-teaming and violet-teaming before deployment
- Routine stress tests with diverse teams to identify potential harms
- Informed consent measures for data collection and use
- Escalation pathways to engage with law enforcement, support services or illegal content hotlines, like eSafety
- Real-time support and reporting
- Regular evaluation and third-party audits.
Inman Grant said that while generative AI undoubtedly holds tremendous opportunities for creativity and collaboration, she hoped advanced AI would also deliver more accurate detection of illegal and harmful content.
eSafety is also currently considering a revised Search Engine Code, which directly considers generative AI. Mandatory standards are also being developed for Relevant Electronic Services and Designated Internet Services.
“While our regulatory powers around online safety are the longest standing in the world, regulation can only take us so far. Online safety requires a coordinated, collaborative global effort by law enforcement agencies, regulators, NGOs, educators, community groups and the tech industry itself.
“Harnessing all the positives of generative AI, while engineering out the harms, requires a whole-society response,” she said.
“Let’s learn from the era of ‘moving fast and breaking things’ and shift to a culture where safety is not sacrificed in favour of unfettered innovation or speed to market.
“If these risks are not assessed and effective and robust guardrails integrated upfront, the rapid proliferation of harm escalates once released into the wild. Solely relying on post-facto regulation could result in a huge game of whack-a-troll.”
The tech trends position statement also sets out eSafety’s current tools and approaches to generative AI, which includes education for young people, parents and educators; reporting schemes for serious online abuse; transparency tools; and the status of mandatory codes and standards.
If you know of a child who is being groomed, or who has had explicit material of them shared or threatened to be shared, report it to the Australian Federal Police-led Australian Centre for Countering Child Exploitation.