Messaging, dating, online file storage and other online services will be required to scan for “seriously harmful” content after the eSafety Commissioner took over development of two new industry standards designed to help maintain what it calls a “safe online environment”.
eSafety moved to write its own mandatory industry standards – the Relevant Electronic Services Standard (RESS) and the Designated Internet Services Standard (DISS), released this week in draft form and open for submissions until 21 December – after rejecting industry-drafted codes in May because, the regulator ruled, they “did not meet the statutory requirements for registration because they did not contain appropriate community safeguards for users in Australia”.
Deficiencies in the codes previously proposed by big tech companies, eSafety Commissioner Julie Inman Grant said in announcing the new policy development process, included a failure to demonstrate “a strong commitment to identify and remove known child sexual abuse material [CSAM]”.
Although providers of email, messaging, SMS and other communications services already use natural language processing rules to detect and filter spam and malware, Inman Grant said, they still aren’t doing the same for CSAM and other “worst of the worst” content – referred to in the draft standards as Class 1A and Class 1B content – despite the existence of accurate and “widely available” tools like Microsoft PhotoDNA.
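PhotoDNA’s internals are proprietary, but the underlying technique – hash matching – is conceptually simple: compute a perceptual hash of each incoming image and compare it against a curated list of hashes of known material. The sketch below is illustrative only, using the open-source Python imagehash library as a stand-in; the hash value, distance threshold and matching logic here are assumptions for demonstration, not PhotoDNA’s actual behaviour.

```python
# Illustrative sketch only: PhotoDNA is proprietary, so this uses the
# open-source `imagehash` library (pip install ImageHash) to show the
# general hash-matching pattern rather than PhotoDNA's real algorithm.
from PIL import Image
import imagehash

# Hypothetical hash list. In practice, hashes of verified material are
# supplied by clearinghouses; providers never curate this set themselves.
KNOWN_HASHES = {imagehash.hex_to_hash("d1c4b2a99e87f060")}  # placeholder value

# Perceptual hashes change only slightly when an image is resized or
# re-encoded, so matching tolerates a small Hamming distance.
MAX_DISTANCE = 5  # bits of difference tolerated (assumed threshold)

def matches_known_material(path: str) -> bool:
    """Hash an incoming image and compare it against the known-hash list."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

The key property, and the reason such tools can run at scale, is that matching is done against hashes of already-verified material: the service never needs to interpret image content itself, only to compare compact fingerprints.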
RESS will require online service providers to “proactively detect and remove” known CSAM and known pro-terror materials “if it is technically feasible,” a newly released discussion paper on the two standards explains.
Social-media giants have previously been grilled over their failure to police harmful material on their platforms, with eSafety fining Twitter/X $610,500 last month as Inman Grant excoriated that company, Google, and Discord for a “concerning” lack of enforcement and alleged that the companies “are not living up to their responsibilities and the expectations of the Australian community.”
The new rules also address the growing abuse of Generative AI (GenAI) tools that can, eSafety warned, “be used by perpetrators to generate synthetic CSAM and pro-terror material more easily and at scale” – including using GenAI to “scale sexual grooming and sexual extortion activity, including by creating material used to threaten a child.”
Such material has exploded on online services, with the recent inaugural meeting of the Australia-US Joint Council on Combatting Online Child Sexual Exploitation warning that “online child sexual exploitation and abuse is increasingly prevalent, commodified, and worsened by the speed, scale, and scope of digital technologies.”
Amidst reports that students are using GenAI tools to bully other students with synthetic sexually explicit content, DISS would adopt a “clear and targeted” approach that pushes GenAI service providers to build “impactful protections” for monitoring the use of “high impact” GenAI.
Dodging the encryption bullet
Calls for communications providers to actively police the content their users exchange revive the controversy around on-device content scanning – a practice so contentious that Apple ultimately walked back its plan to use on-device AI techniques to scan iPhone photo libraries for known CSAM.
With eSafety now calling for messaging providers to do just that, the debate is likely to re-emerge as the government prepares to update the Privacy Act amidst concerns that AI regulation is not keeping pace with GenAI tools.
RESS would not cover SMS and MMS services, which are not encrypted. It would apply to email and private online messaging services, but – recognising the ongoing controversy over the importance of encryption to messaging security – the discussion paper notes that RESS does not require encryption on those services to be “weakened or subverted”.
Service providers that claim it is technically infeasible to automatically scan and remove CSAM and pro-terror content will, however, be required to take “appropriate alternative actions” to “disrupt and deter the dissemination of” such material.
Coming on the heels of concerning UNSW research into Australian men’s sexual interest in minors, the decision to take over policy development was welcomed by anti-slavery advocate International Justice Mission Australia – months after it and more than 1000 people signed an open letter to the Prime Minister pushing for tighter rules on Big Tech.
“It has been concerning,” chief advocacy officer Grace Wong said, “to witness recent developments where big tech companies have refused to comply with Australia’s Basic Online Safety Expectations to report on measures they take to protect children from online sexual abuse on their platforms.”
The second phase of code development, which will address Class 2 materials – those that are not necessarily illegal but are “inappropriate for children” – has not yet commenced.