A new industry code banning abusive material generated by AI in search engine results has come into effect in Australia.

The code, which covers search engine providers such as Google and Bing, was confirmed in September last year and came into effect on Tuesday.

It imposes a number of obligations on these tech companies to stamp out child sexual abuse material and other illegal content in search results, particularly where generative AI is used to produce those results.

The Registered Online Safety Code requires Google, Bing and other search engine providers to take adequate measures to address the risk of Class 1 material, which includes child sexual exploitation material, pro-terror material, and crime and violence-related content.

The onus is now on these tech giants to prevent the spread of the “worst-of-the-worst” online content, eSafety Commissioner Julie Inman Grant said.

“The commencement of the search engine code is really significant as it helps ensure one of the key gateways to accessing this material, through online search engines, is closed,” Inman Grant said.

“It will target illegal content and I will be able to seek enforceable penalties if search engines fail to comply with a direction to comply with the code.”


Under the code, search engines will be required to remove AI-generated child abuse images and other illegal content from search results.

This comes as Google and Bing have increasingly incorporated their respective generative AI tools into their search engines.

The safety code had to be renegotiated to include these technologies as a primary focus.

“Creating this code has not been smooth sailing,” Inman Grant said.

“The sudden and rapid rise of generative AI and subsequent announcements by Google and Bing that they would incorporate AI functionality into their search engine services meant the original code would have been out of date before it commenced,” she said.

“But I want to give thanks to the industry associations and key search engine providers for their hard work and willingness to go back and redraft the code to make it fit for purpose.

“What we’ve ended up with is a robust code that delivers broad protections for Australians.”

Under the code, tech companies will have to respond to reports and delisting requests from end-users, meaning they will be required to help users remove harmful content from search results.

The search engine providers will also have to “promptly notify law enforcement or appropriate non-government organisations” about potential child sexual exploitation material on their services, and “make clear” when their users are interacting with AI-generated content.

If a tech company subject to the code does not comply with these provisions, it will face a fine of up to $780,000 per day.

The online safety code was drafted by the search engine industry in consultation with the eSafety Commission.

Not good enough

The eSafety Commission is directly drafting codes for two other industry sectors after it was left unsatisfied with the draft codes those industries produced.

These will cover relevant electronic services, which include a range of private messaging and other communication services, and designated internet services, which include websites and apps that fall outside the categories covered by the other codes.

Full versions of these codes will be tabled in Parliament later this year.

The use of new generative AI tools to produce illegal content, such as child sexual abuse material, has been a primary concern for regulators around the world.

Australia’s Online Safety Act was written before generative AI tools such as ChatGPT came into public prominence, leading to concerns it is not adequately keeping pace with the surrounding risks.

In response, the government has brought forward a planned review of the Act by a full year, with former ACCC chair Delia Rickard to report back by the end of October.