The eSafety Commissioner has issued a series of legal notices to big tech firms ordering them to reveal the steps they are taking to address terrorism content being spread on their platforms.

Under the Online Safety Act, the eSafety Commissioner has the power to require certain companies to answer detailed questions about the steps they are taking to address particular content on their platforms.

The Commissioner has this week issued notices to Google, Meta, X, WhatsApp, Telegram, and Reddit ordering them to answer questions about what they are doing to prevent the posting and spread of terrorist and violent extremist material and activity, or face significant fines.

It coincides with the fifth anniversary of the Christchurch terror attack, which was broadcast live on Facebook on 15 March 2019.

eSafety Commissioner Julie Inman Grant said she is still receiving reports of this footage being shared on mainstream platforms.

“We remain concerned about how extremists weaponise technology like livestreaming, algorithms and recommender systems and other features to promote or share this hugely harmful material,” Inman Grant said in a statement.

“The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetrate such harm and that’s why we are sending these notices to get a look under the hood at what they are and are not doing.”

The Christchurch attack and other terrorism-related incidents have “underscored how social media and other online services can be exploited by violent extremists, leading to radicalisation and threats to public safety”, she said.

The spread of terrorist content on social media

According to a 2022 OECD report, Telegram tops the list as the mainstream platform with the most terrorist and violent extremist material, followed by YouTube in second and then X. Facebook and Instagram rounded out the top five platforms.

“It’s no coincidence we have chosen these companies to send notices to as there is evidence that their services are exploited by terrorists and violent extremists,” Inman Grant said.

“We want to know why this is and what they are doing to tackle the issue.”

The legal notices are needed because none of the companies in question provided the requested information voluntarily, Inman Grant said.

“Disappointingly, none of these companies have chosen to provide this information through the existing voluntary framework – developed in conjunction with industry – provided by the OECD,” she said.

“This shows why regulation, and mandatory notices, are needed to understand the true scope of the challenges, and opportunities.

“Transparency and accountability are essential for ensuring the online industry is meeting the community’s expectations by protecting their users from these harms.”

The six companies now have 49 days to respond to the legal notices.

The eSafety Commissioner is in an ongoing legal fight with X over the social media company’s refusal to comply with a similar notice requiring information about actions taken to deal with child sexual exploitation material.

The Commissioner issued X with a $610,500 fine over this failure, but the company has refused to pay it and is appealing the decision.

GenAI’s role to be scrutinised

The eSafety Commissioner has also demanded information on the role the companies’ generative AI offerings play in the spread of this type of harmful content.

“We are concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm,” Inman Grant said.

It comes as a new industry code banning AI-generated abusive material from search engine results came into effect this month.

The code, which covers search engine providers, imposes a number of obligations aimed at stamping out child sexual abuse material and illegal content being returned in search results, particularly through the use of generative AI.