India’s “nest of spies” was created with old-school methods two years before generative AI (GenAI) emerged, but authorities now fear AI’s “step change in adversary capability” will flood Australia with a “tsunami” of espionage and child sexual abuse materials.
Noting the recent Wakeley church stabbing’s apparent link to extremist activities, ASIO director general of security Mike Burgess warned in a recent National Press Club address that widespread access to GenAI “is likely to make radicalisation easier and faster.”
“We also anticipate AI will increase the volume of espionage” targeting Australia and other countries, he said – posing challenges for authorities who, he explained in a February address, face a “re-emergence” of sabotage threats against critical infrastructure and online collaborations amongst “nationalist and racist violent extremists”.
Foreign spies’ techniques had steadily improved in the few years since India’s ‘nest of spies’ – to which Burgess alluded in a 2021 speech but whose nation-state affiliation was only revealed in a new Washington Post investigation – was quietly removed from Australia.
And while foreign recruiters used a “cookie-cutter, one-size-fits-all pitch” to win over Australian moles via social media platforms, Burgess warned that rapidly improving GenAI capabilities “will likely empower more bespoke, personalised social engineering.”
“Terrorists and spies are early adopters” of new technologies, he said, warning that GenAI “will facilitate foreign interference by allowing foreign intelligence services to conduct more prolific, more credible and more effective disinformation campaigns.”
“Adversaries will have a powerful tool to exploit big data sets, enhancing their ability to target vulnerabilities in people, software, and critical infrastructure.”
AFP commissioner Reece Kershaw fears GenAI’s use as a catalyst for extreme and exploitative activities, warning that many Australian children and other vulnerable people “are being bewitched online by a cauldron of extremist poison”.
This cauldron was being stirred by “jubilant” dark web child abusers who are using GenAI image generators to create made-to-order child sexual abuse material (CSAM) indistinguishable from the real thing.
Many images already uncovered by AFP investigators include the likenesses and clothing of known real child abuse victims – as well as children whose images were scraped from the public social media accounts of enthusiastic parents.
The ease of generating such images – which Kershaw said could be readily detected a year ago but now “present real difficulties for even highly trained analysts to distinguish” – has left the AFP’s CSAM enforcement teams fearing that they will be overwhelmed by a “tsunami of AI generated child abuse material [that] we know is coming”.
Large-scale seeding of spurious AI-generated images could dilute enforcement capabilities, leaving investigators to spend weeks or months chasing digital ghosts instead of finding and saving real children – whose abuse is escalating so quickly that the AFP’s Australian Centre to Counter Child Exploitation (ACCCE) has already received more CSAM reports this financial year than it did in the entirety of 2022-23.
Encryption is making enforcement harder
Concerns about the growing influence of nation-state affiliated cyber criminals were writ large in Verizon’s newly released 2024 Data Breach Investigations Report (DBIR), which found that 1 in 4 data breaches in the Asia-Pacific region involved espionage.
That’s far higher than Europe (6 per cent) and North America (4 per cent) – confirming that Australia and its neighbours are favoured targets for technology-enabled criminals and nation-state spies.
The region’s strong base of advanced manufacturing, R&D, and supply chains has driven the surge, Verizon Global Enterprise Asia-Pacific regional vice president Robert le Busque told Information Age.
As well as private firms trying to steal intellectual property, le Busque explained, Verizon’s security team has seen “highly organised, typically well-funded groups looking to gain access to critical infrastructure or other systems of national significance.”
“They are targeting information that might not be immediately monetisable, but creates the opportunity for effect or gain later on,” he said, urging organisations to “rapidly double down on efforts to better protect themselves and close that window of opportunity for cyber criminals.”
Staring down a surge of AI-enabled cyber crime, Burgess and Kershaw appealed to social media companies to help authorities apply existing laws – which allow for interception of communications with a warrant – to encrypted social-media platforms that had become safe havens for criminals.
“The reality for us is that in most of our counter terrorism and counter espionage cases, encrypted messaging apps are being used,” Burgess said, “and that’s a problem for us.”
For example, he said, ASIO is currently investigating a “nationalist and racist extremist network” using an encrypted chat platform to communicate – with other Australians and with overseas extremists – to share “vile propaganda, posting tips about homemade weapons and discussing how to provoke a race war.”
Because the chat room is encrypted, he said, “ASIO’s ability to investigate is seriously compromised” – forcing the organisation to use “more overt” monitoring techniques that “force us to declare our hand earlier than we’d like – tipping off the target and potentially compromising our investigation.”
“We have to expend considerably more resources to deal with these threats, which limits the number of threats we can deal with at any one time.”
Both leaders called for social media companies to engage in what Burgess called a “sensible conversation” about assisting legally authorised investigations.
“The majority of federal crime is technology enabled,” Kershaw said, citing not only online child exploitation but cyber crime, fraud, illicit drug trafficking, terrorism, and foreign interference – yet conceding that AI and encryption continue to keep law enforcement bodies around the world on their toes.
“We used to plan the future of policing through the lens of the years to come,” he said, “but now, because of constant advances in technology, the years to come are almost every 24 hours.”