Australian cyber officials and financial regulators are warning local organisations to bolster their security posture after Anthropic’s unreleased Claude Mythos AI model allegedly identified cybersecurity gaps that had gone undetected for years.

The American AI firm officially revealed the existence of Claude Mythos earlier this month following an earlier leak, but said the model was too “powerful” to publicly release due to its advanced coding and cybersecurity capabilities.

Australian authorities have since issued warnings about the increased capabilities of frontier AI models, including Claude Mythos – which has so far been provided to major US technology companies to stress test their systems under Anthropic’s Project Glasswing.

In a post written for business and government audiences earlier this month, federal cybersecurity agency the Australian Signals Directorate (ASD) said Claude Mythos is “an illustrative example of what frontier AI technology could mean for the cybersecurity community and how we can collectively respond”.

Australian organisations “should implement a strong cybersecurity baseline” to prepare for increasingly advanced frontier AI models, including by using AI to identify vulnerabilities, said ASD, which admitted “no mitigation strategy can provide complete protection”.

While ASD pointed to Anthropic’s Project Glasswing as “an example of the cybersecurity benefit that AI can bring”, the agency did not directly respond to questions from Information Age about whether it had sought access to Claude Mythos Preview.

A spokesperson said ASD “works tirelessly to explore technical innovation and further build our strong partnerships with industry”.

The executive director of Australia’s National AI Centre, Lee Hickin, wrote that the capabilities demonstrated by Claude Mythos have “real implications for anyone operating online, from smaller firms to large industry leaders, particularly across sectors like healthcare, banking, energy, telecommunications and critical infrastructure”.

“As capability advances, risk management needs to lift with it,” he said.

“When models can expose or exploit system vulnerabilities, the implications extend well beyond big tech.

“That raises the bar for everyone.”

‘The worry is warranted, but the timeline is wrong’

Australian security researcher and ethical hacker Jamieson O’Reilly told Information Age that while he believes Anthropic’s gatekeeping of Mythos makes sense, it is not the safety net some people may think it is.

That’s because some researchers and adversaries have already demonstrated the ability to quickly find and chain together vulnerabilities using publicly available AI models, the founder of cybersecurity firms Dvuln and Aether AI said.

“The CISOs [chief information security officers] we're briefing are worried, but often about the wrong thing,” O'Reilly said.

“Most of the concern is focused on Mythos, specifically, ‘What happens when this model gets out?’

“The real concern should be that Mythos-like capabilities already exist using publicly available models wrapped in the right agentic framework.

“... The worry is warranted, but the timeline is wrong.

“Most CISOs think they have 12 to 18 months to prepare, but the capability is already operational.”

Anthropic is providing Claude Mythos Preview to major US-based technology companies through Project Glasswing. Image: Shutterstock

Reserve Bank, financial regulators ‘closely monitoring’ Mythos

Executives of major American banks were reportedly called to a meeting with the US Treasury secretary and Federal Reserve chair earlier this month to urgently discuss Claude Mythos.

The Reserve Bank of Australia (RBA) confirmed to Information Age that it is also “closely monitoring” Claude Mythos and has discussed the rise of powerful frontier models with government, peer regulators, and regulated entities in the financial system.

“The RBA has noted the recent announcement by Anthropic regarding its Claude Mythos release’s coding and vulnerability identification capabilities and the community’s response,” a spokesperson said.

“... The RBA, along with peer regulators and government agencies, will continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system.”

Banking, insurance, and superannuation regulator APRA (Australian Prudential Regulation Authority) said it is aware of Claude Mythos’s capabilities and “is closely monitoring this development including engaging with peer regulators, government agencies, and regulated entities to share insights and intelligence on emerging AI risks and opportunities”.

Corporate regulator ASIC (Australian Securities and Investments Commission) said it is also “closely monitoring” developments in AI following Claude Mythos’s announcement, “to assess possible implications for the Australian market”.

“All participants in the financial system have a duty to balance innovation with the responsible and ethical use of emerging technologies,” a spokesperson said.

“... ASIC expects financial services licensees to be on the front foot every day to ensure that their customers and clients aren’t put at risk by inadequate controls.”

Concern about frontier AI models among Australian authorities echoes similar recent comments from cybersecurity leaders in countries such as Singapore and the United Kingdom.

An Anthropic spokesperson told Information Age that the company is “coordinating closely with the US government on the timing, scope, and structure of any international access” to Mythos, as it believes “allied democracies need access to these defensive capabilities”.

“The vulnerabilities we are finding affect software deployed globally — this is not a problem any one country can solve alone,” they said.

America’s National Security Agency (NSA) is reportedly already using Mythos Preview, even though the defence officials who oversee the agency recently declared Anthropic a supply chain risk after the company declined to agree to new terms for military use of its AI systems.

Anthropic is also investigating reports that Mythos Preview may have already been accessed by a small group of unauthorised users through a third-party contractor, according to Bloomberg.