Anthropic is partnering with some of the biggest technology companies in the world on a new project aiming to secure critical infrastructure from the threats posed by AI, revealing that a yet-to-be-released model has found “thousands” of significant cybersecurity vulnerabilities.

The AI giant behind the large language model Claude unveiled Project Glasswing this week – an initiative in partnership with the likes of AWS, Apple, Cisco, Google, Microsoft, Nvidia, and Palo Alto Networks, aiming to “secure the world’s most critical software”.

Anthropic announced that its unreleased frontier model Claude Mythos Preview had already found thousands of “high-severity vulnerabilities”, including some in every major operating system and web browser.

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” an Anthropic blog post said.

“The fallout – for economies, public safety and national security – could be severe.

“Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

The project’s launch partners will use the preview of Mythos as part of defensive security work, with the lessons to be shared with the whole industry.

Anthropic has also extended access to the tool to more than 40 organisations that build or maintain critical software infrastructure, and has committed up to $US100 million ($142 million) in usage credits for the model, as well as $US4 million ($5.7 million) in direct donations to open-source security organisations.

‘We need to act now’

Project Glasswing is the “starting point” in the fight against the use of AI to exploit software vulnerabilities at an untold scale, Anthropic said.

“No one organisation can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play,” its blog post said.

“The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months.

“For cyber defenders to come out ahead, we need to act now.”


Partner organisations taking part in Project Glasswing. Image: Anthropic

Bugs in software used to run critical infrastructure may be used by malicious actors to hijack systems, disrupt operations, or steal data, Anthropic said, and the growth of generative AI tools means the cost, effort, and level of expertise required to find such bugs have drastically dropped.

Mythos is now competitive with the best humans at finding and exploiting vulnerabilities, Anthropic added.

“The vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated,” the company said.

“Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world’s most important software.

“This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the US and its allies.”

Vulnerabilities secret for decades

Some of the vulnerabilities identified by the unreleased Claude Mythos included a 27-year-old issue in OpenBSD – which is known as one of the most hardened operating systems in the world – that would allow an attacker to “remotely crash any machine running the operating system just by connecting to it”, Anthropic said.

Mythos also allegedly found a 16-year-old vulnerability in the audio and video framework FFmpeg, in a line of code that automated tools had tested five million times without the issue ever being detected.

These vulnerabilities have been reported and patched, Anthropic said.

The existence of Claude Mythos was inadvertently revealed last month when internal Anthropic documents found stored in a publicly accessible data cache described it as “by far the most powerful AI model we’ve ever developed”.

Anthropic, which recently inked an agreement with the Australian government, said it was “in ongoing discussions with US government officials about Claude Mythos Preview”, despite being embroiled in a legal battle with the Trump administration after the company didn’t agree to new terms for military use of its AI systems.