Anthropic says an issue “caused by human error” is to blame for the leaking of part of the internal source code for its popular artificial intelligence coding assistant Claude Code.
It comes as the American AI company signs an agreement with the Australian government, while still reeling from a previous data leak last month which detailed its upcoming AI models.
Anthropic confirmed on Wednesday (AEDT) that “no sensitive customer data or credentials were involved or exposed” in the leak of Claude Code’s source code, which could give competitors insights into how the system was built.
“This was a release packaging issue caused by human error, not a security breach,” an Anthropic spokesperson told Information Age.
“We're rolling out measures to prevent this from happening again.”
The leak allegedly involved a source map file (.map) used for internal debugging, which was accidentally included in version 2.1.88 of Claude Code, released by Anthropic on Monday.
The file was reportedly used to reconstruct part of Claude Code’s source, totalling more than 512,000 lines of code.
Software engineer Chaofan Shou called attention to the leak on social media platform X and posted a file containing the source code, which was soon shared across developer platform GitHub and examined by numerous users.
The source code leak is understood to contain details of Claude Code’s internal structure and how its memory system works – which could potentially be useful for reverse-engineering efforts by AI enthusiasts or cyber threat actors – but does not include the model itself or the so-called 'weights' it learned during training.
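A source map is a JSON file that links minified JavaScript back to the files it was built from, and many bundlers embed the original text itself in a sourcesContent field, which is why shipping a .map file in a public release can expose a project’s source. The sketch below is a minimal illustration of that mechanism in Node and TypeScript; the file names are hypothetical and it does not reflect Anthropic’s actual tooling or release artefacts.

```typescript
// Minimal sketch, assuming a typical bundler-style source map that embeds
// original file text in "sourcesContent". File names are hypothetical and
// are not Anthropic's actual release artefacts.
import * as fs from "fs";
import * as path from "path";

// A .map file is plain JSON: "sources" lists the original file paths and
// "sourcesContent" (when present) holds each file's full original text.
const map = JSON.parse(fs.readFileSync("cli.js.map", "utf8"));

const outDir = "recovered";
(map.sources as string[]).forEach((source, i) => {
  const content: string | undefined = map.sourcesContent?.[i];
  if (!content) return; // some bundlers omit the embedded text
  // Strip any leading "../" segments so files land inside outDir.
  const dest = path.join(outDir, source.replace(/^(\.\.\/)+/, ""));
  fs.mkdirSync(path.dirname(dest), { recursive: true });
  fs.writeFileSync(dest, content);
});
```

This only recovers whatever text the bundler chose to embed; it says nothing about model weights, which are stored and served separately from the packaged client code.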
The rising popularity of Claude Code has helped establish Anthropic as a leading AI provider, and has caused stock market jitters for traditional software-as-a-service (SaaS) companies in recent months.
Claude code source code has been leaked via a map file in their npm registry!

Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8

— Chaofan Shou (@Fried_rice) March 31, 2026
Human error blamed for earlier leak confirming new Claude models
The leak of source code from Claude Code comes at an awkward time for Anthropic, after details of upcoming AI models and other documents were found to be publicly accessible in March, according to Fortune.
The blunder reportedly saw internal Anthropic documents stored in a publicly accessible data cache, including draft blog posts and details of a new and unreleased AI model called Mythos, under a new tier of models dubbed Capybara.
A draft blog post reportedly described Claude Mythos as “by far the most powerful AI model we’ve ever developed”, and Anthropic told Fortune the model marked “a step change” in AI performance.
The company planned to roll out Capybara-tier models to partner organisations first, a draft blog post reportedly said, as they were “currently far ahead of any other AI model in cyber capabilities”.
The post reportedly stated “it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders”.
After being told of the data leak, Anthropic reportedly told Fortune it had fixed the issue, which it said was caused by a “human error” in how the content management system (CMS) of its website was configured.
Analysts at Gartner wrote that while “Claude Code leads the AI coding agent market”, Anthropic’s source code leak, blog post leaks, and recent reports of service issues constituted “reliability and communication failures that demand software engineering leaders require vendors to demonstrate operational maturity”.
Anthropic signs agreement with Australian government
Anthropic and the Australian government have signed a memorandum of understanding (MoU) on AI safety research and development, the two parties announced on Tuesday.
The agreement states that Anthropic and the government intend to collaborate on the goals of Labor’s National AI Plan, which it released in December 2025, including “capturing the opportunities of AI, spreading the benefits, and keeping Australians safe”.
Anthropic committed to sharing its research on AI capabilities and risks with Australia’s AI Safety Institute and taking part in security and safety evaluations of its models, while also sharing economic data with the government about how AI is being used domestically.

Anthropic CEO Dario Amodei speaks during an event at Parliament House in Canberra on Tuesday. Image: Supplied
The company also announced it would provide $3 million in Claude credits to researchers at the Australian National University, Murdoch Children’s Research Institute, the Garvan Institute of Medical Research, and Curtin University.
Anthropic, which has previously confirmed it is “exploring opportunities” to expand its computing capacity to Australia, also used its MoU to confirm it would abide by the government’s recently released expectations for data centre development.
The company had previously pledged to cover the cost of grid upgrades in Australia if it committed to building its own computing infrastructure in the country, as it has in the United States.
Anthropic has also confirmed plans to open an office in Sydney, and is actively recruiting.
Major competitor OpenAI already has an office in Sydney, and announced an Australian data centre deal in December 2025.
Anthropic CEO Dario Amodei, who met with politicians in Canberra this week, said Australia was “a natural partner for responsible AI development”, given its “investment in AI safety”.
Minister for Industry, Innovation and Science, Tim Ayres, said the government’s MoU with Anthropic sent “a clear signal to Australians that we are open for business, where investment aligns with Australia’s priorities and Australian values”.
Anthropic is currently embroiled in a legal fight with the US government, after suing the Trump administration for labelling the company a supply-chain risk when it did not agree to new terms for military use of its AI systems.
In comments to an Australian Senate committee last month, Anthropic warned that governments “should not resort to threats or retaliation” when dealing with contractors they do not agree with.