A public feud is playing out between American AI firm Anthropic – best known for its Claude series of AI models – and the US White House and Department of War.
The disagreement has highlighted a tension between the world's most powerful military, which wants unrestricted use of frontier AI models, and a company which has marketed itself as a leader in ethical and safe AI deployment.
Here’s what you need to know about their feud, and why the Pentagon has ditched Anthropic.
How did the disagreement begin?
Despite being the first generative AI company to have its technology used in classified US military operations, Anthropic has been criticised by the White House for vocally supporting AI safeguards and ethical use.
The company was one of a handful of AI firms – alongside Google, OpenAI, and xAI – which signed contracts in 2025 with the US Department of War (previously known as the Department of Defence), worth up to $US200 million ($283 million) each.
Anthropic’s Claude models have since been used by US defence agencies for tasks such as intelligence analysis, simulations, planning, and cybersecurity operations, according to the company.
However, Anthropic has faced months of condemnation from the White House and its advisors, partly due to its backing of AI safeguards and support for state-based AI laws in the US (which the White House has tried to quash).
US President Donald Trump’s AI and crypto ‘czar’ David Sacks accused Anthropic of engaging in “fear-mongering” over AI regulation in 2025, and even suggested the company was pushing so-called “woke AI and other AI regulations”.
Tensions with Anthropic appeared to escalate when Claude was reportedly used during the US military’s capture of Venezuela’s then-president, Nicolás Maduro, in January 2026.
While the Pentagon did not confirm whether Claude was used during the raid, US defence officials soon began to reevaluate their contracts with Anthropic.
“Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War indicating that they might not approve if it was,” an official reportedly told Axios.
Anthropic said it had “not discussed the use of Claude for specific operations with the Department of War”.
In the same month, US Secretary of War Pete Hegseth told his department to “utilise [AI] models free from usage policy constraints that may limit lawful military applications”.
When did things hit a turning point?
Things came to a head when Hegseth gave Anthropic an ultimatum during meetings last week.
He threatened to remove the company from the Pentagon’s supply chain if it did not agree by Friday to allow its technology to be used across military applications.
Hegseth threatened to label Anthropic a supply chain risk – a move typically aimed at foreign companies and adversaries, and one which would threaten Anthropic’s government contracts.
He also reportedly threatened to invoke the US Defence Production Act, which could allow the country's military to use Anthropic's technology however it wanted.
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” Anthropic CEO Dario Amodei said in a statement on Thursday.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

Anthropic is known for its Claude series of AI large language models (LLMs). Image: Shutterstock
Amodei had told Hegseth that using AI for autonomous weapons and for mass domestic surveillance were red lines Anthropic would not cross.
The company had also voiced concern over the US government potentially purchasing data on citizens’ movements or web browsing through legal means, then analysing it using AI.
“Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale,” Anthropic said.
Amodei said the company had “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner”, but added that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values”.
The Pentagon said it has no interest in applying AI to domestic surveillance, or to autonomous weapons which could identify and engage targets without human involvement.
Did the US government follow through on its threats to Anthropic?
Yes, partly.
After Anthropic refused to agree to the government's terms and the Friday deadline passed, Hegseth declared the company a supply chain risk to national security, but stopped short of invoking the Defence Production Act to potentially force its compliance.
US President Donald Trump wrote on social media that he had directed “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology”.
“We don’t need it, we don’t want it, and will not do business with them again!” he said, describing Anthropic as a “Radical Left AI company run by people who have no idea what the real World is all about”.
Hegseth accused Anthropic of betraying America by not allowing “full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic”.
“Instead, Anthropic and its CEO Dario Amodei have chosen duplicity,” he wrote on X.
“Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.”
How did Anthropic respond?
Amodei said Anthropic was “deeply saddened” by the US government’s decision and argued Hegseth’s designation was “legally unsound” and “set a dangerous precedent for any American company that negotiates with the government”.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Amodei said in a statement.
“We will challenge any supply chain risk designation in court.”
Despite Trump’s call for the immediate severing of ties with Anthropic, the US military reportedly used Claude during its attacks against Iran on the weekend.
Hegseth had previously stated the Department of War would phase out Anthropic’s products over the next six months “to allow for a seamless transition to a better and more patriotic service”.
Have any other AI companies agreed with the US military’s demands?
At least two other leading American AI firms have agreed to allow their models to be used in classified US military systems, amid Anthropic’s ousting.
OpenAI, which makes the popular chatbot ChatGPT and previously employed Anthropic’s co-founders, announced an agreement with the Pentagon just hours after Anthropic’s fell through.
Elon Musk’s xAI has also reached an agreement with the US military for its controversial model Grok to be used in classified systems.
It remains unclear where Google’s negotiations stand, but its parent company Alphabet removed a longstanding ban on AI being used for developing weapons and surveillance tools in 2025.
OpenAI said its contract with the Department of War included red lines against mass domestic surveillance and autonomous weapon systems, and would also not allow its technology to be used for “high-stakes automated decisions”, such as ‘social credit’ systems.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company said in a statement.
“In our agreement, we protect our red lines through a more expansive, multi-layered approach.
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections.
“This is all in addition to the strong existing protections in US law.”
However, OpenAI CEO Sam Altman admitted he was worried about the potential for a future dispute with the Pentagon over what constituted legal use of AI.
“If we have to take on that fight we will, but it clearly exposes us to some risk,” he wrote on X.
“I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that,” he added.
As part of its deal with the Department of War, OpenAI said it asked the US government to make the same contract terms available to all other AI firms.
The company said it told the US government it did not believe Anthropic should be designated a supply chain risk.
Has Anthropic received much support from the public?
While the White House and its supporters have continued to criticise Anthropic for not complying with the Department of War's requests, the company has seen a swell of support from many consumers.
Anthropic’s Claude became the number one app in Apple’s US App Store download charts on Saturday.
It also rose to third place in Apple's Australian App Store and fourth on Google Play at the time of writing, still behind ChatGPT.
Numerous posts on social media have also shown ChatGPT users cancelling their subscriptions or accounts – with thousands more pledging to do so, citing the company’s deal with the Department of War.
“OpenAI just made a deal with a devil and lost this customer of two years,” one user wrote.
“Bye bye GPT, and hello Claude,” wrote another.
Some companies have also begun vocally moving away from ChatGPT.
Australian digital agency Enterprise Monkey announced on Monday it would switch from ChatGPT to Claude over OpenAI’s deal with the Pentagon, its recent introduction of ads in ChatGPT, and “hallucination issues” it had experienced with the chatbot.
CEO Aamir Qutub said: “When a government punishes a company for refusing to build surveillance tools, every tech company needs to decide which side they’re on.
“We’ve decided.”