Anthropic says governments “should not resort to threats or retaliation” when dealing with contractors with which they disagree, after the American artificial intelligence firm sued the Trump administration for labelling it a supply-chain risk because it did not agree to new terms for military use of its systems.
The company rejected an ultimatum from the US Department of War last month to allow its technology to be used across military applications.
Anthropic CEO Dario Amodei said the use of AI in autonomous weapons and in mass domestic surveillance were red lines the company would not cross.
The decision cost Anthropic lucrative government contracts, but the popularity of its Claude AI models spiked, and employees at other leading AI companies such as OpenAI and Google voiced support for its stance.
“We are grateful to have received support from customers, partners, and numerous others,” Anthropic told an Australian Senate committee in written answers received by parliament on 17 March, and published this week.
“In our view, there is a strong consensus that, when a government disagrees with a contractor, the appropriate next step is offboarding that vendor in favour of another that is better aligned with the government’s desires,” the company said.
“The government should not resort to threats or retaliation.”
‘We do support the lawful use of Claude’
Anthropic was asked to comment on the Department of War case by independent senator David Pocock during a 12 March hearing of a Senate committee examining information integrity in climate and energy.
Anthropic’s only representative at the hearing – Evan Frondorf, the head of external policy and partnerships for the company’s safeguards team – said he could not comment on the US case at the time, so Anthropic took the question on notice to reply at a later date.
“As we’ve stated, we do support the lawful use of Claude for defence and intelligence purposes,” Frondorf said during the hearing.
“But with two exceptions: The mass domestic surveillance of Americans, because we believe that it is incompatible with democratic values; and the use of fully autonomous weapons, because we believe AI is not reliable enough at the moment to take humans out of the loop.
“... That is our core principle when it comes to defence and intelligence.”

Anthropic CEO Dario Amodei (left) and US Secretary of War Pete Hegseth (right) have engaged in a public feud. Images: YouTube / X
ChatGPT maker OpenAI, which previously employed Anthropic’s co-founders, announced an agreement with the Pentagon just hours after Anthropic’s deal fell through.
Its CEO Sam Altman later said on social media that the company “shouldn’t have rushed” to make its deal on the same day, and suggested it “looked opportunistic and sloppy”.
Following criticism, Altman said OpenAI was amending its agreement with the Department of War to add language which stated its AI system “shall not be intentionally used for domestic surveillance of US persons and nationals”.
OpenAI also said its agreement with the US government meant its technology could “not be used to independently direct autonomous weapons”, but the company has faced backlash for agreeing to the Department of War’s terms.
Anthropic pledges to cover cost of Australian grid upgrades
Anthropic also confirmed in its written responses to the Senate committee on climate change and energy that it would offer to cover the cost of grid upgrades in Australia if it committed to building its own computing infrastructure in the country, as it has in the US.
Asked by Greens senator Peter Whish-Wilson whether Anthropic had lobbied the US government for a national fossil fuel target for data centres, the company denied it had done so.
"Our public energy advocacy has focused on expanding all forms of generation — including renewables — to meet AI infrastructure demand,” it said in response to a question taken on notice.
“We have made a public commitment to cover the full cost of grid upgrades and bring net-new power generation online to match our demand, and would extend that commitment to Australia.”
Frondorf previously told the committee on 12 March that Anthropic expected to take that approach “in any market where we build infrastructure”.
Anthropic announced earlier this month that it was “exploring opportunities” to expand its computing capacity in Australia.
“We're exploring adding local capacity through our third-party partners in Australia, using infrastructure already in place,” the company said in a statement.
“… Beyond that, we're in early conversations about longer-term infrastructure in the region, and we'll share more as those plans take shape.”
Anthropic said it planned to “deepen” its engagement with Australian institutions and would “collaborate on projects that advance Australia’s national interests and priority sectors”.
The company also confirmed plans to open an office in Sydney, amid what it called “strong demand from businesses in Australia and New Zealand”.
Anthropic is currently recruiting for two roles in its Sydney office, as well as a government policy and external affairs leader in Canberra, according to its website.