Anthropic sued the Trump administration Monday, calling on federal judges in San Francisco and Washington to strike down a government order forbidding military contractors from partnering with the artificial intelligence company on the grounds that it poses a risk to national security.
“These actions are unprecedented and unlawful,” the company’s attorneys wrote in the San Francisco case. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
The Defense Department last week formally tagged the AI company as a supply-chain risk, the kind of label usually reserved for Chinese and Russian firms suspected of helping foreign spies. The move followed increasingly bitter negotiations over how the company’s technology might be used in warfare, with Anthropic seeking guarantees that the Claude model would not be used for mass domestic surveillance or to power fully autonomous weapons. The unprecedented step by the Pentagon came even as Anthropic’s tools were playing a central role in President Donald Trump’s bombing campaign in Iran.
The Pentagon declined to comment on the lawsuit.
In legal filings, Anthropic said the administration had overstepped its legal authority and violated the company’s First Amendment rights to speak about the limits of AI’s military applications. The company filed two cases to challenge two different laws the government is using to declare it a risk.
“Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible,” the company’s attorneys wrote in a complaint filed in U.S. District Court for the Northern District of California.
“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle.”
Mark Jia, a law professor at Georgetown University, said Anthropic had a strong chance of success in court because the law the Defense Department is relying on was designed to target companies linked to foreign adversaries.
“It is absurd for the government to argue that Anthropic is the kind of company meant to be addressed by this statute, when the War Department has repeatedly sought to obtain Anthropic’s services for national defense,” Jia said in an email, referring to the department by the Trump administration’s preferred name.
The battle has reverberated through Silicon Valley, raising questions about what limits AI developers should be able to impose on their technology when they do business with the government. Administration officials and the Defense Department have demanded the freedom to use AI systems for any lawful purpose, arguing that the government must have the final say.
After Anthropic CEO Dario Amodei refused to agree, Trump said last month that he was ordering federal agencies to stop using Claude. Defense Secretary Pete Hegseth went further, saying he was imposing a far-reaching ban on the company doing any work with military contractors.
But behind the scenes, the two sides continued to talk last week. Technology and defense figures lobbied both camps to de-escalate, warning of the ripple effects of branding a leading American company a security risk in an industry where AI labs, tech giants and hardware makers are intertwined with one another and with the Pentagon.
The discussions finally came to an end Thursday, according to a defense official — a day after tech news site the Information published a caustic internal staff memo in which Amodei said the administration was opposed to the company “because we haven’t given dictator-style praise to Trump.” The leak of the note contributed to the ultimate breakdown of the talks, according to the defense official and a second person familiar with the discussions.
“It blew up negotiations,” said the second person, speaking on the condition of anonymity to describe private talks. Amodei apologized for the memo in a statement.
Anthropic said the government’s actions had immediate consequences for the company, alleging that they placed hundreds of millions of dollars in jeopardy. Some of Anthropic’s partners that are also federal contractors have questioned whether they can continue to do business with the company, according to the complaint.
For now, the military is continuing to rely on Claude to help carry out the assault on Iran. The AI tool is embedded in the military’s Maven Smart System, which helps commanders analyze intelligence and identify targets to strike. In the lead-up to the military campaign, the system suggested hundreds of targets, with precise coordinates, and ranked them in order of importance, people familiar with the system previously told The Washington Post. It also speeds up planning dramatically and helps evaluate the aftermath of strikes, one of the people said.
Defense officials have said they are aware of their dependence on the system, and Trump said he was providing a six-month phaseout of Anthropic’s tools.
In the long term, competitors are positioned to supplant Anthropic, even if the company is victorious in court. As officials were labeling the company a pariah, its chief rival, OpenAI, was finalizing an agreement to work on the Pentagon’s secret networks. OpenAI said it had been able to secure protections related to surveillance and autonomous weapons, while agreeing to the “all lawful uses” standard that officials wanted.
Aaron Schaffer contributed to this report.
The post Anthropic sues Pentagon over national security risk label appeared first on Washington Post.