The Defense Department has officially informed Anthropic that it has labeled the artificial intelligence company a “supply chain risk,” which could prevent it from doing business with the U.S. government.
In a statement posted online on Thursday, Anthropic’s chief executive, Dario Amodei, confirmed that the company had received a formal letter from the Pentagon. He vowed, as the company had previously said it would, to fight the designation in court.
“We do not believe this action is legally sound,” he wrote.
The Defense Department is using Anthropic’s technology as U.S. military forces engage in a widening war against Iran, two people familiar with the technology said on the condition of anonymity.
Anthropic’s technology analyzes data and imagery collected by the United States, helping the military decide where to deploy its forces or launch strikes.
In recent weeks, Anthropic has tussled with the Pentagon over how its A.I. could be used on classified systems. The Defense Department demanded that it be able to use Anthropic’s A.I. system for all lawful purposes, or it would cut the company off from government business.
Anthropic said it needed terms that would ensure that its A.I. technology would not be used for domestic surveillance of Americans or with autonomous lethal weapons. But the Pentagon insisted that a private company like Anthropic could not decide how its tools would be used in national security work.
The two sides failed to agree on terms by the Pentagon’s Friday afternoon deadline. After the deadline passed, Defense Secretary Pete Hegseth declared Anthropic a “supply-chain risk to national security” in a social media post. The designation is typically applied only to firms with ties to the government of China.
Mr. Hegseth also said that “no contractor, supplier or partner that does business with the United States military may conduct any commercial activity” with the company.
Anthropic had said it would not sue the Pentagon until it received a formal letter notifying it that it had been labeled a supply chain risk.
In his statement, Dr. Amodei said Anthropic continued to discuss the matter with the Pentagon, even as the Defense Department publicly said it would remove the company’s technology within six months.
“Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so,” he wrote.
Although Anthropic is the only company that provides the Pentagon with artificial intelligence technologies for classified systems, other companies are angling to replace it. OpenAI and Elon Musk’s xAI have signed agreements with the Defense Department to provide technology on classified systems.
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
OpenAI announced an agreement with the Pentagon on Friday, hours after President Trump ordered federal agencies to stop using Anthropic’s technology within six months.
Unlike Anthropic, OpenAI agreed to let the Pentagon use its A.I. systems for any “lawful purpose.” The company said it had also negotiated terms that allowed it to uphold its so-called safety principles by installing specific technical guardrails on its technology.
But after a weekend of criticism of that agreement, OpenAI said on Monday that it had amended its deal to include additional protections to prevent its technology from being used in mass surveillance of Americans, though the critics argued that it still allowed the Pentagon some loopholes.
Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.