President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate intelligence analysis and defense work.
Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”
Calling the company “Leftwing nut jobs,” he said it had made a mistake trying to strong-arm the Pentagon. For days, Anthropic and the Pentagon have been locked in an escalating battle over how cutting-edge artificial intelligence technology will be used, and how it can aid military operations.
Still, Mr. Trump announced a “Six Month phase out” for the Pentagon and some other agencies, which could allow for more extended negotiations between Anthropic and the Defense Department.
Mr. Trump’s statement came as the Pentagon and Anthropic were continuing to negotiate a compromise despite an escalating war of words. While some current and former American officials had expressed hope of some sort of deal before the Pentagon’s 5:01 p.m. deadline on Friday, Mr. Trump’s comments will undoubtedly complicate matters.
Mr. Trump’s post took Anthropic officials by surprise, according to people briefed on the discussions.
Democratic lawmakers quickly rallied to Anthropic’s side. Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said Mr. Trump and Pete Hegseth, the secretary of defense, were trying to intimidate a leading American company, putting defense readiness at risk.
“The president’s directive to halt the use of a leading American A.I. company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” Mr. Warner said.
Defense Department officials were already criticizing Anthropic’s leader in their own social media posts after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act.
On Thursday evening, Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who earlier that day released a statement about why the company would not agree to the Defense Department’s latest terms.
“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
On the surface, the battle between the Pentagon and Anthropic is a contract dispute over technical details of how the artificial intelligence model works and how the military may use it. But as Mr. Trump’s comments showed, it has also ballooned into a deeply political fight.
The Pentagon wants all its contractors to adhere to a single standard — that the military can use what it buys however it wants, as long as it complies with the law. But Pentagon officials have also been happy to beat up on tech companies, particularly ones the Trump administration has branded as “woke.”
For Anthropic, a firm that prioritizes both national security and technological safety, the political stakes are high. Supporters cheered Mr. Amodei’s assertion that his company would not bend or allow its model to be used for mass surveillance of Americans or to command pilotless drones.
The company has said it is willing to continue negotiating but will not back down from its red lines.
Employees at the company have cheered their chief executive’s firm stance. And in a rare moment of unity across Silicon Valley A.I. companies, employees at two of Anthropic’s competitors, OpenAI and Google, signed letters backing Anthropic’s position.
One letter published Thursday was signed by nearly 50 employees at OpenAI and 175 at Google. It criticized the Pentagon’s negotiating tactics and called on its leaders to “put aside their differences and stand together to continue to refuse the Department of War’s current demands.”
“They’re trying to divide each company with fear that the other will give in,” the letter said.
In its proposed compromise, the Pentagon said on Thursday that it had no interest in using Anthropic’s model that works on classified systems for either mass surveillance or fully autonomous weaponry. But in rejecting that offer, Anthropic said the Pentagon’s assertion that it would not use the model, called Claude, for those purposes was undercut by the legal language in the contract.
“In a narrow set of cases, we believe A.I. can undermine, rather than defend, democratic values,” Mr. Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
Mr. Trump’s post appears to render moot the Friday deadline set by the Pentagon. But it is unclear whether it will derail the talks between the company and military officials. And the Pentagon could still take action on Friday.
Former government officials and people familiar with the negotiations had said that any action by the Pentagon to label the company a supply chain risk or to force it to comply with the Defense Production Act would likely prompt legal action by Anthropic. Mr. Trump’s order could do the same.
While many of the uses of artificial intelligence to assist military operations on the ground are still in a developmental stage, the models are actively used for intelligence analysis. Forcing Claude off government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts. It could also hamper C.I.A. analysts searching for patterns in intelligence reports.
Former officials have said C.I.A. officials are anxious to find a way to continue to use Claude, which has sped up their work and deepened their analysis. But before Mr. Trump’s comments, officials had warned that any order by the president could force the agency to find other solutions.
The Pentagon is ready to move forward with Grok, produced by Elon Musk’s xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching A.I. software would take time and almost certainly cause disruption.
Julian E. Barnes covers the U.S. intelligence agencies and international security matters for The Times. He has written about security issues for more than two decades.