A standoff between the Pentagon and the artificial intelligence company Anthropic appeared to be deepening Friday as the two sides hurtled toward a 5:01 p.m. deadline that military officials gave the firm: allow them unrestricted access to its most advanced model or face consequences.
Defense Department officials criticized Anthropic’s leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act.
Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department’s latest terms.
“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote late Thursday. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
On the surface, the battle between the Pentagon and Anthropic is a contract dispute over technical details of how the artificial intelligence model works and how the military may use it. But it has also ballooned into a deeply political fight, involving questions of the military’s ability to employ cutting-edge technology the way it sees fit and what A.I. can or should be used for.
Officials from the State Department took to social media to reinforce the Pentagon’s case and chastise Anthropic, while Democratic senators backed the company.
Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, posted a video on social media on Thursday in which he said companies need to make some concessions with the government, but indicated he thought Anthropic’s concerns about surveillance and autonomous drones held merit. Mr. Warner argued that Anthropic was being threatened by Pete Hegseth, the defense secretary, for prioritizing safety.
“He is threatening them, literally by tomorrow, that if they don’t give up all controls on safety and other things that anyone who does business with them would be banned,” Mr. Warner said.
The Pentagon wants all its contractors to adhere to a single standard — that the military can use what it buys however it wants, as long as it complies with the law. But Pentagon officials have also been happy to beat up on tech companies, particularly ones the Trump administration has branded as “woke.”
For Anthropic, a firm that prioritizes both national security and technological safety, the political stakes are high. Supporters cheered Mr. Amodei’s assertion that his company would not bend or allow its model to be used for mass surveillance of Americans or to command pilotless drones.
The company has said it is willing to continue negotiating but will not back down from its red lines.
Employees at the company have cheered Mr. Amodei’s firm stance. And in a rare moment of unity across Silicon Valley, employees at two of Anthropic’s competitors, OpenAI and Google, signed letters backing the position staked out by Anthropic.
One letter published Thursday was signed by nearly 50 employees at OpenAI and 175 at Google. It criticized the Pentagon’s negotiating tactics and called on its leaders to “put aside their differences and stand together to continue to refuse the Department of War’s current demands.”
“They’re trying to divide each company with fear that the other will give in,” the letter said.
The Pentagon said on Thursday that it had no interest in using Claude for Government, Anthropic’s model that works on classified systems, for domestic surveillance or to command autonomous drones. But Mr. Amodei said that assertion was undercut by the legal language in their contract.
“In a narrow set of cases, we believe A.I. can undermine, rather than defend, democratic values,” he wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
It is unclear what exactly will happen after 5:01 p.m. Friday. Any action by the Pentagon to label the company a supply chain risk or to force it to comply with the Defense Production Act would prompt legal action by Anthropic.
Labeling the company a supply chain threat would block it from doing business with the government. But that, in turn, could have far-reaching effects for the Pentagon and intelligence agencies, because Anthropic’s Claude has been the primary A.I. program used in classified systems.
While many of the uses of artificial intelligence to assist military operations on the ground are still in a developmental stage, the models are actively used for intelligence analysis. Forcing Claude off government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts. It could also hamper C.I.A. analysts searching for patterns in intelligence reports.
The Pentagon is ready to move forward with Grok, produced by Elon Musk’s xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching A.I. software would take time and almost certainly cause disruption.
Julian E. Barnes covers the U.S. intelligence agencies and international security matters for The Times. He has written about security issues for more than two decades.
The post Pentagon Attacks Anthropic Chief as Deadline Looms in Standoff appeared first on New York Times.