Defense Secretary Pete Hegseth has threatened Anthropic, warning that the government could invoke powers allowing it to force the artificial intelligence firm to share its novel technology in the name of national security if the company does not agree by Friday to terms favorable to the military, people familiar with the ongoing discussions said.
But Anthropic is prepared to walk away from negotiations — and its $200 million contract with the Defense Department — if concerns over the use of its technology for autonomous weapons or mass surveillance are not addressed, according to the people familiar with the discussions.
Anthropic is the first firm to integrate its technology into the Pentagon’s classified networks, and the firm has aggressively positioned itself to be a key player in national security. In a meeting with Hegseth on Tuesday, Dario Amodei, the company’s co-founder and chief executive, held firm that its AI model Claude should not be used to power autonomous weapons or conduct mass surveillance of Americans, said the people familiar with the discussions.
Tensions have risen between the firm and the Pentagon in recent weeks over how Anthropic’s AI was applied during the raid to capture Venezuelan President Nicolás Maduro. Defense officials responded swiftly, suggesting that if Anthropic did not allow the Pentagon to apply the AI as it wants to, within lawful limits, the company would be considered a supply-chain risk, costing it and any firm subcontracting its AI future business opportunities.
At the Tuesday meeting, Hegseth went further, saying Anthropic could be subject to the Defense Production Act — which enables the government to gain control of firms and their products — in the name of national security. The DPA was used during the COVID-19 pandemic to address medical supply shortfalls.
Overall, the meeting was serious but respectful, according to one of the people familiar with the discussions, with Hegseth praising Anthropic’s technology. The secretary said he wanted to continue to work with the company, but threatened to cancel its contract by the end of the week, said the person, who spoke on the condition of anonymity to describe a private meeting.
Amodei argued that neither of the limits he is seeking would impinge on the department’s work, the person said.
“During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service,” Anthropic said in a statement. “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
The meeting comes after escalating criticism of Anthropic by Pentagon officials. Hegseth and his team have insisted in recent weeks that the military have free rein to use AI tools as it sees fit, limited only by the law rather than guardrails set by the companies that make the systems. Defense officials say other leading companies have agreed, at least for unclassified work, casting Anthropic as a holdout.
Anthropic and Amodei are trying to walk a fine line, positioning themselves as more than willing to work with the Pentagon and describing AI as a vital technology to allow democratic countries to defend themselves.
But shortly after Hegseth set forth his views in an internal directive, Amodei published an essay warning of the dangers of fully autonomous weapons and mass surveillance tools. He wrote that while democratic countries could be expected to have limits on the use of such systems, “some of these safeguards are already gradually eroding in some democracies.”
The Pentagon has sped up its efforts to integrate AI into its weapons systems, driven by competition with China — which is racing to acquire AI technology for its military — and new dangers such as super-fast hypersonic missiles that are difficult for humans to react to. The conflicts in Ukraine and Gaza have provided a preview of the role AI could play in a future war, with the widespread use of cheap semiautonomous drones and tools that analyze vast amounts of information to identify targets to strike.
The U.S. Air Force has tested an AI-piloted fighter jet in recent years, finding that it can beat elite pilots by cutting tiny fractions of a second off turns and maneuvers.
Fully autonomous weapons are probably still several years away, experts say. The Defense Department’s current policy requires any such system to undergo multiple levels of review and to include safeguards ensuring that humans retain decision-making authority over the use of force. The policy will be reviewed as needed, officials have said.
Modern military operations are complex, involving thousands of people making life-and-death decisions quickly, said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology. Not surprisingly, those people make mistakes, Probasco said, and AI tools could manage campaigns in all sorts of ways short of pulling the trigger.
“Everyone is still trying to think what is the best way to use these systems to improve our decisions,” said Probasco, a former Navy officer. “Nobody’s really got the definitive answer yet.”
The post Hegseth threatens to force AI firm to share tech, escalating Anthropic standoff appeared first on Washington Post.