Anthropic said late Thursday that it will not concede to the Pentagon’s terms for full access to its artificial intelligence tool Claude, saying it cannot loosen its restrictions against use in fully autonomous weapons or mass domestic surveillance.
The AI firm and the Defense Department have been at odds for weeks, after Anthropic reportedly raised questions about how Claude was used in the raid to capture Venezuelan President Nicolás Maduro. The relationship soured further as the two sides issued conflicting accounts of the terms of their disagreement.
The Pentagon said it has never contemplated autonomous weapons or mass surveillance as part of its use of Claude but has been unwilling to prohibit them in its contract with Anthropic, saying only that it will pursue lawful applications.
However, Defense Secretary Pete Hegseth said the Pentagon must be able to use the technology for the full range of warfighting — a broad remit that left too many questions for Anthropic to be comfortable with. The mutual frustration culminated with the Pentagon giving Anthropic a 5:01 p.m. Friday deadline to comply or risk being forced to provide full access to its AI using the Defense Production Act.
On Thursday, Anthropic CEO Dario Amodei said in a lengthy statement that the company was holding firm to its red lines — and hoped the Pentagon would reconsider.
“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now,” he said, citing specifically autonomous weapons use and mass surveillance.
“We cannot in good conscience accede to their request,” Amodei wrote.
The Pentagon did not immediately respond to a request for comment.