At great risk to its business, Anthropic is taking a principled stand in the face of threats from the administration to commandeer its cutting-edge artificial intelligence technology.
Defense Secretary Pete Hegseth gave the company a deadline of 5:01 p.m. on Friday to either allow the military to freely use its Claude model or lose a $200 million government contract and be blacklisted as “a supply chain risk,” which would force defense contractors to drop it too. More troubling, Hegseth is threatening to invoke the Defense Production Act (DPA) to compel Anthropic to drop its guardrails.
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” Anthropic CEO Dario Amodei wrote in a blog post late Thursday. “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
If Hegseth follows through, that will test not just the legal limits of a law intended for wartime emergencies but the practical limits of the state’s ability to coerce companies to its will.
The blowup follows Anthropic’s concerns about the classified use of its product during the successful operation to capture Venezuelan President Nicolás Maduro. Hegseth wants Anthropic to modify its contract to allow “any lawful use” of the technology. Anthropic is willing to rewrite its current terms of use, but not to permit mass surveillance of Americans or use in weapons that operate without a person in the loop to make the final decision. The Pentagon denies that it has any plan to surveil Americans or take humans out of the kill chain.
The government seems to believe Anthropic shouldn’t get more say over how its product is deployed in battle than Lockheed does over the use of its fighter jets. Amodei believes his model poses such a potential risk to humanity if it becomes fully autonomous that he cannot let go of the reins. The clash raises fascinating philosophical questions about the future of war.
At its base, however, this is about economic freedom and the right of a private company to decide how, and with whom, it wants to do business. If Anthropic does not want to comply with government diktats, plenty of competitors are eager for the business. The headache for Hegseth is that Claude may be the best product on the market, at least for now.
The government has reasonable concerns about its ability to act quickly in a crisis. As an American company, Anthropic has a patriotic responsibility to work in good faith with the Pentagon to ensure its products won’t freeze up in the event of an enemy attack that requires a swift and overwhelming response. Amodei says the company has done so. At the same time, the bar should be extremely high for requiring a privately held company to do risky work it sees as immoral. That threshold has not been met.
Invoking the DPA to try to take control of a model would put the government into legally murky waters. Anthropic could turn this into a drawn-out lawsuit, creating uncertainty. And if the government wins, what then? A court can compel performance, but it cannot compel good performance.
Interestingly, Anthropic slightly relaxed its AI safety commitments the same day its CEO met with Hegseth. The company says that is unrelated to its fight with the Pentagon. Rather, Anthropic is trying to keep competitors at bay, saying it will no longer halt development for safety concerns if a rival has released an equal or superior model. The government should take heed: Americans benefit from having as many companies as possible vying for government business, not by making Uncle Sam a nightmare customer.
The post Pete Hegseth seeks a Pyrrhic victory against Anthropic appeared first on Washington Post.