Claude, Anthropic’s artificial intelligence tool, is helping the military identify and prioritize targets in Iran even during a very public feud between the Defense Department and the company’s chief executive. America’s adversaries are the only winners in this fight.
The Pentagon informed Anthropic on March 4 that it had been designated a supply chain risk, essentially calling the company a danger to national security. Anthropic CEO Dario Amodei apologized for bashing the Pentagon in a conciliatory interview published Friday but reiterated plans to file a lawsuit challenging the label. The Defense Department’s undersecretary for research and engineering, Emil Michael, complained in separate interviews that Amodei was too difficult to negotiate with and wants too much sway over the potential uses of his product.
In a Feb. 27 order, President Donald Trump gave federal agencies six months to phase out their use of Anthropic. This leaves plenty of time to resolve these issues.
Designating Anthropic a supply chain risk, typically done to keep foreign adversaries away from mission-critical applications, is legally dubious. While courts would likely rule in the company’s favor, a prolonged lawsuit could constrain market penetration and access to capital for a great American startup.
The Pentagon has every right to decide that Anthropic’s contract demands make it unsuitable for military needs, and Anthropic has every right not to work with the government. That doesn’t mean either action is wise.
A company as innovative as Anthropic can surely be more flexible in designing guardrails to ensure that they don’t slow operational tempo but still comply with the law. Finding a way out would be patriotic but also in the firm’s self-interest. Just as America’s military might depends on the strength of its free enterprise system, the strength of the free enterprise system depends on American military power.
At the same time, the Pentagon does not have the right to destroy a business because contract negotiations aren’t going well. This is also as much a matter of self-interest as principle: Harming the company out of pique takes away from American troops what may be the best tool available.
The Pentagon has not gone as far as feared just yet. Contractors have been barred from using Anthropic only in the work they perform for the government, not altogether. The administration has also yet to invoke the Defense Production Act to compel Anthropic to turn over its model for government use.
OpenAI sought to capitalize on the military’s falling out with Anthropic by rushing to sign its own deal, but CEO Sam Altman spent last week trying to quell a rebellion from some of his staffers. He announced that he got the Pentagon to agree to revisions to his deal that Anthropic said it had not been able to get. (The Post has a content partnership with OpenAI.)
Altman is a bitter rival of Amodei but said publicly that Anthropic should not be designated as a supply chain risk. Even in a hypercompetitive industry, maintaining America’s lead in AI is in everyone’s interest. And that’s helped by a healthy mix of companies racing to constantly make better products for their customers, whether those are companies, citizens or governments.
The post Can the military and AI firms get along? appeared first on Washington Post.