At 5:01 p.m. Friday, the Pentagon may be at war. I'm not referring to Iran, nor to any other shooting war, but to a potentially existential conflict between two parties nonetheless: The artificial intelligence company Anthropic and the Department of Defense are fighting over the contractual terms for the department's continued use of Anthropic's A.I. model.
Anthropic is insisting that the government agree to specific restrictions that would prevent the use of its model to conduct widespread surveillance of Americans or to control autonomous weapons like drones without a human in what is called the "kill chain." The company reiterated on Thursday that it has no intention of changing its position. The government says that the only requirement its contractors can insist on is that their products be used lawfully.
There is a lot at stake, and neither side is offering the correct solution. A.I. is poised to be the most transformative technology of our generation, perhaps of any generation, and we need to ensure the government and the private enterprises that develop these technologies have a constructive and mutually beneficial relationship consistent with American values. That can happen only if we use the mechanisms our country’s founders put in place to define the rules of the game, level the playing field and balance interests across the government and among individuals and businesses: through regulatory legislation passed by Congress.
The tool Anthropic is providing to the government is enormously powerful; like any powerful tool, it can be used for good or ill. Anthropic is rightly concerned that its tool could be used in ways that are unsafe or malicious. The company doesn't want to see its A.I. model used without human control, which could result in the killing of noncombatants or friendly troops by automated weapons, nor deployed to spy broadly on Americans in ways that could violate dearly held values like privacy and freedom from illegal search and seizure or could suppress political dissent. Most Americans would probably agree.
On its side, the Department of Defense will not accept constraints on the use of products it has purchased. The government has a point. America’s national security team needs to have the freedom to use the products it buys within the law and not be beholden to preferences from the sellers.
The government is trying to force Anthropic to capitulate with two threats: invoking the Defense Production Act to force Anthropic to provide its product with no additional restrictions, and designating Anthropic as a “supply chain risk” contractor. The first of these is unusual but consistent with the law. Claude, Anthropic’s large language model, is the only A.I. product approved for use on classified Pentagon networks. It is not unreasonable for the government to assert that it must have access to Claude for national security reasons until a comparable product from a competitor becomes available (something that appears to be fairly imminent).
The government goes much too far, however, with its second threat. Declaring Anthropic a “supply chain risk” could put all its government contracts at risk and would be an egregious abuse of power. It would also imperil the company, because it could indirectly force all contractors that do business with the Defense Department to themselves stop using Anthropic’s products. In other words, it could effectively prohibit any American company working with the Pentagon from using Anthropic.
These threats are in conflict with each other. With the first, the government is saying that Anthropic is essential for national security, so its model has to be available to use without restrictions. The second is the government saying Anthropic is a national security risk that no American company can rely on. Both cannot be true. So what is happening is pure extortion.
But Anthropic is wrong in trying to use contractual terms to prevent the misuse of its products, or at least to deflect responsibility for that misuse. Under normal circumstances, a good-faith effort by Anthropic and the government could address these concerns without contractual limitations. In most administrations, it would be reasonable to assume that a product sold to the government will be used responsibly and safely.
That assumption, however, is far more questionable in an administration engaged in what I and many legal experts believe are extrajudicial executions of alleged drug traffickers and widespread surveillance of Americans, ostensibly to identify undocumented immigrants. Even so, if a company is unwilling to see its product used to support these and similar actions, it has the option of declining to sell to the government. The government cannot be expected to negotiate provisions like those Anthropic is demanding with every supplier. It would be a nightmare to administer and unenforceable.
If contract provisions are not an appropriate way to prevent government misuse of emerging A.I. technologies, then what is appropriate? Regulation by Congress. In a must-read essay, the Anthropic chief executive Dario Amodei outlines the risks he sees in these new and rapidly expanding technologies, including widespread surveillance and lethal autonomy. He also calls for government regulation that could effectively provide legally enforceable constraints.
I fully support that recommendation. We regulate most of the products we buy, from automobiles to airplanes to appliances. Existing and emerging A.I. models carry far greater risk, and a far wider scope of potential harm, than those products do. Congress needs to pass, as part of comprehensive A.I. regulation, restrictions on the most dangerous uses of these tools despite the Trump administration's strong resistance to such limits.
This sort of legislative action is imperative today. One year ago, I wrote that America had “a rogue president.” We still do. The Trump administration isn’t going to be constrained by contract terms any more than it is constrained by the law. Anthropic’s fears are legitimate. If we want to protect Americans from government misuse of A.I., we need our 250-year-old system of checks and balances to respond to this challenge as it has done so many times in the past. That may be the only way to resolve this question, and similar ones that will follow.
Frank Kendall was the secretary of the Air Force from 2021 to 2025. He is a senior fellow at the Center for American Progress and the author of “Lethal Autonomy: The Future of Warfare, Whether We Like It or Not.”
The post In His Dispute With Anthropic, Pete Hegseth Has a Point appeared first on New York Times.