Suppose that you had to die in a terrible artificial-intelligence-related cataclysm. Would you feel worse knowing that the path to destruction was smoothed by the hubris of Silicon Valley tech lords pursuing dreams of utopia and immortality — or by the folly of Pentagon officials who give the A.I. a fateful dose of autonomy and power in the hopes of outcompeting the Russians or Chinese?
We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.
But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.
Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind.
Since the two uses that Anthropic’s current contract explicitly rules out are the employment of A.I. for mass surveillance and its use for fully autonomous weapons (meaning no humans in the to-kill-or-not-to-kill decision loop), it’s easy to get Skynet vibes from the Pentagon’s demands. As Matt Yglesias noted, all the weird and complicated scenarios spun out by A.I. doomers get a lot simpler if our government decides to start building autonomous killer robots.
That’s not what the Pentagon says it intends to do. Its professed concern is that it can’t embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees, and it risks a debacle when events don’t cooperate with corporate ideals. (The example the agency has offered is a hypersonic missile attack on the United States where an A.I. company refuses to assist in some crucial response because it falls afoul of the no-machine-autonomy rule.)
To the extent that this is a legitimate concern, however, it does not justify the administration’s plan (as of this writing, at least) to effectively make war against Anthropic, not just by ending the military’s relationship with the company but also by designating it a “supply chain risk,” which would cut off its relationships with any company that does business with the U.S. government.
Up until now, the Trump administration has been hyping the benefits of a decentralized, free-market approach to artificial intelligence. The attempt to break Anthropic implies the end of that freedom and a shift toward a more centralized and militarized approach. Indeed, to quote Dean Ball, one of the original architects of the administration’s A.I. policy, it arguably makes the U.S. government “the most aggressive regulator of artificial intelligence in the world.”
Which is an excellent reason for the entire A.I. industry to stand with Anthropic and resist. And to the extent that you’re most afraid of a Skynet scenario where military control drives unwise A.I. acceleration, you should absolutely be on Anthropic’s side as well.
But is that the scenario we should fear the most? Right now, if you listen to the head of Anthropic, Dario Amodei — for instance, in the interview I conducted with him two weeks ago — he sounds much more attuned than Pete Hegseth to the dangers of militarized or rogue A.I. (Hegseth is welcome to prove me wrong by coming on my podcast.)
Over the long run, though, one can imagine Pentagon officials offering some advantages over the typical A.I. mogul when it comes to safety and control. First, they tend to be focused more on concrete strategic objectives than on machine gods and the Singularity. Second, they are constrained from certain gambles by bureaucratic caution and the chain of command. Third, they answer to the public, through elections and civilian control, in a way that C.E.O.s do not.
Certainly to the extent that A.I. becomes the power that many moguls believe it will become — a civilization-altering power, more complex than nuclear weaponry but just as potentially destructive — it seems unimaginable that it can just rest comfortably in the hands of private industry while the American Republic goes on about its business. The possibility of military control and nationalization will be on the table for as long as we’re working out just what this technology might do.
So what Hegseth and the Trump administration are doing, in a sense, is starting this inevitable conflict early, and bringing the essential political question — who actually controls A.I.? — to the surface of the debate.
But an impulse toward mastery is not a plan for exercising it. And beyond its refusal to accept corporate guardrails, I don’t see evidence that the administration has thought through how A.I. should be governed, or how the war it’s launched against Anthropic will yield either greater power or greater safety in the end.