The Trump administration waged its latest war of choice this week when it tried to coerce the tech company Anthropic into giving the military a blank check in how it uses the company’s artificial intelligence technology.
The confrontation sharply escalated on Tuesday when Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic’s chief executive, Dario Amodei: Lift all safeguards on its technology by 5:01 p.m. Friday or lose the company’s $200 million contract and any future business with the military. It culminated about an hour before that deadline when President Trump publicly declared he was “directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology.”
In typical florid fashion, the president went on to call the company “WOKE” and full of “Leftwing nut jobs” who meant to do the country harm. It’s a striking turn for Anthropic, which in late 2024 became the first major A.I. lab to work on classified U.S. military networks. Although military contracts made up a small percentage of its business, the company’s A.I. model was the most widely used across the American national security complex.
Anthropic’s technology enables troops and intelligence agents worldwide to synthesize and cross-reference oceans of classified information in a split second. In January it was reportedly used during the raid to capture Venezuela’s leader, Nicolás Maduro.
But the company has always had two red lines: The government can’t use its product in the mass surveillance of American citizens or install it in killer robots that operate outside human control. These safeguards have long been at the core of Anthropic’s safety-conscious business model and don’t differ much from other A.I. labs trying to do the tricky job of advancing their cutting-edge technology while ensuring they don’t compromise public safety.
When viewed this way, Anthropic’s limits are sensible and legal. Federal law almost always precludes the U.S. military from spying on American citizens, and a Defense Department directive imposes strict rules on lethal autonomous weapons that operate without human oversight. But Mr. Hegseth couldn’t live with those terms, and Mr. Amodei refused to give in to the Pentagon’s threats, saying in a statement late on Thursday that his company was willing to suffer the consequences.
The apparent end of this partnership doesn’t make America any safer and, instead, unnecessarily sets back the nation’s ability to defend itself. It could take the Defense Department six months to remove Anthropic’s A.I. tools from internal computer systems, and another A.I. model will have to fill the vacuum. The Pentagon hasn’t yet identified a suitable backup.
It could also have a lasting impact on the military’s already fraught dealings with Silicon Valley. Since the advent of personal computers, the Pentagon’s relationship with technology companies has been hampered by mutual suspicion. Many U.S. troops use more modern technology in their daily lives than they do while in uniform. Anthropic’s A.I. technology is a rare instance of a potentially game-changing national security capability that was developed by the private sector, not the government — and a partnership that was, until recently, working. That matters in a future in which software will play a more critical role in warfare than hardware.
The Defense Department is also hurting its chances of persuading other innovative start-ups to share their budding technology. The prospect of running afoul of the Pentagon became even scarier after Mr. Hegseth announced plans to designate the company a threat to the supply chain for not responding favorably to his ultimatum. The unprecedented move would mean that Anthropic, along with any company that uses its technology, would be prohibited from future Pentagon contracts.
Private industry shouldn’t get in the habit of dictating policy to the federal government, but today’s A.I. presents a distinctive problem. While A.I. models have come a long way, the technology cannot yet be relied on for modern war fighting. Mr. Amodei knows this better than anyone, which is why Thursday’s statement said the company “cannot in good conscience accede” to the government’s request.
The company should be applauded for doing something most military contractors fail to do when presented with lucrative, multiyear contracts: admit their product doesn’t yet meet suitable standards. It’s important to understand that Anthropic was not saying it would never allow its technology to be outfitted on autonomous weapon systems, such as drones. It was saying the technology wasn’t ready yet.
This important distinction didn’t stop Emil Michael, the Pentagon’s chief technology officer, from making half a dozen social media posts on X on Thursday that ridiculed Anthropic and labeled Mr. Amodei a “liar” with a “God-complex.” Mr. Michael later insisted the department would use the technology only for “lawful purposes.”
“At some level, you have to trust your military to do the right thing,” Mr. Michael told CBS News. “But we do have to be prepared for the future. We do have to be prepared for what China is doing.” He added, “We’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
While the standoff has been largely met with public silence from other A.I. labs, many of them also have established internal red lines regarding their technology that are similar to Anthropic’s position. Roughly 75 OpenAI employees and more than 450 Google employees published an open letter this week aligning with Anthropic and asking company leadership to “refuse the Department of War’s current demands.” On Friday, The Wall Street Journal reported that OpenAI’s chief executive, Sam Altman, had entered the fray to help try to “de-escalate” the situation, but that was before Mr. Trump’s outburst.
Before Mr. Trump’s announcement, Elon Musk wrote on X that “Anthropic hates Western Civilization.” Notably, Mr. Musk’s xAI, Anthropic’s competitor, has agreed to let its A.I. model be used on classified networks seemingly under the Pentagon’s conditions.
The military needs the very best A.I. to streamline its operations, and it should find ways to work with these companies, rather than erect barriers. Tech companies have long been outwardly hostile toward the Pentagon and its goals and missions. In 2018, thousands of Google workers signed a petition demanding that the company and its contractors put in place a policy against building “warfare technology,” after Google contributed to an experimental drone targeting program.
For the first time in decades, that’s starting to change: Venture capital poured some $50 billion into military tech last year, nearly double its investments in 2024. In June the Army recruited four senior tech executives, from companies like Meta and OpenAI, to become officers in a newly established reserve innovation unit called Detachment 201. The secretary of the Army, Dan Driscoll, who worked in venture capital, has said, “I can say unequivocally that the Silicon Valley approach is absolutely ideal for the Army.”
Mr. Trump’s habit of infusing politics into business dealings makes it unclear whether that sentiment can hold true in this administration. Although Mr. Amodei is a prominent Democratic donor who has been critical of the president, there’s no indication his personal politics came into play on this matter.
The Pentagon and the A.I. companies aren’t the only players that can help resolve this fight. Congress can establish guardrails around this emerging technology by outlawing its use in situations where civilians are present, by making human supervision mandatory and by requiring kill switches for any system reliant on A.I. The international community understands this pressing need. The United Nations secretary general and the International Committee of the Red Cross have called for a new treaty to be concluded this year on autonomous weapon systems.
The future of A.I. has already arrived, and humans are clearly having trouble keeping up. Finding sensible common ground between private industry and government is in everyone’s interest.
Mr. Hennigan writes about national security for Opinion.