DNYUZ

In its fight with the Pentagon, Anthropic confronts one of the biggest crises of its five-year existence

February 26, 2026

AI company Anthropic is facing perhaps the biggest crisis in its five-year existence as it stares down a Friday deadline to remove restrictions on how the U.S. Department of War can use its technology or face the possibility that the Pentagon will take action that could cripple its business.

Pete Hegseth, the U.S. secretary of war, has demanded that Anthropic remove restrictions currently stipulated in its contracts that prohibit its AI models from being used for mass surveillance or from being incorporated into lethal autonomous weapons, which can make decisions to attack without human intervention. Instead, Hegseth wants Anthropic to stipulate that its technology can be used for “any lawful purpose” that the Department of War wishes to pursue. If the company does not comply by Friday, Hegseth has threatened not only to cancel Anthropic’s existing $200 million contract with his department, but to have the company labeled a “supply chain risk,” meaning that no company doing business with the Department of War would be allowed to use Anthropic’s models.

That could eviscerate Anthropic’s growth just as the company, currently valued at $380 billion, has been seeing significant commercial traction and is contemplating an initial public offering as soon as next year. A Tuesday meeting between Hegseth and Anthropic CEO Dario Amodei in Washington, D.C., failed to resolve the conflict and ended with Hegseth reiterating his ultimatum.

The dispute comes against a backdrop of sometimes overt hostility towards Anthropic from other Trump administration officials. AI czar David Sacks in particular has publicly attacked the company on social media for representing “woke AI” and the “doomer industrial complex,” and has accused it of engaging in a “sophisticated regulatory capture strategy based on fearmongering.” His argument is essentially that Anthropic executives disingenuously warn of extreme risks from AI systems in order to justify regulations with which only Anthropic and a few other AI companies can easily comply. Amodei has called such views “inaccurate” and insisted that the company shares many policy goals with the Trump administration, including wanting to see the U.S. remain at the forefront of AI development. Nonetheless, Sacks and others within the administration may be hoping Hegseth makes good on his threats to blacklist Anthropic from the national security supply chain.

Other AI companies, such as OpenAI and Google, have apparently not imposed restrictions on how the U.S. military uses their tech.

Principles versus pragmatism

Working with the military has been controversial among some technology workers. In 2018, Google faced a vocal staff rebellion over its decision to help the Pentagon with “Project Maven,” an effort to use AI to analyze aerial surveillance imagery. The employee revolt forced Google to pull out of a bid to renew its contract to work on the project. But in the years since, the internet giant has quietly renewed its ties with the defense establishment, and in December, the Department of War announced it would deploy Google’s Gemini AI models for a number of use cases.

Owen Daniels, associate director of analysis at the Center for Security and Emerging Technology (CSET) at Georgetown University, told the Associated Press that “Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications. So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”

But principles may be an unusually powerful motivator for Anthropic employees. The company was founded by a group of researchers who broke away from OpenAI in part because they were concerned the lab was allowing commercial pressures to divert it from its original mission of ensuring powerful AI is developed for humanity’s benefit. More recently, Anthropic staked out principled positions against incorporating advertising into its Claude products and against developing chatbots specifically designed to be romantic or erotic companions. Given the company’s culture, some outside commentators have speculated that at least some Anthropic staff will resign if the company gives in to Hegseth’s demands and drops the limitations currently built into its government contracts.

Hegseth has also said there is another option available to the Pentagon if Anthropic does not comply with its request voluntarily: using the Defense Production Act of 1950 to compel Anthropic to offer the military a version of its Claude model without any restrictions in place.

The DPA, which was originally designed to allow the government to take charge of civilian manufacturing in the event of war, was invoked during the Covid-19 pandemic to compel companies to produce protective equipment and vaccines. Since then, it has been used numerous times, mostly by the Biden administration, even in the absence of a clear national emergency. In 2023, for instance, the Biden White House invoked the DPA to force tech companies to share information about the safety testing of their advanced AI models with the government.

Katie Sweeten, who served until September 2025 as the Department of Justice’s liaison to the Department of Defense and is now a partner at the law firm Scale, told CNN that Hegseth’s position didn’t make sense from a policy perspective. “I would assume we don’t want to utilize the technology that is the supply chain risk, right? So I don’t know how you square that,” she said.

Dean Ball, who served as an AI policy advisor to the Trump administration, helping to draft its AI Action Plan, and who is now a senior fellow at the Foundation for American Innovation, also called the Pentagon’s position “incoherent” in a post on X. “How can one policy option be ‘supply chain risk’ (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?” he said. Ball told TechCrunch that imposing the supply chain risk label would send a terrible message to any company doing business with the government. “It would basically be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business,’” he said.

Some legal commentators noted that both sides of the dispute have legitimate arguments. “We wouldn’t want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly,” Alan Rozenshtein, an associate professor of law at the University of Minnesota and a fellow at Brookings, said in a column posted on the site Lawfare. But Rozenshtein also argued that Congress, not the Pentagon, should set the rules for how the U.S. military deploys AI. “The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints,” he wrote.

As of midweek, Anthropic showed no signs of backing down from its position.

Claude’s future at stake

Helen Toner, the interim executive director of Georgetown’s CSET and a former OpenAI board member, posted on X that the Pentagon is likely underestimating the extent to which Anthropic may be reluctant to abandon its position because—as weird as this sounds—doing so might set a bad example for future versions of Claude. Anthropic researchers have increasingly voiced concerns about what each successive version of Claude learns about its own character based on training data that now includes news articles and social media commentary about Claude itself.

But the company has compromised before when its back has been against the wall. In June 2025, Anthropic faced a potentially existential threat when a federal judge ruled that its use of libraries of pirated books to train its Claude AI models was likely a violation of copyright law. This left the company facing tens of billions of dollars in potential liabilities if it took the case to a full trial and lost. Instead of continuing to fight the case, Anthropic announced a $1.5 billion settlement with the copyright holders.

And just this past week, Anthropic demonstrated again, in a different context, that it is sometimes willing to put pragmatism and commercial imperatives ahead of high-minded principles. The company updated its Responsible Scaling Policy (RSP), dropping a previous commitment to never train an AI model unless it could guarantee it had adequate safety controls in place. The new RSP instead commits Anthropic to matching or surpassing the safety efforts of its competitors. It also says Anthropic will delay developing models if the company believes it has a clear lead over the competition and thinks the model it is training presents a significant catastrophic risk. Jared Kaplan, Anthropic’s head of research, told Time that “unilateral commitments” no longer made sense if “competitors are blazing ahead.” Whether Anthropic will make a similar concession to commercial pressures in its fight with the Department of War remains to be seen.

The post In its fight with the Pentagon, Anthropic confronts one of the biggest crises of its five-year existence appeared first on Fortune.

DNYUZ © 2026
