DNYUZ
Anthropic sues Pentagon over being labeled a national security risk

March 9, 2026

Anthropic sued the Trump administration Monday, calling on a federal judge in San Francisco to strike down a government order forbidding military contractors from partnering with the artificial intelligence company on the grounds that it poses a risk to national security.

“These actions are unprecedented and unlawful,” the company’s lawyers wrote. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”

The Defense Department last week formally tagged the AI company as a supply-chain risk, the kind of label usually reserved for Chinese and Russian firms suspected of helping foreign spies. The move followed increasingly bitter negotiations over how the company’s technology might be used in warfare, with Anthropic seeking guarantees that the Claude model would not be used for mass domestic surveillance or to power fully autonomous weapons. The unprecedented step by the Pentagon came even as Anthropic’s tools were playing a central role in President Donald Trump’s bombing campaign in Iran.

The Pentagon declined to comment on the lawsuit.

In legal filings, Anthropic said the administration had violated its First Amendment rights to speak about the limits of AI’s military applications and had overstepped its legal authority.

“Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible,” the company’s lawyers wrote in a complaint filed in U.S. District Court for the Northern District of California.

“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle.”

The battle has reverberated through Silicon Valley, raising questions about what limits AI developers should be able to impose on their technology when they do business with the government. Administration officials and the Defense Department have demanded the freedom to use AI systems for any lawful purpose, arguing that the government must have the final say.

After Anthropic CEO Dario Amodei refused to agree, Trump said last month that he was ordering federal agencies to stop using Claude. Defense Secretary Pete Hegseth went further, saying he was imposing a far-reaching ban on the company doing any work with military contractors.

But behind the scenes, the two sides continued to talk last week. Technology and defense figures lobbied both camps to de-escalate, warning of the ripple effects of branding a leading American company a security risk in an industry where AI labs, industry giants and hardware makers are intertwined with one another and with the Pentagon.

The discussions finally came to an end Thursday, according to a defense official — a day after tech news site the Information published a caustic internal staff memo in which Amodei said the administration was opposed to the company “because we haven’t given dictator-style praise to Trump.” The leak of the note contributed to the ultimate breakdown of the talks, according to the defense official and a second person familiar with the discussions.

“It blew up negotiations,” said the second person, speaking on the condition of anonymity to describe private talks.

For now, though, the military is continuing to rely on Claude to help carry out the assault on Iran. The AI tool is embedded in the military’s Maven Smart System, which helps commanders analyze intelligence and identify targets to strike. In the lead-up to the attack, the system suggested hundreds of targets, with precise coordinates, and ranked them in order of importance, people familiar with the system previously told The Washington Post. It also speeds up planning dramatically and helps evaluate the aftermath of strikes, one of the people said.

Defense officials have said they are aware of their dependence on the system, and Trump said he was providing a six-month phaseout of Anthropic’s tools.

In the long term, Anthropic’s competitors are positioned to supplant it, even if the company is victorious in court. As officials were labeling the company a pariah, its chief rival, OpenAI, was finalizing an agreement to work on the Pentagon’s secret networks. OpenAI said it had been able to secure protections related to surveillance and autonomous weapons, while agreeing to the “all lawful uses” standard that officials wanted.

Elizabeth Dwoskin and Aaron Schaffer contributed to this report.

The post Anthropic sues Pentagon over being labeled a national security risk appeared first on Washington Post.


DNYUZ © 2026