DNYUZ
Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

March 18, 2026

The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company’s lawsuit against the government will fail.

“The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion,” US Department of Justice attorneys wrote.

The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company’s technologies from being used inside the department. If the designation holds, Anthropic could lose billions of dollars in expected revenue this year.

Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic’s request.

Justice Department attorneys, writing for the Department of Defense and other agencies in the Tuesday filing, described Anthropic’s concerns about potentially losing business as “legally insufficient to constitute irreparable injury” and called on Lin to deny the company a reprieve.

The attorneys also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retained access” to government technology systems. “No one has purported to restrict Anthropic’s expressive activity,” they wrote.

The government argues that Anthropic’s push to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to “reasonably” determine that “Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.”

The Department of Defense and Anthropic have been fighting over potential restrictions on the company’s Claude AI models. Anthropic maintains that its models shouldn’t be used to facilitate broad surveillance of Americans and that they are not currently reliable enough to power fully autonomous weapons.

Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. But courts often favor national security arguments from the government, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted.

“In particular, DoW became concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” Tuesday’s filing states. “AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic—in its discretion—feels that its corporate ‘red lines’ are being crossed.”

The Defense Department and other federal agencies are working to replace Anthropic’s AI tools with products from competing tech companies in the next few months. One of the military’s top uses of Claude is through Palantir data analysis software, people familiar with the matter have told WIRED.

In Tuesday’s filing, the lawyers argued that the Pentagon “cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use” on the department’s “classified systems and high-intensity combat operations are underway.” The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.

A number of companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders, have filed court briefs in support of Anthropic. None have been filed in support of the government.

Anthropic has until Friday to file a response to the government’s arguments.

The post Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems appeared first on Wired.

DNYUZ © 2026
