After a deadly raid, an AI power struggle erupts at the Pentagon

February 22, 2026

One of the nation’s leading artificial intelligence firms is negotiating whether it can continue to work with the military, according to people familiar with the discussions, after Pentagon officials called their once-close relationship into question in the wake of January’s raid to capture Venezuelan leader Nicolás Maduro.

Anthropic’s Claude model is one of a handful of leading AI systems that the Pentagon is using to rapidly build its capabilities in cyberwarfare, improve the performance of its autonomous weapons systems and increase the efficiency of its personnel.

Defense Secretary Pete Hegseth’s team has insisted in recent weeks that the military must have the freedom to use the powerful tools as it sees fit. Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks. (The Washington Post has a content partnership with OpenAI.)

The companies did not respond to requests for comment.

But Anthropic — which has sought to position itself as the most safety-minded of the companies — has corporate principles that may keep it from giving the Pentagon carte blanche. Unlike many traditional weapons, powerful AI systems can be deployed in ways not foreseen by their designers, and the dispute has raised questions about who should have the final say over their use by the military. While Anthropic has not said exactly what its qualms are with the Pentagon’s demands, its chief executive has recently warned of the dangers of autonomous weapons and AI-powered mass surveillance.

In a statement to The Washington Post, Anthropic said it is “committed to using frontier AI in support of U.S. national security.”

“Claude is used for a wide variety of intelligence-related use cases across the government, including the [Defense Department], in line with our Usage Policy,” Anthropic said. “We are having productive conversations, in good faith, with [the Defense Department] on how to continue that work and get these complex issues right.”

Until recent weeks, Anthropic had been in an enviable position, with a $200 million contract and its technology uniquely approved for use within the Pentagon’s classified networks. That quickly began to change, Trump administration officials say, following Anthropic’s response to the Pentagon’s recent use of its technology in the Maduro operation.

Technology developed by the defense firm Palantir and Anthropic’s Claude were used in preparation for the Jan. 3 raid, according to a person familiar with the assault, who spoke on the condition of anonymity to share confidential details about the operation. During the raid, scores of Maduro’s security guards and Venezuelan service members were killed.

After the attack, a senior defense official said, an executive from Anthropic discussed the raid with an executive at Palantir, asking whether Anthropic’s tools had been used. The Palantir executive relayed the question to the Defense Department, saying it implied that Anthropic might have disapproved of how Claude had been used, the official said. That prompted department leaders to call into doubt whether the company could be fully relied on.

“They expressed concern over the Maduro raid, which is a huge problem for the department,” one administration official said.

However, Anthropic said it had not discussed any specific operations with the Defense Department, nor “discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”

The dispute appears to run deeper than any questions over the attack on Venezuela. Hegseth sees AI dominance as a must-have capability, and his directives have pressed the military to move fast to embrace the technology. In January, he said that “speed wins” in an AI-driven future, and he has ordered the Pentagon to unblock data for AI training while pushing the department to move from “campaign planning to kill chain execution.”

“We must approach risk tradeoffs, ‘equities,’ and other subjective questions as if we were at war,” Hegseth wrote in the January 2026 directive.

Just over two weeks after Hegseth’s directive came down, Dario Amodei, Anthropic’s co-founder and chief executive, published an essay sketching a potential dystopia in which AI empowers a new generation of unstoppable weapons and surveillance tools.

“We should worry about them in the hands of autocracies, but also worry that because they are so powerful, with so little accountability, there is a greatly increased risk of democratic governments turning them against their own people to seize power,” Amodei wrote about swarms of AI-enabled drones.

Such weaponry is likely still many years away, but failing to reach an agreement could quickly have far-reaching consequences for the company.

The Pentagon has suggested that the company could be branded a “supply chain risk,” a designation that would affect not only Anthropic but also any firm that uses the company’s AI. The label has typically been aimed at Chinese and Russian companies.

“We may require that all our vendors and contractors certify that they don’t use any Anthropic model,” a defense official told The Post.

In the past, firms have been able to include riders in their contracts with the Pentagon that indemnify them from liability if their technology is used in an unlawful way and that bind the government to use the technology only for lawful purposes.

But it may be unreasonable for firms contracting with the Pentagon to try to set limitations on how their rapidly evolving technology can be applied, said Frank Kendall, who served as Air Force secretary during the Biden administration and oversaw its development of a fleet of autonomous warplanes.

“The military’s function is the application of violence, and if you’re going to give anything to the Defense Department, it’s likely going to be used to help kill people,” Kendall said.

The administration has held that its actions — which also include U.S. strikes on alleged drug boats in the Caribbean, its deployment of active-duty troops on U.S. soil and its decision to use lethal force in Minneapolis, killing two U.S. citizens — have been lawful. But the Trump administration has also fired many of the independent military and Justice Department lawyers who would have had the ability to challenge the legality of those actions.

“If you’re worried about this administration doing unlawful things, you should just not work with them,” Kendall said.

The Pentagon has been integrating AI into some of its weapons systems for years, but never at the speed at which it is moving now. That’s partly driven by its competition with China and by evolving threats like hypersonic missiles, where a human’s reaction time can be inadequate.

But there’s also been an emphasis on making sure AI’s unpredictable learning could be fenced in.

At Edwards Air Force Base in 2024, the Air Force flew its first AI fighter jet in dogfights — and the jet, an F-16 that carried the AI in a computer in the back, was already besting elite test pilots by shaving milliseconds off turns and maneuvers. Even then, there was a human in the loop: a test pilot inside the jet who could disengage the AI as needed. The AI itself was kept in a system that was not connected to any networks. As the Air Force moved forward with the AI, it said the priority was making sure the data the system learned from was clean, to avoid security risks.

In 2023, the Biden administration instructed the Pentagon that any use of AI in weapons systems would require levels of review, anti-tamper mechanisms and safeguards to ensure that humans would retain the decision on the use of force.

That policy is still in force but will be reviewed as needed, the administration official told The Post.

