DNYUZ
A Guide to the Pentagon’s Dance With Anthropic and OpenAI

March 7, 2026
in News

Late last month, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic, the only company that had provided the Pentagon with artificial intelligence technologies for use on classified systems.

If Anthropic did not allow the Pentagon to deploy these technologies for “all lawful uses,” Mr. Hegseth said, he would sever ties with the San Francisco start-up.

The threat set off a chain of events in which the Defense Department labeled Anthropic a “supply chain risk,” a designation that would bar all military contractors from using the company’s technologies, and signed an agreement with OpenAI, Anthropic’s biggest rival.

The negotiations were, to say the least, confusing.

How does the Pentagon use Anthropic’s technology?

Anthropic’s technologies are widely used inside the Defense Department because the start-up agreed last year to integrate its systems with technology from Palantir, a data analytics company that is approved for classified operations.

Separately from Anthropic’s partnership with Palantir, the Pentagon also uses Anthropic’s technology to analyze imagery and other intelligence data as part of a $200 million A.I. pilot program.

Anthropic’s technology is being used as U.S. military forces engage in a widening war against Iran, two people familiar with the technology said on the condition of anonymity.

Google, OpenAI and Elon Musk’s xAI are also part of the pilot program, but their systems are not yet used on classified networks. Anthropic was a step ahead of its rivals thanks to its partnership with Palantir.

Why did the Pentagon get angry at Anthropic?

On Feb. 15, The Wall Street Journal reported that Anthropic had raised concerns with Palantir about the role its technologies played in the U.S. military operation to capture Venezuela’s president, Nicolás Maduro. The story inflamed earlier tensions, as Mr. Hegseth and others at the Pentagon argued that Anthropic was resisting the military’s use of these A.I. systems.

The Defense Department was already in talks with Anthropic to establish new contractual language that would allow the Pentagon to use the company’s technologies for any lawful purpose. But Anthropic was reluctant to agree to those terms.

Why was Anthropic reluctant?

Anthropic wanted contractual language that prevented the Pentagon from using its technology with autonomous weapons or for mass surveillance of Americans. It argued that specific language was needed to ensure that the technologies were used only in ways that aligned with what they could “reliably and responsibly do.”

The Pentagon said private companies should not try to control how the military operated.

On Feb. 24, Mr. Hegseth met with Anthropic’s chief executive, Dario Amodei, and said that if Anthropic failed to agree to the Pentagon’s demands by 5:01 p.m. on the next Friday, he would designate the company a supply chain risk.

What does it mean to be a supply chain risk?

It means that a company’s technology cannot be used by the Pentagon or any of its contractors in their work with the government. The designation is typically applied only to firms with ties to the government of China.

Did cooler heads prevail?

No. The company published a blog post saying it could not “accede” to the Pentagon’s demands.

Minutes after the deadline passed, Mr. Hegseth deemed Anthropic a supply chain risk in a post to social media.

He added that “no contractor, supplier or partner that does business with the United States military may conduct any commercial activity” with the company. But the Pentagon planned to continue to use Anthropic’s technologies for up to six months as it arranged for alternatives.

The Pentagon later sent a letter to Anthropic saying it had officially designated the company as a supply chain risk.

Does Hegseth have the power to do that?

A court will probably decide. Anthropic has said it intends to sue the government, and legal scholars say a suit would most likely be successful.

“Anthropic’s case is very strong,” said Alan Rozenshtein, a professor of law at the University of Minnesota.

Legal scholars also say the Pentagon does not have the power to bar its contractors from commercial activity with the start-up beyond just using its technology. For instance, it cannot prevent contractors from investing in Anthropic, they said.

“The commercial activity language is flatly illegal,” Mr. Rozenshtein said.

That is an important point because Amazon and Google — two of Anthropic’s biggest investors — are also Defense Department contractors.

In a statement on Anthropic’s website, Dr. Amodei said Anthropic was still in discussions with the Pentagon over their contract. But Emil Michael, chief of technology for the Defense Department, quickly responded on social media that there were “no active” negotiations between the two.

Why didn’t the Pentagon just stop using Anthropic?

That would have been an easier solution to the dispute. “The correct response is to just cancel the contract and walk away,” Mr. Rozenshtein said.

Instead, the Pentagon appeared to make a political statement by labeling Anthropic a supply chain risk.

“It seems like the Pentagon just does not like Anthropic’s general political vibe and wants to destroy its entire business,” said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a policy adviser for A.I. under President Trump. “That is beyond the pale.”

How did OpenAI get involved?

A day after Mr. Hegseth met with Dr. Amodei, OpenAI’s chief executive, Sam Altman, started his own talks with the Defense Department.

Mr. Altman told the Pentagon that it should not give Anthropic the supply chain risk label because it would have a chilling effect on the department’s relationship with the tech industry. Like Anthropic, he said, OpenAI did not want its technologies used for mass surveillance of Americans or with autonomous weapons.

But Mr. Altman and OpenAI also worked on their own contract with the Pentagon. Just hours after Anthropic missed its deadline, Mr. Altman announced that OpenAI and the Pentagon had reached an agreement.

OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. But OpenAI also said it had negotiated terms that allowed the company to uphold its safety principles by installing specific technical guardrails on its systems.

Can technical guardrails prevent A.I. from being used for mass surveillance?

No. The guardrails built into today’s A.I. systems do not always work as designed. And even when they hold firm, there are many ways A.I. systems could still be used to support surveillance programs or autonomous weapons.

Three days later, OpenAI announced that it had amended its agreement with the Pentagon. It added language saying its A.I. systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

People following this odd contract shuffle argued that the Pentagon had made an agreement with OpenAI that it refused to make with Anthropic. This was another sign, they said, that the Pentagon’s response to Anthropic was politically motivated.

Does the amendment uphold OpenAI’s safety principles?

Maybe not. Legal experts point out that the Pentagon could inadvertently collect data about Americans as it worked to monitor foreigners and that it would still be allowed to analyze this data under the terms of the contract.

A contract like this is also difficult for a private company to enforce, because a violation of the terms may not be obvious, Mr. Rozenshtein said. In other words, whether a technology has been used for mass surveillance is sometimes open to debate.

Even if the government breaches the contract, OpenAI can at most cancel service and sue for damages, but it cannot force the government to live up to its end of the bargain, Mr. Rozenshtein said.

Mr. Altman and OpenAI also said the Pentagon had assured the company that its technology would not be used by defense intelligence agencies, including the National Security Agency. But OpenAI could, of course, sign a separate agreement that allows the N.S.A. to use its technologies.

So, what does all this mean?

“This is not just some dispute over a contract. This is the first conversation we have had as a country about control over A.I. systems,” Mr. Ball said. “What should the limitations be? And who gets to decide?”

But he and other experts said this was not the best way to decide these questions. They say Congress should step in to set firmer laws.

“Congress should be asking hard questions about this,” said David Bader, a professor at the New Jersey Institute of Technology. “We need a deliberate bipartisan framework for the governance of A.I.”

Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

The post A Guide to the Pentagon’s Dance With Anthropic and OpenAI appeared first on New York Times.