DNYUZ

Anthropic is dropping its signature safety pledge amid a heated AI race

February 25, 2026

Anthropic CEO Dario Amodei has repeatedly emphasized his company’s commitment to safety. Bhawika Chhabra/Reuters
  • Anthropic is weakening its foundational safety commitment.
  • A top official said it doesn’t make sense to pause AI model development in the current environment.
  • The announcement underlines the pressure on companies amid the AI race.

Anthropic is no longer daring to be quite so different.

The AI startup founded by former OpenAI employees, laser-focused on the proper development of the technology, is weakening its foundational safety principle.

In a statement on Tuesday, Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advancements would have outpaced its own safety measures.

The new policy means Anthropic is far less constrained by safety concerns at a moment when its flagship chatbot, Claude, is upending financial markets and sparking concerns about the death of software.

As part of the changes, Anthropic’s Responsible Scaling Policy now sets out separate safety recommendations for the company itself and for the AI industry as a whole. The policy was loosely modeled on the US government’s biosafety level (BSL) standards.

Anthropic’s chief science officer, Jared Kaplan, told Time Magazine that the responsible scaling policy was not in keeping with the current state of the AI race.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan told Time. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

The new policy still includes a commitment to delay the development or release of “a highly capable” AI model, but only in more limited circumstances.

In a lengthy blog post, Anthropic cited “an anti-regulatory political climate” as part of the reason for its decision. The company and its CEO, Dario Amodei, have pushed for AI regulations with some success on the state level, but without any major steps at the federal level.

“We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust,” the company wrote. “But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”

The company said the scaling policy was always intended to be “a living document,” a point made in its first version in 2023. That said, Amodei has previously said the safety policy was meant to mitigate the risks AI could unleash — even quoting Uncle Ben’s famous admonition to Peter Parker, aka Spider-Man.

“The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance, and peace, large parts of the economy, those come with risks as well, right?” Amodei told podcaster Lex Fridman in November 2024. “With great power comes great responsibility.”

Anthropic said another reason for changing the standards is that the higher theoretical risk levels in its framework, ASL-4 and beyond, cannot be contained by any one company acting alone. (In the biosecurity world, BSL-4 refers to the highest level of protection, implemented by an extremely small number of labs to handle pathogens such as the Ebola virus.)

Safety is the core of Anthropic’s soul

Amodei has repeatedly said his company’s commitment to safety is evident in one of its first major decisions: holding back on releasing Claude in the summer of 2022.

Looking back on the move, Amodei has said that Anthropic was worried that it could not develop safeguards quickly enough for the public release of a breakthrough technology. OpenAI released ChatGPT in November 2022, kick-starting the AI race. Months later, Anthropic finally released Claude.

“Now, that was very commercially expensive,” Amodei said during a recent interview with billionaire and investor Nikhil Kamath. “We probably ceded the lead on consumer AI because of that.”

The policy change also comes as Anthropic faces pressure from the Pentagon over the red lines the startup has set for the use of its AI models. Amodei met with Defense Secretary Pete Hegseth on Tuesday to discuss the issue. Anthropic reportedly faces a Friday deadline, after which Hegseth could seek to invoke powers to force the company to back down.

One of Claude’s previous training documents is internally referred to as the “Soul doc,” an example of rhetoric that would be out of place at most other AI companies.

Kamath pressed Amodei on how he responds to critics who say Anthropic pushes regulation simply to stall future competitors. Amodei said the 2022 decision was an example of how his company backs up its talk on safety. He also pointed to its advocacy for US export controls on advanced chips to China, a position that Nvidia CEO Jensen Huang has criticized.

“Anyone who thinks we benefit from being the only ones to do that, it’s really hard to come up with a picture where that’s the case,” Amodei said. “You look at any one of these and, ‘okay, fine,’ but you put enough of them together, and I don’t know, I ask you to judge us by our actions.”

Read the original article on Business Insider
