
Anthropic is no longer daring to be quite so different.
The AI startup founded by former OpenAI employees, long laser-focused on the safe development of the technology, is weakening its foundational safety principle.
In a statement on Tuesday, Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advancements would have outpaced its own safety measures.
The new policy means Anthropic is far less constrained by safety concerns at a moment when its flagship chatbot, Claude, is upending financial markets and sparking concerns about the death of software.
As part of the changes, Anthropic's Responsible Scaling Policy now sets out separate safety recommendations for the company itself and for the AI industry as a whole. The policy was loosely modeled on the US government's biosafety level (BSL) standards.
Anthropic’s chief science officer, Jared Kaplan, told Time magazine that the Responsible Scaling Policy was not in keeping with the current state of the AI race.
“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan told Time. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
The new policy still includes a commitment to delay the development or release of “a highly capable” AI model, but only in more limited circumstances.
In a lengthy blog post, Anthropic cited “an anti-regulatory political climate” as part of the reason for its decision. The company and its CEO, Dario Amodei, have pushed for AI regulations with some success on the state level, but without any major steps at the federal level.
“We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust,” the company wrote. “But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”
The company said the scaling policy was always intended to be “a living document,” language that appeared in the policy’s first version in 2023. That said, Amodei has previously said the safety policy was meant to mitigate the risks AI could unleash, even quoting Uncle Ben’s famous admonition to Peter Parker, aka Spider-Man.
“The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance, and peace, large parts of the economy, those come with risks as well, right?” Amodei told podcaster Lex Fridman in November 2024. “With great power comes great responsibility.”
Anthropic said another reason for changing the standards is that the higher theoretical risk levels in its framework, ASL-4 and beyond, cannot be contained by any one company alone. (In the biosecurity world, BSL-4 refers to the highest level of protection, implemented by an extremely small number of labs to handle pathogens like the Ebola virus.)
Safety is the core of Anthropic’s soul
Amodei has repeatedly said his company’s commitment to safety is evident in one of its first major decisions: holding back on releasing Claude in the summer of 2022.
Looking back on the move, Amodei has said that Anthropic was worried that it could not develop safeguards quickly enough for the public release of a breakthrough technology. OpenAI released ChatGPT in November 2022, kick-starting the AI race. Months later, Anthropic finally released Claude.
“Now, that was very commercially expensive,” Amodei said during a recent interview with billionaire and investor Nikhil Kamath. “We probably ceded the lead on consumer AI because of that.”
The policy change also comes as Anthropic faces pressure from the Pentagon over the red lines the startup maintains for the use of its AI models. Amodei met with Defense Secretary Pete Hegseth on Tuesday to discuss the issue. Anthropic reportedly faces a Friday deadline, after which Hegseth could seek to invoke powers to force the company to back down.
One of Claude’s previous training documents is internally referred to as the “Soul doc,” an example of rhetoric that would be out of place at most other AI companies.
Kamath pressed Amodei on how he responds to critics who say Anthropic is just pushing regulation to stifle future competitors. Amodei said the 2022 decision was an example of how his company backs up its talk on safety. He also pointed to its advocacy of US export controls on advanced chips to China, a position that Nvidia CEO Jensen Huang has criticized.
“Anyone who thinks we benefit from being the only ones to do that, it’s really hard to come up with a picture where that’s the case,” Amodei said. “You look at any one of these and, ‘okay, fine,’ but you put enough of them together, and I don’t know, I ask you to judge us by our actions.”