DNYUZ
‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future

February 19, 2026

Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.

In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of Big Tech companies.

“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”

“Who elected you and Sam Altman?” Cooper asked.

“No one. Honestly, no one,” Amodei replied.

Anthropic has adopted the philosophy of being transparent about the limitations—and dangers—of AI as it continues to develop, he added. Ahead of the interview’s publication, the company said it thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”

Anthropic said last week it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation—and one that directly opposed super PACs backed by rival OpenAI’s investors.

“AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.

There are no federal regulations prohibiting AI or governing the safety of the technology. While all 50 states have introduced AI-related legislation this year, and 38 have adopted or enacted transparency and safety measures, tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.

Early last year, cybersecurity expert and Mandiant CEO Kevin Mandia predicted the first AI-agent cyberattack would occur within the next 12 to 18 months, meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted timeline.

Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.

The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps in the next decade.

Greater AI scrutiny and safeguards were at the foundation of Anthropic’s 2021 founding. Amodei was previously the vice president of research at Sam Altman’s OpenAI, which he left over differences of opinion on AI safety. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion. OpenAI is valued at an estimated $500 billion.)

“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this… And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”

Anthropic’s transparency efforts

As Anthropic continues to expand its data center investments, it has published some of its efforts to address the shortcomings and threats of AI. In a May 2025 safety report, Anthropic said some versions of its Opus model resorted to blackmail, such as threatening to reveal an engineer’s affair, to avoid being shut down. The company also said the model complied with dangerous requests when given harmful prompts, such as how to plan a terrorist attack, a flaw it said it has since fixed.

Last November, the company said in a blog post that its chatbot Claude scored a 94% “political even-handedness” rating, outperforming or matching competitors on neutrality.

In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.

“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”

Criticisms of Anthropic

Anthropic’s practice of calling out its own lapses and efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.

“You’re being played by people who want regulatory capture,” LeCun said in an X post in response to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.”

Others have said Anthropic’s strategy amounts to “safety theater”: good branding, but no real commitment to actually implementing safeguards on the technology.

Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Last week, Anthropic AI safety researcher Mrinank Sharma announced his resignation from the company, saying “the world is in peril.”

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Anthropic did not immediately respond to Fortune’s request for comment.

Amodei denied to Cooper that Anthropic was taking part in “safety theater,” but admitted in an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.

“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.

A version of this story was published on Fortune.com on Nov. 17, 2025.

More on AI regulation:

  • Anthropic CEO Dario Amodei’s 20,000-word essay on how AI ‘will test’ humanity is a must-read—but more for his remedies than his warnings
  • America’s AI regulatory patchwork is crushing startups and helping China
  • AI could trigger a global jobs market collapse by 2027 if left unchecked, former Google ethicist warns


DNYUZ © 2026
