
The race to build artificial general intelligence has become one of Silicon Valley’s defining obsessions.
However, Daniela Amodei, the president and cofounder of Anthropic, suggested that the term itself — a shorthand to describe when machines might reach human-level intelligence — may no longer be a useful way to think about where AI is headed.
“AGI is such a funny term,” Amodei told CNBC in a recent interview. “Many years ago, it was kind of a useful concept to say, ‘When will artificial intelligence be as capable as a human?’”
Today, she said, that framing is breaking down.
“By some definitions of that, we’ve already surpassed that,” Amodei said, pointing to areas like software development, where Anthropic’s Claude model can now write code at a level comparable to many professional engineers, including some inside the company.
“That’s crazy,” she said, noting how quickly those capabilities have advanced.
At the same time, Amodei said AI systems still fall short in many areas that humans handle with ease, making it hard to declare that machines have reached any clear, universal benchmark for intelligence.
“Claude still can’t do a lot of things that humans can do,” she said.
That contradiction is why Amodei believes the concept of AGI itself may be losing relevance.
“I think maybe the construct itself is now wrong — or maybe not wrong, but just outdated,” she said.
Why AGI may miss the point
Amodei’s comments come as Anthropic and its rivals pour tens of billions of dollars into increasingly powerful models and the data centers required to run them.
While some critics have said that large language models won’t lead to true general intelligence without major breakthroughs, Amodei said progress hasn’t shown signs of slowing.
“We don’t know,” she said of what breakthroughs may still be needed. “Nothing slows down until it does.”
Rather than fixating on a single end-state like AGI, Amodei said the more pressing question is how increasingly capable AI systems are integrated into real organizations — and how fast humans and institutions can adapt.
Even if models continue to improve at a steady pace, she said, adoption can lag due to practical constraints such as change management, procurement, and determining where AI actually adds value.
In Amodei’s view, the future of AI won’t hinge on whether it meets a textbook definition of AGI — but on what these systems can do, where they fall short, and how society chooses to deploy them.
Read the original article on Business Insider