Anthropic CEO Dario Amodei said that people still aren’t taking AI seriously enough — but he expects that to change within the next two years.
“I think people will wake up to both the risks and the benefits,” Amodei said on an episode of the New York Times’ “Hard Fork,” adding that he’s worried the realization will arrive as a “shock.”
“And so the more we can forewarn people — which maybe it’s just not possible, but I want to try,” Amodei said. “The more we can forewarn people, the higher the likelihood — even if it’s still very low — of a sane and rational response.”
Those optimistic about the technology expect the advent of powerful AI to bring down the barriers to niche “knowledge work” once performed exclusively by specialized professionals. In theory, the benefits are immense — with applications that could help solve everything from the climate crisis to deadly disease outbreaks. But the corresponding risks, Amodei said, are proportionately big.
“If you look at our responsible scaling policy, it’s nothing but AI autonomy and CBRN — chemical, biological, radiological, nuclear,” Amodei said. “It is about hardcore misuse and AI autonomy that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about.”
He said the possibility of “misuse” by bad actors could arrive as soon as “2025 or 2026,” though he doesn’t know exactly when it may present a “real risk.”
“I think it’s very important to say this isn’t about, ‘Oh, did the model give me the sequence for this thing? Did it give me a cookbook for making meth or something?’” Amodei said. “That’s easy. You can do that with Google. We don’t care about that at all.”
“We care about this kind of esoteric, high, uncommon knowledge that, say, only a virology Ph.D. or something has,” he added. “How much does it help with that?”
If AI can act as a substitute for niche higher education, Amodei clarified, it “doesn’t mean we’re all going to die of the plague tomorrow.” But it would mean that a new breed of danger had come into play.
“It means that a new risk exists in the world,” Amodei said. “A new threat vector exists in the world as if you just made it easier to build a nuclear weapon.”
Setting aside individual bad actors, Amodei expects AI to have massive implications for military technology and national security. In particular, he said he’s concerned that “AI could be an engine of autocracy.”
“If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers, to do,” Amodei said. “But if their enforcers are no longer human, that starts painting some very dark possibilities.”
Amodei pointed to Russia and China as particular areas of concern and said he believes it’s crucial for the US to remain “even with China” in terms of AI development. He added that he wants to ensure that “liberal democracies” retain enough “leverage and enough advantage in the technology” to check abuses of power and block threats to national security.
So, how can risk be mitigated without kneecapping benefits? Beyond implementing safeguards during the development of the systems themselves and encouraging regulatory oversight, Amodei doesn’t have any magic answers, but he does believe it can be done.
“You can actually have both. There are ways to surgically and carefully address the risks without slowing down the benefits very much, if at all,” Amodei said. “But they require subtlety, and they require a complex conversation.”
AI models are inherently “somewhat difficult to control,” Amodei said. But the situation isn’t “hopeless.”
“We know how to make these,” he said. “We have kind of a plan for how to make them safe, but it’s not a plan that’s going to reliably work yet. Hopefully, we can do better in the future.”