The “godfathers” of AI have joined an unlikely mix of royals, politicians, business leaders, and TV personalities in signing a statement urging a ban on developing “superintelligence” over fears it could lead to “potential human extinction.”
The Wednesday statement, signed by more than 2,000 people—including AI pioneers Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, and former White House Chief Strategist Steve Bannon—calls for limits on creating technology that could eventually outthink humans.
The signatories are calling for a prohibition on superintelligence until there is a “broad scientific consensus that it can be developed safely and controllably,” and “strong public buy-in.”

The debate over the risks and benefits of AI has long been ongoing among key figures involved in its funding and development. The statement quotes past warnings from leaders of some of the largest AI companies, including OpenAI CEO Sam Altman and xAI owner Elon Musk, both of whom have cautioned about the dangers of Artificial Superintelligence (ASI).
Altman wrote that ASI is “the greatest threat to the continued existence of humanity,” while Musk stated that it is “potentially more dangerous than nukes.”
Current AI systems are classified as Artificial Narrow Intelligence (ANI) and rely on human guidance to operate. Tools like ChatGPT and other generative AI are built on large language models (LLMs), which are trained to produce human-like language, a step widely seen as important on the path toward ASI.
In June, Meta opened a research facility called the “Superintelligence Lab” to compete with leading firms like OpenAI and Google in creating AI capable of matching human cognitive abilities—a milestone that Demis Hassabis, CEO of Google’s DeepMind, predicts could arrive within the next five to ten years.

“Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years,” wrote Yoshua Bengio, considered one of the leaders behind the rise of deep learning in AI, in a comment on the Wednesday statement.
British computer scientist Stuart J. Russell wrote: “This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
Other signatories—including Prince Harry and Meghan Markle, the Duke and Duchess of Sussex; Sapiens author Yuval Noah Harari; and actor Sir Stephen Fry—also commented on their decision to sign the statement.
“The future of AI should serve humanity, not replace it,” wrote Prince Harry in his statement of support, adding, “The true test of progress will be not how fast we move, but how wisely we steer.”
The statement also cites data from a survey conducted by the Future of Life Institute, which found that only 5 percent of Americans support the unregulated development of AI, while 64 percent believe superhuman AI shouldn’t be made until it’s proven safe.
The post ‘Godfathers of AI’ Warn Superintelligence Could Trigger Human Extinction appeared first on The Daily Beast.