Ilya Sutskever has revealed what he’s working on next after stepping down in May as chief scientist at OpenAI. Together with his former OpenAI colleague Daniel Levy and Daniel Gross, Apple’s former AI lead and co-founder of Cue, he announced Safe Superintelligence Inc. (SSI), a startup with a single mission: building safe superintelligence.
In a message posted to SSI’s currently barren website, the founders call building safe superintelligence “the most important technical problem of our time.” They add: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
What exactly is superintelligence? It’s a hypothetical agent whose intelligence far surpasses that of the smartest humans.
The venture continues the work Sutskever pursued at OpenAI, where he co-led the company’s superalignment team, tasked with designing ways to control powerful new AI systems. After Sutskever’s departure, that group was disbanded, a move heavily criticized by its other former lead, Jan Leike.
Sutskever, an OpenAI co-founder, also played a central role in the brief November 2023 ousting of chief executive Sam Altman, a role he later said he regretted.
SSI claims it will pursue safe superintelligence in “a straight shot, with one focus, one goal, and one product.”
The post OpenAI co-founder Ilya Sutskever announces new startup to tackle safe superintelligence appeared first on VentureBeat.