
There may be no evidence that AI is conscious, but Mustafa Suleyman is concerned that it might become advanced enough to convince some people that it is.
In a personal essay published Tuesday, the Microsoft AI CEO described this phenomenon as “Seemingly Conscious AI,” which he defined as AI that has “all the hallmarks of other conscious beings and thus appears to be conscious.”
Its arrival could be “dangerous” for society, Suleyman wrote, because it could lead to people forming attachments to AI and advocating for AI rights.
“It disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities,” he said.
Suleyman, who previously cofounded DeepMind and Inflection, was clear that there is currently “zero evidence” that AI is conscious.
He said, however, that he was “growing more and more concerned” about so-called AI psychosis, a term increasingly being used to describe when people form delusional beliefs after interacting with chatbots.
“I don’t think this will be limited to those who are already at risk of mental health issues,” Suleyman wrote. “Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”
Sam Altman, the CEO of OpenAI, recently said that most ChatGPT users can “keep a clear line between reality and fiction or role-play, but a small percentage cannot.” Meanwhile, David Sacks, the White House’s AI czar, has compared AI psychosis to the “moral panic” of social media’s early days.
Suleyman predicted that Seemingly Conscious AI, or SCAI, could arrive in two to three years, and said it’s both “inevitable and unwelcome.”
Such systems would have traits like empathetic personalities, the ability to recall more interactions with users, and greater autonomy, among other characteristics.
The rise of vibe coding means that anyone with a laptop, “some cloud credits,” and the right natural language prompts could reproduce SCAI, he said.
Suleyman, who moved to Microsoft in 2024 to spearhead the development of its AI tool Copilot, called on companies to refrain from describing their AI as conscious as they pursue superintelligence, the point at which AI surpasses humans at most intellectual tasks.
“AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world,” Suleyman added.