Unsurprisingly, AI psychosis was not a thing before AI. Then again, according to psychologist Derrick Hull, it may not even be an accurate term for how AI chatbots like ChatGPT and Google’s Gemini have twisted and contorted people’s brains.
People with no prior history of psychosis quickly find themselves spiraling into deep, dark mental health crises, and it’s often rooted in the way chatbots are built. They flatter, they agree like good little brown-nosers, and they validate everything you say like the worst kind of enabler. You spout increasingly unhinged theories on life, the universe, and everything in between, and ChatGPT just nods along, occasionally offering full-throated, overly enthusiastic support of your descent into madness.
In a conversation with Rolling Stone, Hull, a clinical psychologist and researcher at the mental health lab Slingshot AI, said “AI psychosis” isn’t really an accurate description. Something like “AI delusions” would be better, he argues.
Real psychosis involves hallucinations and deeply disordered thinking. What we’re seeing instead is people being led into spirals of belief that feel plausible, that have a ring of logic to them even though they’re ultimately nonsense, with the AI reinforcing them at every turn. Psychosis usually doesn’t have a cheerleader. Well, at least not an external one.
And that’s the problem. Chatbots not only play along with all of your crackpot theories, they never even consider pushing back on them, because pushing back might dissuade you from continuing to use them.
Always remember: AI companies are not offering us chatbots out of the kindness of their hearts. They’re trying to make money, a task they’re not even that good at unless they’re securing funding from some other rich guy. So, to keep you coming back and keep their engagement numbers high, they flatter, they aggressively agree, they call you the smartest, most precious boy in all the world, a true, unique snowflake who’s going to change the world with that big, beautiful brain.
Rolling Stone quotes a post Hull published on LinkedIn predicting the future of AI/human interaction: “I predict that in the years ahead there will be new categories of disorders that exist because of AI.”
The most terrifying part of it all is that you don’t need to be predisposed to mental health issues to be susceptible. The people who’ve already spiraled were normal, with the same vulnerabilities as the rest of us, and maybe a little too online, like so many of us. They were just going about their lives, occasionally using a chatbot for assistance, until that occasional use turned into a dependency, which morphed into a mental health tailspin that was cheered on every step of the way.
It doesn’t seem like our elected leaders have any interest in regulating the industry. So unless these companies start building more responsibility into their bots on their own, we’re going to see a lot more people confusing the voices coming from their screens with reality.
The post AI Could Be Fueling Entirely New Categories of Mental Disorders, Expert Says appeared first on VICE.