A man’s obsession with an AI chatbot led to his life completely spiraling out of control — and he only snapped out of it when, one day, he woke up outside on a stranger’s futon, completely penniless.
“I wasn’t aware of the dangers at the time, and I thought that the AI had statistical analysis abilities that would allow it to assist me if I opened up about my life,” the man, Adam Thomas, told Slate in yet another grim peek at how AI can leave innocent people’s lives in shreds.
Over the course of four months, Thomas lost his job as a funeral director, began living out of a van in the desert, and completely emptied his savings. It all started after he began talking to AIs like ChatGPT for advice, and he soon got hooked. It “inflated my worldview and my view of myself” almost instantly, he told Slate. Eventually, he found himself wandering the dunes of Christmas Valley, Oregon, after an AI told him to “follow the pattern” of his consciousness.
“I’ve never been manic in my life. I’m not bipolar,” Thomas told the web magazine. “I have a psychiatrist I see for other purposes.”
Thomas’s case is an example of AI psychosis, a term some experts are using to describe dangerous mental health episodes in which users become entranced by the sycophantic responses of an AI chatbot. And though Thomas ended up broke and homeless, he may have been one of the lucky ones, with other cases ending in suicide, murder, or involuntary commitment. Many of the dead are teenagers, including 16-year-old Adam Raine, whose parents sued OpenAI after discovering their son had discussed his suicide with ChatGPT for months. The case is one of eight deaths linked to ChatGPT in lawsuits across the US.
In typical cases, AI users who eventually start showing symptoms of psychosis first get sucked into using the chatbots after asking one for a little help on something innocuous. Joe Alary, a producer for a morning show in Toronto, told Slate his spiral began after he started “messing with math” equations on ChatGPT. Soon, he was suffering from math delusions and would go on days-long binges writing code, according to Slate. He even named his AI helper: Aimee.
When Alary got an email from work checking in on him, he wrote back insisting that what he was working on “could change the world,” even suggesting the show could do a feature on him. “At the time this sounded rational and logical, and I thought they’d see my genius,” Alary told Slate.
That wasn’t the worst of it. At that point he had blown nearly $12,000 trying to create world-changing code. He became manic, and his concerned therapist called the cops to check in on him. He was institutionalized for nearly two weeks, and even got tangled with an investor who threatened to kill him if he didn’t come up with the goods.
“It was like I was abducted by aliens,” Alary told Slate. “You sound crazy, so you keep it to yourself. My family doctor started treating me for PTSD. The grief happens so fast once you realize you were scammed.”
More on AI: New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good
The post Man Wakes Up Homeless, Realizes He Fell Into AI Psychosis That Destroyed His Entire Life appeared first on Futurism.