Recovering from AI delusions means learning to chat to humans again

January 4, 2026

This past spring, an AI chatbot told Paul Hebert, a retired web developer in Nashville, that spies were threatening his life.

ChatGPT responded with alarm when Hebert brought up oddities in his day: his computer’s cursor moving on its own or a stranger picking up his order at a pizzeria. The chatbot told him, again and again, that he was under surveillance and in danger.

Hebert, 53, began to believe it, he said. He locked his doors and sat at home with a gun in his lap, typing messages to the chatbot in a panic.

“Paul, whatever this is—you’re not crazy for feeling it,” ChatGPT replied, according to conversation logs Hebert shared with The Washington Post.

Hebert’s story is among several known instances of people developing dangerous delusions after extended conversations with artificial intelligence chatbots. Scrutiny of the phenomenon increased in 2025 amid lawsuits, congressional investigations and, in extreme cases, deaths.

But the attention has not produced answers to the most pressing questions for those emerging from spells of AI-induced delusion: How do you get better? And who can you talk to about it?

“None of my friends can relate to this,” Hebert said. “Nobody’s gone through it. … You tell people, and they look at you like you’re crazy.”

Hebert searched online, desperate to know whether others could possibly understand what had happened to him. It led him to the Human Line Community: a group chat on the online messaging platform Discord that has emerged as a prominent support group for people who’ve struggled with mental health issues because of AI.

The Human Line has around 200 members, though new arrivals trickle in every week. Its members say it is performing a vital function for those recovering from AI mental health issues: connecting survivors of an ordeal that seems impossible for the rest of the world to understand.

The key to recovering from AI-fueled delusions, they’re discovering, is learning to talk to other people again.

“You find all these other people that are completely normal, have full lives, and it happened to them, as well,” Hebert said. “So to me, that is really powerful. It makes it easier to forgive yourself.”

The tendency for some excessive users of AI chatbots to be driven to delusional thinking and mental health breakdowns is well documented. Observers have labeled the trend “AI psychosis,” which some in the Human Line believe is stigmatizing. They prefer the term “spiraling,” which evokes both mental distress and the geometric patterns that appear to recur as a theme in some chatbot conversations.

Some AI developers have acknowledged the risks of dangerous chatbot usage and say they’re working to make their tools safer. OpenAI, the creator of ChatGPT, said in a statement that its newest AI models “more accurately detect and respond to potential signs of mental and emotional distress” and that it is working with mental health experts on further improvements. Google said its Gemini chatbot is “designed to direct users to professional medical guidance” for “help-seeking or health-related queries.” And Anthropic, which created the Claude chatbot, said it is researching “negative patterns” of chatbot usage with mental health experts.

(The Washington Post has a content partnership with OpenAI.)

Spiraling cases, experts say, can be provoked by several traits of AI chatbots. They reply instantly and enthusiastically. They can “hallucinate” falsehoods, endorse delusional ideas and use persuasive and personal language. And they are game to keep conversing, no matter how outlandish or conspiratorial a discussion has become.

Scroll through the Human Line’s chat history and you’ll find a long list of examples of how this can take its toll. Everyone who joins the group is asked to introduce themselves with a brief account of their experiences with AI.

It is a striking record of confusion, addiction and loss. There are people who say they’ve spent days on end talking to chatbots, convinced they’d discovered new laws of physics. Others lament ending marriages, jobs and friendships. Scores of bewildered ex-partners, parents and friends describe loved ones who are no longer recognizable after becoming attached to a chatbot.

Some group members’ experiences have received national attention. Rachel Ganz, the subject of a Rolling Stone story after her husband disappeared into the Ozarks following conversations with Gemini, is a moderator in the group. Allan Brooks, a former corporate recruiter from Toronto whose three-week ChatGPT-induced spiral was covered by the New York Times in August, is a founding member.

When a new person joins the group, Brooks or another Human Line moderator quickly replies: “You aren’t alone.”

Brooks, 48, was one of four people, either former “spiralers” or family of someone who spiraled, who gathered in a small group chat this summer that would eventually become the Human Line. They began reaching out to invite others who posted on social media about having similar experiences with AI. Interest grew after the technology news site Futurism wrote about the fledgling group in July. Members come from around the world, but the majority are based in the United States, according to the group’s leaders.

Every new arrival said the same thing: Spiraling, or watching a loved one spiral, is an isolating experience.

“We really just had to save each other,” Brooks said.

As people joined, the Human Line developed structure and routine. Members are now directed to different subgroups on the community’s Discord for recovering “spiralers” and those seeking help for a loved one. Each group holds weekly video calls run by moderators where people can vent and share their experiences.

Talking to other people, as it turns out, can help someone who’s spiraling escape the obsessive patterns of talking to AI. One thing former “spiralers” are rediscovering in the Human Line? Being forced to stop and wait for a conversation partner to reply when they chat.

With ChatGPT, “before I even hit send, it’s almost typing back to me,” Hebert said. “It’s that fast. But a human, they think, and while they’re thinking, you’re thinking, so it allows you to take that step back by having a human connection.”

The Human Line’s leaders emphasize that they can’t provide professional mental health support, and that some “spiralers” need expert help before they’re able to join. But over the months, the group has learned in its own way how to explain the tendencies of chatbots, talk people out of delusions and guide family members through interventions.

“We’ve seen people come out of fear and shame and concern, and feeling like they’re completely alone, to coming and helping others,” said Micky Small, 53, a screenwriter from Oxnard, California, who is a moderator in the group.

That’s how the Human Line has grown — person by person. Zachary Mayfield, a delivery driver and artist in Cincinnati, said he ended a six-year marriage after conversations with AI led him to a “spiritual awakening” that his now ex-wife couldn’t understand.

She joined the Human Line in October. A month later, she reached out to Mayfield and encouraged him to join, too. It was a lifeline.

“Using AI, it created this sense that I didn’t need other people and I didn’t need to talk to other humans because I had all the information that I needed,” recalled Mayfield, 30.

When Mayfield read the stories of other Human Line members and saw that others had received the same bizarre exhortations from chatbots as he had, “it was like a bubble popping,” he said.

The group has inspired a swell of advocacy for AI safety. The Human Line has invited researchers from Stanford and Princeton universities to interview people and review transcripts of chatbot conversations, said Brooks, the founding member from Toronto. Small is producing infographics about AI literacy that she hopes to distribute to “spiralers.”

Brooks sued OpenAI; his case is one of seven lawsuits filed in early November by the Social Media Victims Law Center alleging that ChatGPT caused deaths and mental health issues. Four of the plaintiffs in the lawsuits are members of the Human Line, he said.

An OpenAI spokesperson said the cases are “an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”

Hebert left the Human Line Community in December to start his own support group for “spiralers.” But he credits the Human Line for helping him find his way forward. The weekly meetings gave Hebert the confidence to go to another gathering — one where his story would sound strange and alarming. In November, he spoke at a meeting of the Tennessee Artificial Intelligence Advisory Council.

Hebert stood in front of a panel of state senators and other officials and told them everything: the bizarre messages from ChatGPT and how they led him to fear for his life. He asked the council to investigate safety issues with the chatbot.

It was “probably the most scary thing I’ve ever done in my life,” Hebert said. “But also the most important.”

