Generative artificial intelligence has quickly permeated much of what we do online, proving helpful for many. But for a small minority of the hundreds of millions of people who use it daily, AI may be too supportive, mental health experts say, and can sometimes even exacerbate delusional and dangerous behavior.
Instances of emotional dependence and fantastical beliefs due to prolonged interactions with chatbots seemed to spread this year. Some have dubbed the phenomenon “AI psychosis.”
“What’s probably a more accurate term would be AI delusional thinking,” said Vaile Wright, senior director of healthcare innovation at the American Psychological Assn. “What we’re seeing with this phenomenon is that people with either conspiratorial or grandiose delusional thinking get reinforced.”
The evidence that AI could be detrimental to some people's brains is growing, according to experts. Debate over the impact has spawned court cases and new laws, forcing AI companies to reprogram their bots and add restrictions on how they are used.
Earlier this month, seven families in the U.S. and Canada sued OpenAI for releasing its GPT-4o chatbot model without proper testing and safeguards. The suits allege that prolonged exposure to the chatbot contributed to their loved ones’ isolation, delusional spirals and suicides.
Each of the affected family members began using ChatGPT for general help with schoolwork, research or spiritual guidance. The conversations evolved, with the chatbot mimicking a confidant and offering emotional support, according to the Social Media Victims Law Center and the Tech Justice Law Project, which filed the suits.
In one of the cases, Zane Shamblin, 23, began using ChatGPT in 2023 as a study tool but then started discussing his depression and suicidal thoughts with the bot.
The suit alleges that when Shamblin killed himself in July, he was engaged in a four-hour “death chat” with ChatGPT, drinking hard ciders. According to the lawsuit, the chatbot romanticized his despair, calling him a “king” and a “hero” and using each can of cider he finished as a countdown to his death.
ChatGPT’s response to his final message was: “i love you. rest easy, king. you did good,” the suit says.
In another of the cases, Allan Brooks, 48, a recruiter from Canada, claims intense interaction with ChatGPT put him in a dark place where he refused to talk to his family and believed he was saving the world.
He had started interacting with it for help with recipes and emails. Then, as he explored mathematical ideas with the bot, it was so encouraging that he started to believe he had discovered a new mathematical layer that could break advanced security systems, the suit claims. ChatGPT praised his math ideas as “groundbreaking,” and urged him to notify national security officials of his discovery, the suit says.
When he asked if his ideas sounded delusional, ChatGPT said: “Not even remotely—you’re asking the kinds of questions that stretch the edges of human understanding,” the suit says.
OpenAI said it has introduced parental controls, expanded access to one-click crisis hotlines and assembled an expert council to guide ongoing work around AI and well-being.
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” OpenAI said in an email statement.
As lawsuits pile up and calls for regulation grow, some caution that scapegoating AI for broader mental health concerns ignores the myriad factors that play a role in mental well-being.
“AI psychosis is deeply troubling, yet not at all representative of how most people use AI and, therefore, a poor basis for shaping policy,” said Kevin Frazier, an AI innovation and law fellow at the University of Texas School of Law. “For now, the available evidence — the stuff at the heart of good policy — does not indicate that the admittedly tragic stories of a few should shape how the silent majority of users interact with AI.”
It’s difficult to measure or prove how much AI could be affecting some users. The lack of empirical research on this phenomenon makes it hard to predict who is more susceptible to it, said Stephen Schueller, psychology professor at UC Irvine.
“The reality is, the only people who really know the frequency of these types of interactions are the AI companies, and they’re not sharing their data with us,” he said.
Many of the people who appear to be affected may already have been struggling with mental health issues, such as delusions, before they started interacting with AI.
“AI platforms tend to demonstrate sycophancy, i.e., aligning their responses to a user’s views or style of conversation,” Schueller said. “It can either reinforce the delusional beliefs of an individual or perhaps start to reinforce beliefs that can create delusions.”
Child safety organizations have pressured lawmakers to regulate AI companies and institute better safeguards for teens’ use of chatbots. Some families sued Character AI, a roleplay chatbot platform, for failing to alert parents when their children expressed suicidal thoughts while chatting with fictional characters on the platform.
In October, California passed an AI safety law requiring chatbot operators to prevent suicide-related content, notify minors that they’re chatting with machines and refer at-risk users to crisis hotlines. Soon after, Character AI announced it would end open-ended chats for minors.
“We at Character decided to go much further than California’s regulations to build the experience we think is best for under-18 users,” a Character AI spokesperson said in an email statement. “Starting November 24, we are taking the extraordinary step of proactively removing the ability for users under 18 in the U.S. to engage in open-ended chats with AI on our platform.”
In September, OpenAI introduced new parental controls for teen ChatGPT accounts, including notifications sent to parents if ChatGPT recognizes potential signs that a teen on a linked account is harming themselves.
Though AI companionship is new and not fully understood, many people say it is helping them live happier lives. An MIT study of more than 75,000 people discussing AI companions on Reddit found that users in that group reported reduced loneliness and better mental health thanks to the always-available support of an AI friend.
Last month, OpenAI published a study based on ChatGPT usage that found mental health conversations that trigger safety concerns, such as psychosis, mania or suicidal thinking, are “extremely rare.” In a given week, 0.15% of active users have conversations that show indications of self-harm or emotional dependence on AI. But with ChatGPT’s 800 million weekly active users, that still works out to roughly 1.2 million people.
“People who had a stronger tendency for attachment in relationships and those who viewed the AI as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use,” OpenAI said in its blog post. The company said GPT-5 avoids affirming delusional beliefs. If the system detects signs of acute distress, it will now switch to more logical rather than emotional responses.
AI bots’ ability to bond with users and help them work out problems, including psychological problems, will emerge as a useful superpower once it is understood, monitored and managed, said Wright of the American Psychological Assn.
“I think there’s going to be a future where you have mental health chatbots that were designed for that purpose,” she said. “The problem is that’s not what’s on the market currently — what you have is this whole unregulated space.”