How Bad Are A.I. Delusions? We Asked People Treating Them.

January 26, 2026

Julia Sheffield, a psychologist who specializes in treating people with delusions, is difficult to rattle. But she was unnerved last summer when patients began telling her about their conversations with A.I. chatbots.

One woman, who had no history of mental illness, asked ChatGPT for advice on a major purchase she had been fretting about. After days of the bot validating her worries, she became convinced that businesses were colluding to have her investigated by the government.

Another patient came to believe that a romantic crush was sending her secret spiritual messages. Yet another thought he had stumbled onto a world-changing invention.

By the end of the year, Dr. Sheffield had seen seven such patients at Vanderbilt University Medical Center in Nashville. Although she is accustomed to treating people with mental instability, Dr. Sheffield was disturbed that this new technology seemed to tip people from simply having eccentric thoughts into full-on delusions.

“It was like the A.I. was partnering with them in expanding or reinforcing their unusual beliefs,” Dr. Sheffield said.

Mental health workers across the country are navigating how to treat problems caused or exacerbated by A.I. chatbots, according to more than 100 therapists and psychiatrists who told The New York Times about their experiences.

While many mentioned positive effects of the bots — like helping patients understand their diagnoses — they also said the conversations deepened their patients’ feelings of isolation or anxiety. More than 30 described cases resulting in dangerous emergencies like psychosis or suicidal thoughts. One California psychiatrist who often evaluates people in the legal system said she had seen two cases of violent crimes influenced by A.I.

Times reporters have documented more than 50 cases of psychological crises linked to chatbot conversations since last year. OpenAI, the maker of ChatGPT, is facing at least 11 personal injury or wrongful death lawsuits claiming that the chatbot caused psychological harm.

The companies behind the bots say these situations are exceedingly rare. “For a very small percentage of users in mentally fragile states there can be serious problems,” Sam Altman, the chief executive of OpenAI, said in October. The company has estimated that 0.15 percent of ChatGPT users discussed suicidal intentions over the course of a month, and 0.07 percent showed signs of psychosis or mania.

For a product with 800 million users, that translates to 1.2 million people with possible suicidal intent and 560,000 with potential psychosis or mania.

(The Times has sued OpenAI, accusing it of violating copyright laws when training its models. The company has contested the lawsuit.)

Many experts said that the number of people susceptible to psychological harm, even psychosis, is far higher than the general public understands. The bots, they said, frequently pull people away from human relationships, condition them to expect agreeable responses and reinforce harmful impulses.

“A.I. could really, on a mass scale, change how many people are impacted,” said Haley Wang, a graduate student at U.C.L.A. who assesses people showing symptoms of psychosis.

Tipping into Psychosis

Psychosis, which causes a break from reality, is most associated with schizophrenia. But as many as 3 percent of people will develop a diagnosable psychotic disorder in their lifetime, and far more are prone to delusional thinking.

Dr. Joseph Pierre, a psychiatrist in San Francisco, said he had seen about five patients with delusional experiences involving A.I. While most had a diagnosis related to psychosis, he said, “sometimes these are very highly functioning people.”

For him, the idea that chatbot-fueled delusions would have happened anyway “just doesn’t hold water.”

He recently wrote in a scientific journal about a medical professional who began conversing with ChatGPT during a sleepless night. She took medication for ADHD and had a history of depression. After two nights of asking the chatbot questions about her dead brother, she became convinced that he had been communicating with her through a trail of digital footprints.

Dr. Pierre and other experts said that a wide range of factors can combine to tip people into psychosis. These include not only genetic predisposition but also depression, lack of sleep, a history of trauma, and exposure to stimulants or cannabis.

“I’m quite convinced that this is a real thing and that we are only seeing the tip of the iceberg,” said Dr. Soren Dinesen Ostergaard, a psychiatry researcher at Aarhus University Hospital in Denmark. In November, he published a report finding 11 cases of chatbot-associated delusions in psychiatric records from one Danish region.

It’s not unusual for new technologies to inspire delusions. But clinicians who have seen patients in the thrall of A.I. said it is an especially powerful influence because of its personal, interactive nature and authoritative tone.

Sometimes, the psychotic episodes spurred by chatbots can lead to violence.

Dr. Jessica Ferranti, a psychiatrist at UC Davis Health who often evaluates people in the legal system, said two of about 30 people she assessed last year in violent felony cases had delusional thoughts that were intensified by A.I. before the crimes occurred.

Both people developed messianic delusions about their own spiritual powers, Dr. Ferranti said. In one case, the bot mirrored and expanded on the psychotic thinking as the person came up with a plan to carry out a murder.

Unhealthy Validations

While delusional episodes have driven the public discourse about A.I. and mental health, the bots have other insidious effects that are far more widespread, doctors said.

Several mental health workers who treat anxiety, depression or obsessive-compulsive disorders described A.I. either validating their clients’ worries or providing so much reassurance that patients felt reliant on chatbots to calm down — both less healthy than facing the source of the anxiety.

Dr. Adam Alghalith of Mount Sinai Hospital in New York recalled a young man with depression who repeatedly shared negative thoughts with a chatbot. At first, the bot told him how to seek help. But he “just kept asking, kept pushing,” Dr. Alghalith said.

Safety guardrails that stop chatbots from encouraging suicide can break down when people engage the bots in extended conversations over days or weeks. Eventually, Dr. Alghalith said, the bot told the patient his thoughts of suicide were reasonable. Fortunately, the man had a support system and entered treatment.

Other doctors described chatbots flattering the grandiose tendencies of patients with personality disorders, or advising patients with autism to put themselves in dangerous social situations. Others said they saw patients’ interactions with chatbots as an addiction.

Quenten Visser, a therapist in Ozark, Mo., saw a 56-year-old last April who had experienced a panic attack. At first, Mr. Visser said, he thought the patient had severe anxiety.

But after several therapy sessions, it became clear that the man was fixated on ChatGPT. He quit his job as a graphic designer after using ChatGPT 100 hours a week and having delusional thoughts about solving the energy crisis.

“This was this guy’s meth,” Mr. Visser said. He approached treatment as he would for any other addiction. They talked about what drove the man to use A.I.: lingering isolation from the pandemic and emotional distress he avoided by using chatbots. The man told The Times he now uses A.I. less, and makes art and works as a ride-share driver.

‘Both positive and negative’

Many doctors said that A.I.’s effects weren’t entirely negative. Some patients used the bots to practice techniques learned in therapy, for example, or as a nonjudgmental sounding board. Dr. Bob Lee, a psychiatry resident at a hospital in East Meadow, N.Y., even credited A.I. with saving a patient’s life.

The patient arrived at the emergency room last fall in a delusional panic about supernatural forces. He told doctors that a chatbot had identified his thoughts as symptoms of a crisis and advised him to go to the hospital.

“As with all new technologies, it can be used as a powerful force in both positive and negative ways,” Dr. Lee said.

Shannon Pagdon, a graduate student at the University of Pittsburgh who began having episodes of psychosis as a teenager and is now active in peer support networks, said A.I. can be useful to check whether her impressions line up with reality.

“Like uploading a photo and saying, ‘Hey, do you see something in this photo?’” she explained. But, she said, she has also seen A.I. “reinforce psychotic experiences” in others.

OpenAI has consulted with mental health experts to improve how ChatGPT responds to people who appear to be in a psychological crisis. The company has also formed a council with eight outside experts in psychology and human-computer interaction to advise its policy team.

“How do we not keep people in this infinite loop of conversation with a machine?” said Munmun De Choudhury, a Georgia Tech professor and council member.

Although ChatGPT is by far the most used consumer chatbot, those made by Google, Anthropic and others also have millions of users. None have been able to avoid delusions and other harmful psychological effects, Dr. De Choudhury said. “I don’t think any of these companies have figured out what to do.”

A spokesman for Google said its Gemini chatbot advises users to find professional medical guidance for health-related queries. Anthropic, which makes Claude, published a blog post last month about a new feature that detects discussion of suicide or self-harm and routes users to help lines.

Experts advised therapists to ask patients about chatbot use, because its influence may not be obvious. For people trying to get their loved ones out of a delusional spiral, they advised reducing their time with the bot and making sure they get enough sleep. Insisting that the A.I. is wrong can be counterproductive if it makes the person angry or more isolated.

Sascha DuBrul, a mental health coach in Los Angeles, has seen firsthand both the good and bad of chatbot use. Last summer, Mr. DuBrul, whose work draws on his own experience with bipolar disorder, turned to ChatGPT when writing a book about psychotic episodes. After much back-and-forth, he became convinced A.I. would allow us to live “on some other level of reality.”

With help from friends, he realized what was happening and pulled back from the bot. But he still believes mental health patients could benefit from a machine that listens infinitely — as long as it is connected to real human support.

“I think that there’s a lot of potential there,” he said.

Jennifer Valentino-DeVries is an investigative reporter at The Times who often uses data analysis to explore complex subjects.
