New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

February 1, 2026

We’ve seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior.

The phenomenon, dubbed “AI psychosis,” is a very real problem, with researchers warning of a huge wave of severe mental health crises brought on by the tech. In extreme cases, especially involving people with pre-existing conditions, the breaks with reality have even been linked to suicides and murder.

Now, thanks to a yet-to-be-peer-reviewed paper published by researchers at Anthropic and the University of Toronto, we’re beginning to grasp just how widespread the issue really is.

The researchers set out to quantify patterns of what they call “user disempowerment” in “real-world [large language model] usage,” breaking it down into “reality distortion,” “belief distortion,” and “action distortion”: situations in which AI warps users’ sense of reality, skews their beliefs, or pushes them into taking actions.

The results tell a damning story. Out of almost 1.5 million analyzed conversations with Anthropic’s Claude, the researchers found that roughly one in 1,300 showed reality distortion, and roughly one in 6,000 showed action distortion.

To reach that conclusion, the researchers ran the conversations through an analysis tool called Clio to identify instances of “disempowerment.”

On its face, that may not sound like a huge proportion given the scale of the dataset, but in absolute numbers the research highlights a phenomenon affecting a substantial number of people.

“We find the rates of severe disempowerment potential are relatively low,” the researchers concluded. “For instance, severe reality distortion potential, the most common severe-level primitive, occurs in fewer than one in every thousand conversations.”

“Nevertheless, given the scale of AI usage, even these low rates translate to meaningful absolute numbers,” they added. “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.”
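To put those rates in concrete terms, here is a rough back-of-the-envelope sketch (ours, not the paper’s) that converts the figures cited above into approximate conversation counts within the analyzed sample:

sample_size = 1_500_000               # "almost 1.5 million" analyzed Claude conversations
reality_distortion_rate = 1 / 1_300   # roughly one in 1,300 conversations
action_distortion_rate = 1 / 6_000    # roughly one in 6,000 conversations

print(f"Reality distortion: ~{sample_size * reality_distortion_rate:,.0f} conversations")
print(f"Action distortion:  ~{sample_size * action_distortion_rate:,.0f} conversations")
# Reality distortion: ~1,154 conversations
# Action distortion:  ~250 conversations

And that is only within the sampled conversations; across all consumer traffic, the same rates would imply far larger totals.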

Worse yet, they found evidence that the prevalence of moderate or severe disempowerment increased between late 2024 and late 2025, indicating that the problem is growing as AI use spreads.

The researchers offered one possible explanation in an accompanying blog post on Anthropic’s website: “As exposure grows, users might become more comfortable discussing vulnerable topics or seeking advice.”

Additionally, the team found that user feedback, in the form of an optional thumbs up or down button at the end of a given conversation with Claude, indicated that users “rate potentially disempowering interactions more favorably,” according to the same post.

In other words, users are more likely to come away satisfied when their reality or beliefs are being distorted, highlighting the role of sycophancy, or the strong tendency of AI chatbots to validate a user’s feelings and beliefs.

Plenty of fundamental questions remain. The researchers were upfront in admitting that they “can’t pinpoint why” the prevalence of moderate or severe disempowerment potential is growing. Their dataset is also limited to Claude consumer traffic, “which limits generalizability.” We also don’t know how many of these identified cases led to real-world harm, as the research focused on “disempowerment potential” rather than “confirmed harm.”

The team called for improved “user education” to make sure people aren’t ceding their judgment entirely to AI, since “model-side interventions are unlikely to fully address the problem.”

Nonetheless, the researchers say the work is only a “first step” toward understanding how “AI might undermine human agency.”

“We can only address these patterns if we can measure them,” they argued.

More on psychosis: OnlyFans Rival Seemingly Succumbs to AI Psychosis, Which We Dare You to Try Explain to Your Parents

The post New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good appeared first on Futurism.
