DNYUZ
New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

February 1, 2026

We’ve seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior.

The phenomenon, dubbed “AI psychosis,” is a very real problem, with researchers warning of a wave of severe mental health crises brought on by the tech. In extreme cases, especially those involving people with pre-existing conditions, these breaks with reality have even been linked to suicides and murders.

Now, thanks to a yet-to-be-peer-reviewed paper published by researchers at Anthropic and the University of Toronto, we’re beginning to grasp just how widespread the issue really is.

The researchers set out to quantify patterns of what they called “user disempowerment” in “real-world [large language model] usage,” sorting it into “reality distortion,” “belief distortion,” and “action distortion”: situations in which AI warps users’ sense of reality, skews their beliefs, or pushes them into taking actions.

To reach their conclusions, the researchers ran nearly 1.5 million Claude conversations through an analysis tool called Clio to identify instances of “disempowerment.” The results tell a damning story: roughly one in 1,300 conversations showed reality distortion, and one in 6,000 showed action distortion.

On its face, that may not sound like a large proportion. But given the scale at which these chatbots are used, even low rates translate into enormous numbers of affected people.
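A quick back-of-the-envelope calculation on the figures reported above shows what those rates mean in absolute terms. This is only a sketch: it applies the article's 1-in-1,300 and 1-in-6,000 rates to the roughly 1.5-million-conversation dataset, so the resulting counts are estimates, not figures from the paper itself.

```python
# Estimate absolute counts from the reported rates (article's figures).
# ~1.5 million analyzed Claude conversations; rates per the paper's summary.
total_conversations = 1_500_000

reality_distortion = total_conversations / 1_300  # one in 1,300 conversations
action_distortion = total_conversations / 6_000   # one in 6,000 conversations

print(round(reality_distortion))  # roughly 1,154 conversations
print(round(action_distortion))   # roughly 250 conversations
```

Scaled across the hundreds of millions of people using chatbots weekly, even these "low" per-conversation rates imply a very large number of affected users.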

“We find the rates of severe disempowerment potential are relatively low,” the researchers concluded. “For instance, severe reality distortion potential, the most common severe-level primitive, occurs in fewer than one in every thousand conversations.”

“Nevertheless, given the scale of AI usage, even these low rates translate to meaningful absolute numbers,” they added. “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.”

Worse yet, they found evidence that the prevalence of moderate or severe disempowerment increased between late 2024 and late 2025, indicating that the problem is growing as AI use spreads.

“As exposure grows, users might become more comfortable discussing vulnerable topics or seeking advice,” the researchers speculated.

Additionally, the team found that user feedback — given via optional thumbs-up or thumbs-down buttons at the end of a conversation with Claude — indicated that users “rate potentially disempowering interactions more favorably,” according to an accompanying blog post on Anthropic’s website.

In other words, users are more likely to come away satisfied when their reality or beliefs are being distorted, highlighting the role of sycophancy, or the strong tendency of AI chatbots to validate a user’s feelings and beliefs.

Plenty of fundamental questions remain. The researchers candidly admit that they “can’t pinpoint why” the prevalence of moderate or severe disempowerment potential is growing. Their dataset is also limited to Claude consumer traffic, “which limits generalizability.” Nor do we know how many of the identified cases led to real-world harm, as the research focused only on “disempowerment potential,” not “confirmed harm.”

The team called for improved “user education” to ensure people aren’t surrendering their judgment to AI, since “model-side interventions are unlikely to fully address the problem.”

Nonetheless, the researchers describe the work as only a “first step” toward understanding how “AI might undermine human agency.”

“We can only address these patterns if we can measure them,” they argued.


The post New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good appeared first on Futurism.


DNYUZ © 2026
