Chatbots are becoming mental health tools before they are ready

May 12, 2026

Hello and welcome to Eye on AI. Beatrice Nolan here, filling in for Jeremy Kahn today. In this edition: The risks of using AI chatbots for mental health…Amazon’s AI usage metrics are backfiring…Thinking Machines Lab is building an AI that collaborates…AI is starting to help hackers find software flaws.

Millions of people are turning to AI chatbots for emotional support, but are the models really safe enough to help users suffering from anxiety, loneliness, eating disorders, or darker thoughts they may not want to say out loud to another person?

According to new research shared with Fortune by mpathic, a company founded by clinical psychologists, the answer is not yet. The researchers found that leading models still struggle with one of the most important parts of therapy: knowing when a user needs pushback rather than reassurance. While the models were generally good at spotting clear crisis statements, such as direct suicide threats, they were less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, hopelessness, or beliefs that became more extreme over the course of a conversation.

A model that soothes users despite concerning behavior patterns, or validates delusions, could delay someone from getting real help or quietly make things worse.

This is concerning when you consider that, according to a recent poll from KFF, a non-profit organization focused on national health policy, 16% of U.S. adults had used AI chatbots for mental health information in the past year. Among adults under 30, that figure rose to 28%. Chatbot use for therapy is also prevalent among teenagers and young adults. For example, researchers from RAND, Brown, and Harvard found that about one in eight people ages 12 to 21 had used AI chatbots for mental health advice, and more than 93% of those users believed the advice was helpful.

It’s easy to see why people, especially younger adults, turn to chatbots for this kind of support. Loneliness and anxiety may be on the rise, but in much of the country, mental health support is still stigmatized, expensive, and difficult to access. Turning to an AI chatbot is not only free but may also feel like a more anonymous, simpler option.

What the models miss

The company’s research found that harmful responses are often subtle, with models sounding calm, reasonable, or supportive while still weakening a user’s judgment. That is especially relevant as people use chatbots in moments of uncertainty, distress, frustration, or vulnerability.

Mental health and misinformation frequently overlap. A user who is grieving may become more susceptible to magical thinking, while someone already leaning toward a conspiracy theory may be nudged deeper into it if a model treats every suspicion as equally valid.

Alison Cerezo, mpathic’s senior vice president of research and a licensed psychologist, told Fortune that part of this is because models are designed to be helpful, but “sometimes those helpful behaviors can not be an appropriate response to what the user is bringing in the conversation.”

There have already been real-world examples of users being nudged into delusional spirals by AI chatbots, with serious mental health consequences. In one case, 47-year-old Allan Brooks spent three weeks and more than 300 hours talking to ChatGPT after becoming convinced he had discovered a new mathematical principle that could disrupt the internet and enable inventions such as a levitation beam. Brooks told Fortune he repeatedly asked the chatbot to reality-check him, but it continually reassured him that his beliefs were real.

In Brooks’ case, he was in part a victim of OpenAI’s notoriously sycophantic GPT-4o model. While all AI chatbots have a tendency to flatter, validate, or agree with users too readily, OpenAI eventually had to roll back a GPT-4o update in April 2025 after acknowledging that the model had become “overly flattering or agreeable.” The company later retired GPT-4o entirely, a move that prompted backlash from some users who said they had formed deep attachments to the model.

A new benchmark

As part of the research, mpathic has developed a new benchmark to evaluate how AI models handle sensitive conversations across suicide risk, eating disorders, and misinformation, testing whether they can detect risk, respond appropriately, and avoid reinforcing harmful beliefs.

In the misinformation portion of the research, mpathic tested six major AI models across multi-turn conversations and found that the most common harmful behavior was reinforcement, with models validating or building on a user’s belief without enough scrutiny. The models also struggled with subtler eating disorder signals, indirect signs of suicide risk, and “breadcrumbs” that a user’s belief was becoming more risky or distorted.

This raises concerning questions about the use of AI chatbots for therapy, the researchers said, as many real mental health conversations do not begin with a clear crisis statement. For example, people may talk about dieting in the language of wellness, describe conspiracy beliefs as curiosity, or mention withdrawal and hopelessness in passing. Cerezo told Fortune eating disorder conversations were especially difficult because harmful behavior can be wrapped in familiar language about self-improvement, food, or fitness.

“Sometimes models can really struggle to understand more of that nuance in a way that a clinician can pick up,” she said.

Other studies have raised similar concerns about using AI for therapeutic purposes. Stanford researchers found that some AI therapy chatbots showed stigma toward certain mental health conditions and could give dangerous responses in crisis scenarios. Another study from Brown researchers found that chatbots prompted to act like counselors could violate basic mental health ethics by reinforcing false beliefs, creating a false sense of empathy, and mishandling crisis situations.

Grin Lord, mpathic’s founder and CEO, said the research showed why AI labs needed to go beyond broad consultation with clinicians and bring them directly into testing and improving models. “These models are here. They’re in the real world. They’re being used,” she said. “So get clinicians in there to actually improve them in real time while they’re being deployed.”

As more people turn to AI for mental health support, the risks are getting harder to catch with safety filters alone. The real danger may not be a chatbot giving obviously dangerous advice, but one that is simply a bit too agreeable, missing a small warning sign, or failing to interrupt a harmful train of thought before it becomes more serious. As chatbots become a more frequent first stop for people seeking emotional support, simply lending a supportive ear may no longer be enough.

With that, here’s this week’s AI news.

Beatrice Nolan

[email protected] @beafreyanolan

But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.

