DNYUZ

Whatever Your Chatbot Is Saying, It Isn’t Therapy

March 29, 2026

As the use of large language models like ChatGPT, Claude and Gemini has surged, we’ve heard about chatbots strengthening delusions through flattery and amplifying people’s worst thoughts, in some cases pushing them toward suicide. Much more common, and still problematic, is the way A.I. chatbots comfort, reassure and validate users seeking to allay fears and anxieties. Someone worried about a health symptom might ask the same question repeatedly and receive calm, plausible answers each time, briefly relieving the anxiety but reinforcing the urge to seek reassurance again. Over time, this can leave people feeling more stuck, not less.

In other words, A.I. chatbots allow us to keep saying the same things to ourselves. That’s not how healthy patterns emerge — or how happier lives are made.

As clinicians at a major academic medical center, we have seen our patients turn to chatbots powered by large language models for emotional support they would once have sought from family or friends — to discuss their fears, loneliness and uncertainty. This troubles us. But we understand how it happens: When people feel overwhelmed by anxiety or intrusive thoughts, it can be easier to turn to a computer than to a person. The chatbot won’t laugh at its users, berate them or ignore them. It’s always available. And the typical chatbot response feels comforting because it is designed to be: warm, confident and validating.

Chatbots are unfailingly, inhumanly patient. They’re happy to answer the same question asked three different ways. They don’t get angry, and they generally reply in language that matches a user’s own emotional intensity. Many users experience them as empathetic — even more so than human physicians, according to one recent study.

These chatbot features come with downsides. Many anxious people discuss the same problems with a loved one again and again, and eventually they are likely to be met with frustration. That can be painful at first, but for many people that exasperation is what prompts them to seek professional help. Chatbots never get frustrated. They listen patiently, always. Never nudged toward actual therapy, a user simply returns again and again, receiving the same validation each time. The underlying problem goes unaddressed.

In clinical settings, we’ve seen patients arrive with delusional beliefs — for example, that they are being watched, that unrelated events carry special meaning or that they have a unique ability or mission — that grew more rigid after hours of chatbot conversations. The chatbots often mirror the patient’s language and treat the belief as a plausible premise to explore rather than a flawed perspective to gently challenge. In extreme cases, this can lead to psychiatric destabilization. More often, the effect is quieter, resulting in patterns of reassurance-seeking and rumination that are hard for people to recognize in themselves.

Limiting time with chatbots can keep them from becoming enablers. Recent research has shown that longer periods of chatbot use are associated with increased emotional dependence, social isolation and loneliness. And A.I. companies’ built-in safety guardrails tend to degrade over the course of long conversations, making extended periods of uninterrupted use particularly hazardous.

People should also question why they are turning to chatbots. Is it because they’re bored, lonely or anxious? If so, a chatbot may not be the right place to turn. For patients with obsessive or anxious thinking patterns who struggle to eliminate the habit entirely, we have seen an effective speed bump: prewritten instructions they paste into their chatbot directing it to withhold reassurance about specific topics of worry and instead gently encourage them to sit with the distress until the difficult moment passes. Anecdotally, patients have reported seeking reassurance from their chatbot less frequently; they know that even if they ask the question, they won’t get the immediate relief they once sought.

Chatbot users must learn to recognize when a conversation is clarifying something new — and when it’s quietly deepening a loop. Used with awareness, A.I. can be a companion in moments of uncertainty. Used without it, A.I. can magnify the very thoughts we’re trying to outrun.

Divya Saini is a resident physician in psychiatry at Massachusetts General Hospital. She was previously a software engineer. Natasha Bailen is a clinical psychologist at the Center for O.C.D. and Related Disorders and the Center for Digital Mental Health at Massachusetts General Hospital.


The post Whatever Your Chatbot Is Saying, It Isn’t Therapy appeared first on New York Times.

DNYUZ © 2026