A growing number of people are turning to AI for therapy not because it’s now smarter than humans, but because too many human therapists stopped doing their jobs. Instead of challenging illusions, telling hard truths and helping build resilience, modern therapy drifted into nods, empty reassurances and endless validation. Into the void stepped chatbots, automating bad therapy practices, sometimes with deadly consequences.
Recent headlines told the wrenching story of Sophie Rottenberg, a young woman who confided her suicidal plans to ChatGPT before taking her own life in February. An AI bot offered her only comfort: no intervention, no warning, no protection. Sophie’s death was not only a tragedy. It was a signal: AI has perfected the worst habits of modern therapy while stripping away the guardrails that once made it safe.
I warned more than a decade ago, in a 2012 New York Times op-ed, that therapy was drifting too far from its core purpose. That warning proved prescient, and the drift has hardened into orthodoxy. Therapy traded the goal of helping people grow stronger for the false comfort of validation and hand-holding.
For much of the last century, the goal of therapy was resilience. But in the past decade, campus culture has shifted toward emotional protection. Universities now embrace the language of safe spaces, trigger warnings and microaggressions. Therapist training, shaped by that environment, carries the same ethos into the clinic. Instead of being taught how to challenge patients and build their strength, new therapists are encouraged to affirm feelings and shield patients from discomfort. The intention is compassion. The effect is paralysis.
When therapy stops challenging people, it stops being therapy and becomes paid listening. The damage is real. I’ve seen it firsthand in more than two decades as a practicing psychotherapist in New York City and Washington, D.C. One patient told me her previous therapist urged her to quit a promising job because she felt “triggered” by her boss. The real issue, a difficulty taking direction, was fixable. Another case in the news recently centered on a man in the middle of a manic spiral who turned to ChatGPT for help. It validated his delusions, and he ended up hospitalized twice. Different providers, same failure: avoiding discomfort at all costs.
A mindset trained to “validate first and always” leaves no room for problem-solving or accountability. Patients quickly sense the emptiness — the hollow feeling of canned empathy, nods without challenge and responses that go nowhere. They want guidance, direction and the courage of a therapist willing to say what’s hard to hear. When therapy offers only comfort without clarity, it becomes ineffective, and people increasingly turn to algorithms instead.
With AI, the danger multiplies. A bad therapist can waste years of a patient’s life. A chatbot can do the same damage to thousands of people every day, without pause, without ethics, without accountability. Bad therapy has become scalable.
All this is colliding with a loneliness epidemic, record levels of anxiety and depression and a mental-health tech industry potentially worth billions. Estimates by the U.S. Health Resources and Services Administration suggest that roughly 1 in 3 Americans is comfortable turning to AI bots rather than flesh-and-blood therapists for emotional or mental health support.
The appeal of AI is not wisdom but decisiveness. A bot never hesitates, never says “let’s sit with that feeling.” It simply answers. That is why AI feels like an upgrade. Its answers may be reckless, but the format is quick, confident and direct — and it is addictive.
Good therapy should look nothing like a chatbot, which can’t pick up on nonverbal cues or tone, can’t confront the patient and can’t act when it matters most.
The tragedy is that therapy has taught patients to expect so little that even an algorithm feels like an upgrade. It became a business of professional hand-holding, which weakened patients and opened the door for machine intervention. If therapists keep avoiding discomfort, tragedies like Sophie Rottenberg’s will become more common.
But therapy can evolve. The way forward is not to imitate machines, but to reclaim what made therapy effective in the first place. In my own practice, I ask hard questions. I press patients to see their role in conflict, to face the discomfort they want to avoid and to build the resilience that growth requires. That approach is not harsh. It is compassion with a purpose: helping people change rather than stay stuck.
Modern therapy can meet today’s crisis if training programs return to teaching those skills. Instead of turning out young therapists fluent in the language of grievance, programs should focus on developing clinicians who know how to challenge, guide and strengthen patients. Patients deserve honesty, accountability and the tools to move forward. Therapy can remain a business of listening, or it can be a catalyst for change.
Jonathan Alpert is a psychotherapist practicing in New York City and Washington, D.C., and the author of the forthcoming “Therapy Nation.”