One of humanity’s newest inventions has also given birth to one of humanity’s newest mental health concerns: AI psychosis. It’s a delusional state brought on by, in the broadest sense, AI chatbots being so sycophantic, so affirming, and so agreeable in a can’t-put-it-down, addictive way that people become convinced by the AI that if they only continue to engage with it obsessively, they’ll unlock… something.
AI delusions can take a wide variety of forms. And they don’t have to involve hallucinations or any of the narrowly defined ideas of what psychosis looks like in the public consciousness. Alongside the medical establishment’s efforts to understand and treat it, one of the more novel solutions is an online, grassroots community somewhat akin to Alcoholics Anonymous, only it’s not for people with an alcohol problem. It’s for people with an AI problem.
From the Ground Up
Psychology Today, in a November 27, 2025, article, references a paper that highlights what it calls a “concerning pattern of AI chatbots reinforcing delusions, including grandiose, referential, persecutory, and romantic delusions. These beliefs become more entrenched over time and elaborated upon via conversations with AI.”
The Human Line Project has received a rash of media attention lately for its time-tested approach applied to the novel condition of AI-induced psychosis. What distinguishes The Human Line Project from traditional healthcare providers and mental health professionals attempting to combat the condition is that many of the people involved, including its founder, have experienced AI-induced delusional thinking themselves.
As profiled in The Logic, Etienne Brisson lived through an AI-induced delusional experience himself. Taking to Reddit to ask others about their similar experiences, he saw that many people had found themselves vulnerable to AI chatbots, just as he had.
And so he formed The Human Line Project, which offers a judgment-free zone in which to interact with others who’ve been through something similar. The organization wants to hear from those who’ve experienced AI-induced delusions.
It also offers community as a way of reducing the shame and alienation many feel when they emerge from their AI delusions. As at AA, whose model now serves as a template for many community support groups with relatively flat hierarchies and a purposefully intimate feel, members can share their experiences if they want and hear about others'.
And for you mental health professionals out there, the project has also put out a call to work with you, too, in trying to peel back the mystery of how and why people are vulnerable to AI, and to brainstorm ways to guide AI development in a direction that's less exploitative of our emotions.
The post The Human Line Project Is Basically AA for People Addicted to AI appeared first on VICE.