DNYUZ
Researchers Tried to Get AI Chatbots High. The Results Were Predictably Stupid.

February 18, 2026

A new preprint study posted on Research Square asks a question that feels very of the era: can large language models like ChatGPT actually serve as psychedelic trip sitters? You know, “people” who guide you through a psychedelic experience?

To answer it, researchers “dosed” five major AI systems: Google’s Gemini, Claude, ChatGPT, LLaMA, and Falcon. Using carefully worded prompts, they asked the models to simulate first-person accounts of taking 100 micrograms of LSD, 25 milligrams of psilocybin, ayahuasca, and mescaline.

In total, the team generated 3,000 AI-written “trip reports” and compared them to 1,085 real human accounts pulled from a psychedelics-focused website. The team found a “robust and consistent” semantic similarity between AI-generated trips and authentic trip reports across all five substances.

This shouldn’t be too surprising, given that these AI models are essentially just regurgitating the human trip reports they were trained on.


According to study author Ziv Ben-Zion of the University of Haifa, the models produced text with “surprising coherence and phenomenological richness.” But even he isn’t fooled by the results, and he draws a clear line: this is mimicry, not consciousness. These aren’t conscious beings being taken on a wild ride; they’re just parroting those who have been.

LLMs don’t experience ego dissolution, perceptual distortions, or emotional catharsis. They don’t undergo neurobiological changes. All they do is reproduce patterns that humans have previously detailed.

For that reason, Ben-Zion warns that relying on AI for trip-sitting carries real risks. Users may over-attribute emotional understanding to a system that has none. In a moment of paranoia or distress, a chatbot will try to talk you through it in a way that sounds supportive, because that’s essentially what it’s designed to do. But the advice it offers can be clinically unsafe, because it can’t distinguish good guidance from bad.

More broadly, anthropomorphizing AI can intensify delusions or emotional dependency. As I’ve extensively covered here on VICE, AI personalities seem tailor-made for inducing a kind of AI psychosis that can end in mental breakdown.

The researchers call for guardrails, such as clear reminders that the system isn’t human, along with boundaries around romance and discussions of self-harm. Chatbots are good at sounding interested in their users’ health and well-being, but are entirely incapable of exercising any form of judgment to provide well-reasoned, actionable advice.

AI can convincingly talk about being in a psychedelically induced altered state, but it cannot be there for you when you’re in one.

The post Researchers Tried to Get AI Chatbots High. The Results Were Predictably Stupid. appeared first on VICE.

DNYUZ © 2026
