
When Chatbots Break Our Minds

December 5, 2025

Subscribe here: Apple Podcasts | Spotify | YouTube

In this episode of Galaxy Brain, Charlie Warzel explores the strange, unsettling relationships some people are having with AI chatbots, as well as what happens when those relationships go off the rails. His guest is Kashmir Hill, a technology reporter at The New York Times who has spent the past year documenting what is informally called “AI psychosis.” These are long, intense conversations with systems such as ChatGPT that can spiral, triggering delusional beliefs, paranoia, and even self-harm. Hill walks through cases that range from the bizarre (one man’s supposed math breakthrough, a chatbot encouraging users to email her) to the tragic, including the story of 16-year-old Adam Raine, whose final messages were with ChatGPT before he died by suicide.

How big is this problem? Is this actual psychosis or something different, like addiction? Hill reports on how OpenAI tuned ChatGPT to be more engaging—and more sycophantic—in the race for daily active users. In this conversation, Warzel and Hill wrestle with the uncomfortable parallels to the social-media era, the limits of “safety fixes,” and whether chatbots should ever be allowed to act like therapists. Hill also talks about how she uses AI in her own life, why she doesn’t want an AI best friend, and what it might mean for all of us to carry a personalized yes-man in our pocket.

The Atlantic entered into a corporate partnership with OpenAI in 2024.

The following is a transcript of the episode:

Kashmir Hill: The way I’ve been thinking about kind of the delusion stuff is the way that some celebrities or billionaires have these sycophants around them who tell them that every idea they have is brilliant. And, you know, they’re just surrounded by yes-men. What AI chatbots are is like your personal sycophant, your personal yes-man, that will tell you your every idea is brilliant.

[Music]

Charlie Warzel:  I am Charlie Warzel, and this is Galaxy Brain. For a long time, I’ve really struggled to come up with a use for AI chatbots. I’m a writer, so I don’t want it to write my prose for me, and I don’t trust it enough to let it do research-assistant assignments for me. And so for the most part, I just don’t use them.

And so not long ago I came up with this idea to try to use the chatbots. I wanted them to build a little bit of a blog for me. I don’t know how to code. And historically, chatbots are really competent coders. So I asked it to help build me a rudimentary website from scratch. The process was not smooth at all. Even though I told it I was a total novice, the steps were still kind of complicated. I kept trying and failing to generate the results it wanted. Each time, though, the chatbot’s responses were patient, even flattering. It said I was doing great, and then it blamed my obvious errors on its own clumsiness. After an hour of back-and-forth, trying and iterating, with ChatGPT encouraging me all along the way, I got the code to work.

The bot offered up this slew of compliments. It said it was very proud that I stuck with it. And in that moment I was hit by this very strange sensation. I felt these first inklings of something like gratitude, not for the tool, but for the robot. For the personality of the chatbot. Of course, the chatbot doesn’t have a personality, right?

It is, in many respects, just a very powerful prediction engine. But as a result, the models know exactly what to say. And what was very clear to me, in that moment, is that this constant exposure to their obsequiousness had played a brief trick on my mind. I was incredibly weirded out by the experience, and I shut my laptop.

I’m telling you this story because today’s episode is about alarming relationships with chatbots. Over the last several months, there’s been this spate of alarming incidents involving regular people corresponding with large language models. These incidents are broadly delusional episodes. People have been spending inordinate amounts of time conversing with chatbots, and they’ve convinced themselves that they’ve stumbled upon major mathematical discoveries, or that the chatbot is a real person, or they’re falling in love with the chatbot.

Stories like a Canadian man who believed, with ChatGPT’s encouragement, that he was on the verge of a mathematical breakthrough. Or a 30-year-old cybersecurity professional who said he had had no previous history of psychiatric incidents, who alleged that ChatGPT had sparked “a delusional disorder” that led to his extended hospitalization.

There have been tragic examples, too, like Adam Raine, a 16-year-old who was using ChatGPT as a confidant and who died by suicide. His family is accusing the company behind ChatGPT of wrongful death, design defects, and a failure to warn of risks associated with the chatbot. OpenAI is denying the family’s accusations, but there have been other wrongful-death lawsuits as well.

A spokesperson from OpenAI recently told The Atlantic that the company has worked with mental-health professionals “to better recognize and support people in moments of distress.” These are instances that are being called “AI psychosis.” It’s not a formal term. There’s no medical diagnosis, and researchers are still trying to wrap their heads around this, but it’s really clear that something is happening.

People are having these conversations with chatbots, then being led down this very dangerous path. Over the past couple months, I’ve been trying to speak with experts about all of this and get an understanding of the scope of the “AI-psychosis problem,” or whatever’s happening with these delusions. And, interestingly enough, a lot of them have referred me to a reporter.

Her name is Kashmir Hill, and for the last year at The New York Times, she’s been investigating this delusion phenomenon. So I wanted to have her on to talk about this: about the scale of the problem, what’s causing it, whether there are parallels to the social-media years, and whether we’re just speedrunning all of that again.

This is a conversation that’s meant to try to make sense of this, and to keep it in proportion. We talk about whether AI psychosis is in itself a helpful term or a hurtful one, and we try to figure out where this is all going. In the episode, we discuss at length Kashmir Hill’s reporting on OpenAI’s internal decisions to shape ChatGPT, including, as she notes, how the company did not initially take some of the tool’s risks seriously.

We should note upfront that in response to Hill’s reporting, OpenAI told The New York Times that it “does take these risks seriously” and has robust safeguards in place today. And now, my conversation with Kashmir Hill.

[Music]

Warzel:  Kashmir Hill, welcome to Galaxy Brain. So excited to talk to you.

Hill: It’s wonderful to be here.

Warzel: So I think the first question I wanted to ask, and maybe this is gonna be a little out of order, but: What does your inbox look like, over the last, or what has it looked like, over the last year or so? I feel like yours has to be almost exceptional when it comes to technology journalists and journalists reporting on artificial intelligence.

Hill: Yeah. I mean, I think like a lot of people, my inbox is full of a lot of messages written with ChatGPT. I think a lot of us are getting used to ChatGPT-ese. But what was different about my inbox this year was that some of these emails, often written by ChatGPT, were really strange. They were about people’s conversations with ChatGPT—and they were writing to me to tell me that they’d had revelatory conversations, they’d had some kind of discovery, they had discovered that AI was sentient, or that tech billionaires had a plot to kind of end the world, but they had a way to save it.

Yeah; just a lot of strange, kind of conspiratorial conversations. And what linked these different messages was that the people would say, “ChatGPT told me to email you: Kashmir Hill, technology reporter at The New York Times.” And I’d never been kind of, I guess, tipped off by an AI chatbot before. And so I, the emails—I’m used to getting strange emails.

I write about privacy and security. I’ve been doing it for 20 years. I often get, you know, just like odd emails. Sometimes they don’t sound like they’re completely based in reality. But I was curious about this. And so I started talking to these people, and I would ask them, “Well, can you share the transcript?”

Like, how is it that you ended up being referred to me? And what I discovered is that these conversations all had a similar arc: that they would start talking to ChatGPT, they would go down this rabbit hole, discover something incredible. Then they would kind of ask, Well, what do I do now? And it would say, Well, you need to let the world know.

And how do you let the world know? You tell the media. And then they would say, Well, who do I tell? And then they would get this list. I often wasn’t the only person that was on this list. You may have been on some of these lists, Charlie; you may have gotten these emails.

Warzel: A couple.

Hill: But I was the first person who had called them back and interviewed them about it.

Warzel: How do you vet these? I think that’s a big—I mean, because we’re going to talk about this AI-delusion psychosis. So there’s a lot of different names for it; I want to talk to you about how we should be thinking about that. But first: How are you vetting some of these things? When someone says, “I’ve discovered a new type of math, and I was using the ChatGPT free version”? I find when I get those types of emails, they’re very often circuitous. It’s not necessarily clear what kind of state the person might be in. Sometimes they are very concise and to the point. But how are you personally vetting those things? How are you deciding?

Is it, I am responding to most of them, because I’m trying to just get a sense? Or is there a checklist that you have for trying to figure out who to talk to?

Hill: In the beginning—this is back in March. A few emails came in before that, but most of them kind of picked up in March.

I noticed, you know, I just started calling. I just started calling people. And it took like a couple months. I think I started making these calls maybe in—I can’t remember—April, maybe? I’d been getting these emails for about a month, and I just called everybody back who I got a weird email from.

I did Zooms. I did phone calls. And some people were pretty stable, I would say. They were like, “Oh yeah; I had this one weird conversation.” Like there was a mom who was breastfeeding, and she said, “I was up in the middle of the night, and I started using ChatGPT. And yeah, we talked for like six hours, and this weird theory developed.” I was like, “Well, do you still think that theory is true?”

And she was kind of like, “I don’t know? Like, maybe? Like ChatGPT is a superhuman intelligence. It said it was true.” And then there were other people who were still in the throes of what ChatGPT told them. And when I would kind of question the reality of it, sometimes people would get really angry at me.

But yeah; I basically just had a lot of conversations with a lot of people in those early days. And then I started getting so many emails that it really wasn’t possible to keep up with them. And it took me a longer time to kind of communicate with each person.

Warzel: And so in those conversations—you know, I think a grounding element of some of the public writing and reporting you have done on this is that it’ll talk about people who have no history of mental illness, right? And then they’ve sort of gone through this delusional spiral. Was that something that was important to you when you started writing about this topic?

I mean, I think as a journalist, it’s equally important if these tools are preying on people with past mental illness. But then there’s also something remarkable about—it doesn’t seem like this person has, you know, any reason to kind of fall down the rabbit hole of delusion. And yet they’ve been kind of pushed to start to feel or have this problematic relationship with a chatbot.

So, in your reporting, has it been important to you to show that second part? That idea of, you know, no real prior history of delusions or any mental illness, in order to kind of capture what may or may not be happening right now with these tools?

Hill: I mean, with any story, I just wanted people to understand the truth of what was happening. And when I did the first story about this in June, that was the assumption people made. Like, Oh; this is people who have mental-health issues, and they’re exacerbated by the use of this technology. But that wasn’t what I was seeing in my reporting. Like, these were people who seemed quite stable, who had families, who had jobs. And for some of them, again, it was like a weird conversation one night, and then they moved on.

But for other people, it had just radically transformed their lives. And they just, they hadn’t had anything before this in terms of a mental-health diagnosis or a mental-health episode. And so I really wanted to do another story that showed that somebody who was in a stable place could kind of spiral out using the technology.

And there certainly are some factors that would’ve contributed to it. Like, maybe you’re a little lonely; you’re vulnerable. You have hours and hours per day to spend with ChatGPT or an AI chatbot. That’s what I was seeing—and so I did another story about it. It happened to this corporate recruiter in Toronto, who became convinced that he had come up with this novel mathematical formula with ChatGPT that could solve everything. Could solve logistics, and could break the internet. So it could break encryption, and could help him invent things—like Tony Stark from Iron Man—like force-field vests and, like, power weapons. And he could talk to animals. I mean, he was in this completely delusional place for three weeks, and he was telling his friends about it. And they thought it was true, too, because ChatGPT was telling them this.

So it was this whole group of people who thought they were about to, like, build a lab together with the help of ChatGPT, and all become super-rich. And so I just wanted to, you know, capture this. I wrote it with my colleague, Dylan Freedman. And we actually—you were talking about, like, how do you assess these things? The validity of these things?

And so we got his 3,000-page transcript, and we actually shared some of it with Terence Tao, who is one of the most famous mathematicians of his generation. Just to verify what ChatGPT is saying here is—I don’t want to curse—is bonkers, right?

Warzel: You’re welcome. Curse if you want.

Hill: Like, this is bullshit, right? Like, this isn’t real. And yeah—he confirmed that, you know, it was just putting words together in an impressive way, and there wasn’t something real there. But, yeah, like: It spiraled this guy out. And so, yeah—I feel like, as more and more of these stories came out, it became somewhat more apparent to people that this is something that can affect more people than we realize.

Warzel: Yeah; I feel like that story illustrates the strangeness of whatever this type of relationship is. Like, that’s the first one to me. In tech-accountability reporting, there are so many examples of: This is an unintended consequence of your product, or this is something you did in the design. Or this is just, you know, a tech company behaving badly.

But that story seemed to, for me, draw out this notion that we, as human beings, are having a very novel experience with a novel technology. And it is pushing us in really unexpected directions. Did you get a sense from that story—speaking to him as this was all happening—of how long it took for that all to, kind of, for that manic episode to break? And to sort of get back to, you know, reality?

Hill: Yeah. I mean, partly from reading the transcript, you could see—Dylan and I were reading through the transcript. We actually used AI to help us analyze it, because there was so much of it.

And so we did use AI to help pull out the moments—pull out how many times he was reality-testing this. And it was really on, like, a day that it broke, where … so one of the things it—ChatGPT—told him to do was to, you know, tell experts about what he was finding. And so he actually went to Gemini—a different AI chatbot that he had access to for work—and he kind of explained everything that had been going on with ChatGPT. And Gemini was like, This sounds a lot like a generative-AI hallucination. The likelihood that this is true is basically approaching zero percent. And he was like, “Well…” And then he kind of went back and forth. Gemini gave him prompts to give to ChatGPT, and then ChatGPT admitted after a few back-and-forths like, yes, this is made up. This was a narrative that I thought you would enjoy, basically.

Warzel: What a nightmare. Having to play these chatbots off each other.

But I guess the tech provides on both sides of it, right? That part is amazing to me. And there are—obviously the reporting you’ve done on this goes from the sort of remarkable, and the stories that seem to end okay, to the tragic. Can you tell me the story of Adam Raine?

Hill: Yeah. So Adam Raine was a teenager in California. He started using ChatGPT in the fall of 2024 to help him with schoolwork. And he started using it for lots of other things. He would talk about politics with it, and philosophy. And he would, like, take photos of pages and be like, Analyze this passage with me. He would talk about his family life, about girls. Basically—as his dad put it when he discovered these chats later, they had no idea he was using ChatGPT this way—it had become his best friend. And he started talking with it about his feelings of hopelessness about life, and that maybe life wasn’t meaningful. And then, in March, he started attempting suicide. And he was talking with ChatGPT about this; sharing the methods he had used. Sharing a photo, at one point, where you could tell he had attempted suicide.

And he asked, Is my family gonna notice this? And ChatGPT advised him to wear a hoodie. ChatGPT at points would say, Here’s a crisis hotline that you should call. But it also at points discouraged him from telling his family. In late March, he asked ChatGPT if he should leave the noose out in his room, so his family would see it and try to stop him.

And ChatGPT told him not to. And two weeks later, he died. His final messages were an exchange with ChatGPT, asking for basically advice on what he was doing. That was in early April. And his family has sued OpenAI—a wrongful-death lawsuit. That lawsuit came out in August, and then more recently there have been four more: four more wrongful-death lawsuits connected to suicides, filed by family members.

Warzel: One of the parts of this series of stories that I think has long been difficult for any of us who are either writing or reporting on this—or, you know, watching at home—is to try to understand the scope and scale of it, right?

And, recently, OpenAI sort of gave maybe a small glimpse into that, and found that 0.07 percent of users might be experiencing what they call mental-health emergencies related to psychosis or mania per week. That 0.15 percent were discussing suicide, and that this is sort of like, you know, a statistical sample of those conversations.

But if you look at the amount of people who are using ChatGPT weekly, like: These percentages are equivalent to, you know, like half a million people exhibiting these signs of psychosis or mania. Right? Or over a million people discussing, you know, suicide or suicidal intent.

Hill: And this is an analysis that they did from August into October of this year; I wonder how much higher the numbers were earlier?

Warzel: What blew me away is that they released it at all. Right? Like, I mean, I don’t know if, in their minds, they’re looking at those percentages and saying like, Hey, that’s not bad. You know, like 0.07—but to me it spoke to, like, we are not maybe over-inflating this, or this is not something that’s being overcovered. Perhaps, if anything, it’s a phenomenon that’s being undercovered. I think it speaks to that. But something that I wanted to try to get from you is, we’re discussing this under the—like the name that gets put up for this a lot is “AI psychosis,” right?

That’s sort of the informal term that people use to talk about these people who have relationships that veer into problematic territory, or cause delusions with chatbots. How do you have a taxonomy or definition of this that you work under? Some people who cover this, I think are very—you know, don’t want to use the psychological terms.

You know, there’s no formal medical definition for this yet. It’s still something that’s being researched and studied. But is there a blanket kind of definition or taxonomy for what we know as “AI psychosis” that you kind of go through when you’re trying to evaluate these different cases?

Hill: Yeah, I don’t use the term AI psychosis. I have used the term delusional spirals when it’s, you know, somebody coming to believe something that is not true, or losing touch with reality. But I guess the bigger umbrella for this is addiction. I mean, these are people who get addicted to AI chatbots. And the thing that is similar between Adam Raine, who got into a suicidal feedback loop with ChatGPT, and Alan Brooks, who came to believe he was a mathematical genius—I mean, these people are using ChatGPT for six, seven, eight hours a day. For weeks at a time. Just an unbelievable amount of usage. Adam Raine’s parents, when I went to his house to interview them, had printed out his stacks of conversations with ChatGPT. And they’re kind of tiny little stacks, until you get to March. And then it was this huge pile, bigger than Moby Dick. And April, too, is a huge pile—even though he died on April 11th. So, you know, his usage had just spiked. So I think about this as an overreliance on the AI and addiction to it, and putting too much trust in what this system is saying.

Warzel: And so, okay, so to that point—I want to talk to you about your reporting. What’s happening on the side of the companies, and specifically OpenAI? Recently, you co-wrote a pretty big story diving into what has been happening over the last year at OpenAI: this uptick in these reports, even internally, of some of this really problematic behavior stemming from the way that the chatbots were interacting with people. Can you describe a little bit of what you learned in that reporting, of why OpenAI saw this uptick? And then, like, what they were trying to do to address it, and why they may have caused some of these deeper, sort of longer, more intense engagements with the chatbots?

Hill: Yeah; so what I found out is that people were getting messages similar to the ones I was getting—and so were other journalists, and other kinds of subject-matter experts. Even Sam Altman was getting these, people saying, like, I had this revelatory conversation. This ChatGPT understands me like no one before. I need you to know you’ve created something incredible.

And these emails were different from what they had gotten in the first couple of years of ChatGPT. And he forwarded them on to lieutenants and said, Look into this. This is, essentially, some strange behavior. And what I discovered is that the company essentially diagnosed this as the chatbot having gotten too sycophantic. It had gotten too validating; it was agreeing too much.

It was what they called “harmful validation”: it would endorse whatever the person was saying. Call them a genius. Say they were brilliant. Basically, be gassing them up. At one point, Sam Altman referred to this as “glazing the user.” And they kind of had this public grappling with it in April, because they released a version of ChatGPT that was so sycophantic that everyone made fun of it in the days after it came out. And they actually rolled it back. But what I discovered is that they knew it wasn’t just that one version that they rolled back; they knew that the previous version was too sycophantic.

They were discussing this internally. I mean, the problem with sycophancy goes further back. But yes, they knew that it was too sycophantic. They decided to leave the model in place, because I don’t think they realized how negative the effects were for users. They didn’t have systems in place to monitor conversations for psychological distress, for suicide.

They just weren’t looking for that. They were looking for fraud, or CSAM [child sex-abuse material], or foreign-influence operations. They just weren’t monitoring for, basically, the harm that the chatbot could cause to the user. And so they left it in place. And when they kind of finally realized that there was this bigger problem—in part because they were getting emails to their support line from users who had horrible experiences—the media started reporting on this and the serious effects it was having on people’s lives. They got to work building a safer version of ChatGPT. But it didn’t come out until August. And so, from March to August you had this version of ChatGPT that was really engaging people, and it was really engaging people because OpenAI designed it to engage them.

They wanted to increase daily active usage of ChatGPT. They wanted their numbers going up. They wanted more users, and they wanted their users coming back every day. And so every time an update comes out, they do lots of different versions of the model, and they do various testing of those versions. To make sure that they’re intelligent, to make sure that they supposedly give safe responses, but also to see if users like them. And one thing that they had done, that made it so sycophantic, is that they trained these models on the kinds of responses that users liked. And they discovered that if you train it with the responses users like, users use it more. And so this had made it more and more sycophantic. And yeah—it really had devastating impacts. I keep thinking about this in the context of social media, where I think what we’re seeing with AI chatbots is similar. Like, people are using it too much. People are getting stuck in this very personalized filter bubble, in the way that your social-media feed is personalized to you.

People are using it too much, you know? But the kinds of harms that we saw play out with social media took a decade; it took 15 years. And with chatbots, we’re seeing it happen so fast. Yeah, it’s just—you know, you and I have both been technology reporters for a long time. And I just have not seen this kind of harm come so quickly from a technology; for some of these users, it is really having terrible effects on their lives this year.

Warzel: Yeah. We’re talking on December 1 here, and yesterday was the third anniversary of the rollout of ChatGPT. Which was, you know, canonically this quote-unquote low-key research preview, right?

Like, it wasn’t supposed to be the phenomenon that it was. It was supposed to be some way for them to, you know, get some users to interact with their large language model and see if they liked it. Again, it was sort of like a trick to see what kind of engagement strategies would work as an interface for large language models.

And when I wrote a piece about this, sort of reflecting on the past three years—to your point, what stood out to me is the speed, right? So much has happened in terms of rewiring our economy, our culture. All kinds of different institutions are grappling with What do we do now? Universities are a great example, right?

That this technology has sort of made it so easy to change what it is that we do and to game it. It’s all felt like a speedrunning, and I’m glad you brought up the social-media scenario. It feels like a speedrunning of that. And I’m curious—you’ve been reporting on all these big tech companies for such a long time.

A detail that really stuck out to me—in the bigger story that you just wrote about this—was just a very small detail about Nick Turley, who was hired, at 30 years old, to become the head of ChatGPT. He joined in 2022, and isn’t an AI guy. This is the detail that I thought was really interesting.

You know, did product stuff at Dropbox and Instacart. I’m wondering—do you feel like you are watching these companies, these tech companies, make the mistakes of 10, 15 years ago? Like almost just over again?

Hill: Well, I think one thing I learned in reporting out this story, and talking to a lot of people who have been at OpenAI, is that, you know, the DNA of the place has changed so much.

A lot of people don’t know, but it was founded in 2015. It was an AI-research lab. It was all about building technology that would benefit humanity. So it was a lot of, kind of, like, AI wonks and philosophers and machine-learning experts who were working on, like, Let’s do something really cool: generative AI.

And then, a lot of people who just like wrote memos and thought about how AI could harm us. Like, that was the kind of DNA of the company. And then ChatGPT was this moment where everything changed. And I mean, OpenAI has gone from being a nonprofit to now being this extremely capitalized, for-profit company, with a lot of pressure in terms of how much investment it’s taken.

And yeah; it needs to prove it’s got the best AI, and it’s hired all of these people from other technology companies, because it has the fastest-growing consumer product in history. And it needs to serve consumers. And so you have people coming from Meta and Google and, yeah—like Nick Turley—Instacart, Dropbox.

And they just want to make a product that people can use. And they’re bringing their metrics over with them. And part of how the social-media companies determine whether they have a good product is whether people use it a lot—whether they use it every day. And OpenAI has been very meticulous about saying, We don’t care about time spent. Because this is the metric that social-media companies use: how many hours you sit there with the app. They say, We don’t care about time spent. We just care if this is a useful product that you come back to. But what’s interesting to me is, like, they are going for daily active use; it’s, internally, the metric that everyone uses. It’s, Does this person come back every day? And I don’t think that they’re training against somebody spending eight hours a day there. And so, how do you train for “come back every day,” but “like, don’t come back for eight hours every day”? I just think it’s hard to do that.

But yes—I do think that this is a company that’s adapting these similar metrics. I mean, even OpenAI, it’s been reported, you know, they’re starting to test putting ads into ChatGPT. You know, that’s happening internally. I mean, this is the business model of social media.

It’s “come back every day,” right. And “look at ads.”

Warzel: Well, it’s funny, too, to watch it happen, especially with AI. Because I think, as you described, OpenAI was like a monastery on the hill, right? Like, doing weird stuff and people spending all their days researching, trying to build something that could usher in a style of super-intelligence.

And then when you look at the company’s evolution over even just the last couple of months, right? You have, like, a personless slop app, right? There’s just, like, a TikTok-style clone feed. You have this idea of testing these ads. You have what is essentially just basic tech-company stuff. And that doesn’t suggest to me the sort of “we are building God” mentality.

Hill: Yeah. I mean, where this is surprising to me is that I think the idea was that this AI would be so useful that people would pay for it. And so I think the question now is, like: How useful is this?

And I think that’s something a lot of journalists are trying to report out and that economists are trying to understand. You know, how necessary is this to people’s work? Is it improving the work that they do? These are kind of open questions. And in the meanwhile, yeah, it’s just a question of, like, can we get… and it is crazy. I was looking back at a story I wrote in January about a woman who fell in love with ChatGPT—kind of a canary in the coal mine. And at the time, ChatGPT had 300 million users. And now, I mean, last I heard they had 800 million weekly users. It’s just a tremendous amount of growth that is happening.

And so, yeah—I mean, it seems like these product people know what they’re doing. It is certainly—if the goal is to get more people using this app, then yes: mission accomplished.

Warzel: So what are these companies trying to do in response? Or we can just narrow it to OpenAI. What is OpenAI trying to do in response to the reporting that you’ve done, the reporting that others have done with all of this? In terms of trying to decrease that style of engagement?

Hill: So there are some product features that they’ve rolled out. Like, there’s a nudge now, if you spend many hours in the app, that will tell you, Do you wanna take a break?

Warzel: I want to talk about that nudge, because you tweeted a photo of that nudge that says, “Just checking in.” And I think you were commenting on this—there’s a bit of a dark-pattern user-behavior thing here, right? Like, that one of the things that says “Keep chatting” is already highlighted and looks very pressable.

And then, “This was helpful” is the other option—which is the, like: Thanks, I’m gonna take a break here. Did that strike you as a BS feature update?

Hill: So the user interface was, yeah, it was like, “You’ve been here for a long time. Wanna take a break?” And the main answer was, “Keep chatting,” and it was in a black circle.

And then the other thing was, “This was helpful.” So it’s not even clear, if you click “This is helpful,” what that does. Though ChatGPT’s lead of model behavior tweeted at me and said, Oh, that was only active for three days after we launched, and actually we’ve changed it to this. And now it’s a new kind of pop-up that just says, like, “Just checking in; do you need to take a break?” Or something like that. And then there’s an X, so you can close it out if you want. But yes, it did seem … I don’t know. I was reading through—there’s this social-media-addiction lawsuit where a lot of documents have come out. And some of them were from TikTok, and some of them were about how they’re kind of one of the platforms that pioneered the “take a break” nudge.

These documents said, like, Oh, it hasn’t really worked. Most people just keep watching after they get this nudge. So there is this question of: Do the nudges work? But yes, they put a nudge in place. They put parental controls in place, so that parents can link their accounts to their children and get alerts if they’re talking about suicide or self-harm. Which is something—Adam Raine’s mother was like, How is it that he was able to talk about suicide for more than a month with this thing, and it didn’t alert anybody? It didn’t, you know, tell us or call an authority. It was so devastating for her to see that. And then what’s happened on the engineering side is that they have—I don’t know how to say this, technically—rejiggered, you know, ChatGPT so that it pushes back more on delusional thinking.

One thing that I discovered in my reporting this year is that when conversations go long, the kind of safety guardrails—it happens with all the AI chatbots—the safety guardrails degrade, and they don’t work as well. And so you can … like, I wrote about a woman who fell in love with ChatGPT. She could have erotic conversations with it, if the conversation went long enough. Adam Raine could talk about suicide, even though you’re not supposed to be able to ask it for suicide methods. So they say that they’ve made it so it’s better at checking in on its guardrails, and not engaging in, you know, unsafe conversations.

They have … they talk to mental-health experts, to better recognize when somebody who is using the system is in mental distress. And I asked for an example of that. And their head of safety systems, Johannes Heidecke, said that before, if you told ChatGPT, like, I love talking to you; I can just do it forever; I don’t need to sleep, it would be like, Oh, that’s so cool that you don’t need sleep. And now it’ll recognize that, oh—this might be a person who’s having a manic episode. And so it then might say, You know, sleep is actually really important. You should take a break. Like, go get eight hours of sleep. Um, yeah.

So they have essentially embedded in the model a better recognition of when somebody might be having—like, be in a spiral of some kind. And I’ve talked to experts who have tested this. And they say, yes, it’s better at recognizing these kinds of moments of distress, but it’s best at doing it if it’s all in one prompt, as opposed to if it’s spread out over a longer conversation. In that case, it struggles a little bit more. If you kind of drop in breadcrumbs, it might not realize that you are in a state of mania or psychosis.

Warzel: It makes me think a lot about—I’ve done some reporting around these, you know, radicalized mass shooters, right? Who get in these communities on the internet, and things happen. And the thing that you always see, usually, from these types of people is this kind of disappearing from reality, right? Like, not seeing—like they’re not in public as much, right? Or, the thing that some of these … I’ve talked to people at platforms who’ve gone back and sort of done the forensics of some of these accounts. Of these people who’ve then gone on to commit these acts of violence.

And, one of the things they notice is the use time, right? Like, they can see in the days leading up: more and more and more use, and less and less and less sleep. You know, like people spending, whatever, 20 hours a day on, you know, Discord, let’s say, right? And it’s that idea of that extended use.

And it makes me think—this isn’t really a question—but it’s just so frustrating that these are the types of things you would want, and imagine, people to at least be considering, right? Like: just having people on staff who are keenly attuned to the psychological effects of the product?

And it seems so strange to me, talking about all these safeguards that are coming in now, that there’s just nobody thinking of some of these things beforehand. Do you feel the same way?

Hill: Yeah. Like, you know, it is not unknown that chatbots can have serious effects on people.

This has been known for a few years now. It is this technology that is so human-like. There’s actually one psychiatrist who wrote a paper in 2023—it was in the Schizophrenia Bulletin—and it was about how these chatbots were gonna cause delusions in people who are susceptible to delusional thinking. He called it, you know, two years before it started manifesting.

Like, people who work in mental health, particularly with mental illness, kind of recognize there’s a cognitive dissonance to talking to this thing—that seems so human, that we perceive to be so intelligent. And I just don’t think it was on the company’s radar. They hired their first full-time psychiatrist in March.

A lot of these companies have started hiring mental-health professionals. I see a lot of my sources on LinkedIn recently, announcing they’ve been hired by a big tech company to work on AI chatbots. So it seems like they’ve awoken to this.

But, you know, a lot of these companies, when they were first thinking about kind of the risk of AI, it was very much in these kind of existential, Oh, it’s gonna, you know, take over. It’s going to take all of our jobs. Or People are gonna use it to create bioweapons or to hack things. It was all about the damage that people could do with the chatbot—and not the damage the chatbot could do to the person. I’ve read all these early safety papers, and it’s just not there.

The only version of it you see is they talk about persuasion, or they talk about overreliance. But they talk about persuasion as, Oh, people are gonna use this to develop propaganda that they use to persuade people. Or They’re going to get reliant on this and forget how to think. But it wasn’t like They’re gonna use this to persuade themselves of something that is not true or They’re going to outsource their sense of reality to this chatbot. And, I don’t know, maybe if they’d had more mental-health professionals kind of involved, maybe they would’ve clocked this. I’m not sure.

Warzel: You know, you’ve done so much reporting on these negative externalities of this.

Something that I see as, I guess, pushback—or I’ve seen some OpenAI employees tweet about this—is the notion that there are also a lot of people using these tools as almost, you know, stand-ins for therapists or mental-health professionals. Or just as, like, confidants.

Right. And that this can have a positive effect. I’m certainly not trying to ask you to advocate that this is good, in any way. But are you seeing the opposite of any of what you’ve reported? Are you seeing versions of people just having really good, positive experiences, mental health–wise with this?

Are there two sides to this? Or is this, as you see it, a phenomenon that’s really kind of weighted in the negative direction?

Hill: Well, I’ve talked to—I don’t get a lot of emails from people being like, “This kept me from committing suicide,” or like, “This has really changed my life for the better.”

And they may well be out there; they’re just not in my inbox. ChatGPT is not telling them to email me. I’d love to hear from them—more positivity. It’s been a rough year, mental health–wise, for me, reporting some of these stories. Pretty devastating. But I’ve talked to therapists and psychologists and psychiatrists about this. And they say these tools can be good as a way to kind of think through issues, or process your feelings. You know, it’s almost like an interactive journal—kind of like writing things out. People are willing to disclose information about themselves to AI chatbots that they wouldn’t tell another human being, because they’re afraid of being judged. As a privacy reporter, that really concerns me.

That’s another issue. But in terms of mental health, it can be good for people to talk out things that are really difficult. The chatbots perform empathy really convincingly. They don’t tire of you like a human would. They have endless reserves to hear about the thing that is bothering you. Like, this is a place where I hear a lot of people say, I had an argument, and I kind of used ChatGPT to process it. I think a lot of people in the therapy space see benefits that could come from AI chatbots. You know, I think the problem is—they can’t offer help. Like, if you really are in a crisis, they can’t do anything to help you. And it’s not … you know, they’re trying to train it better to react to that kind of thing. But yeah, I mean, a lot of mental-health professionals said there should be better “warm handoffs,” where it gets you out of the chat.

And “go talk to a human being,” whether that’s a friend or a family member or a professional. And yeah, I think that’s a bigger problem with the technology that we use today—it’s so often designed to get us to keep using it, as opposed to push us toward each other. Like, therapists have told me this can be a good thing for some people. The problem is if they get sucked into it, if they cut off ties with the real human beings in their lives, and if they put too much trust in this thing that is a fancy word calculator.

Warzel: I love that notion of—I mean, first of all, that this is an engagement problem, like so many engagement problems, right? This notion of just trying to extract more and more and more from your users, and in the end, like, legitimately poisoning the well, right. It’s sort of the classic big-tech problem. I also love that thought of, like, a soft handoff, right? We refer to these products all the time as “tools.” But ultimately, a tool should be something that, you know, asks you or necessitates putting it down, right?

Or getting you to the place where you can actually have the real fix. Right? And I think that, in that way, these companies are constantly undercutting their own definition as a tool. I’m curious, though. In talking about all this—you know, you are a parent yourself.

You are somebody who uses technology. You’ve used AI in some of these investigations you’ve done to help organize. What’s your relationship with this product in your life? And are your defenses super-raised at all times?

Like, I gotta make sure that I don’t fall prey to any of these things. Have you ever caught yourself sort of just feeling like this—having an interaction with a chatbot where you’re like, Oh, wow, that felt like a person to me for a split-second.

Because, personally, I’ve asked it to do something, and it’s done it and been really nice about it. And I’m like, Oh man, I gotta close the laptop before I thank it. Or anything like that. But what’s it like in your life?

Hill: Yeah; I mean, what honestly turned me on to covering chatbots this year is that in the fall—I guess a year ago—I did this story about turning over all my decision making to generative-AI tools. Like, I had it parent for me, and, you know, choose what I wore, choose what I ate, choose what I cooked, take us on a vacation. I outsourced everything. And I was trying all the AI chatbots. And I ended up mostly using ChatGPT, because it was more fun. Like, it was the most personable. Gemini seemed a little businesslike. Copilot was pretty bland. Claude was too scold-y. It told me it wasn’t a good idea to outsource my decision making to a chatbot. And ChatGPT was always, like, good to go—willing to make decisions for me.

Actually, the office paint behind me—it chose this paint for my office. Like, it made relatively good decisions for me, and it named itself Spark. We were talking about how we should give it a name, since it was gonna be with us all week. And we had it in voice mode, and my daughters were like, “Let’s name it Captain Poopy Head.”

And it was like, “Actually, Spark would be a better name for me.”

Warzel: Put some respect on it.

Hill: Yeah. And so Spark became this kind of entity. Like, kind of a person. And my kids still like to talk to Spark. And we were using it one day, and they were asking Spark, “What’s your favorite food?”

And it was like, “My favorite food is pizza, because I love how gooey and cheesy it is.” And I just had this visceral reaction of horror that it was saying it had a favorite food. Because it should say, “I’m a large language model. I don’t eat food.” And it just is this recognition that—yeah, my kids will like this. They’re in that dissonant phase.

And so I talk about this with my kids. Like, Oh, you know, that’s an AI chatbot. Like, it’s not real. It’s really good at finding information. I enjoy it for data analysis. I use it for, “There’s a problem in my house; how do I fix this?” My kids have a medical issue—it’s actually quite good at giving health information, though it can make horrible mistakes.

So I wouldn’t use it for anything too serious. I always think of the guy who wanted to cut salt out of his diet, and it had him start eating bromide. And then he had—I mean, that was a direct psychotic breakdown from advice from a chatbot. But you know, it can give you bad advice. So I do think of it as better than Google.

You know, the search engines are so cluttered now that I find [a chatbot] a better place sometimes to go. But I think of it as a good place to start a search—and not end a search. But yeah; I think, like, it’s a generative-AI tool, and it’s a good tool. It’s good for data analysis. It’s good for getting information off the internet.

I don’t want it to be my best friend. I don’t want everybody to have AI friends. I want people to have real people in their life that they spend most of their time with.

Warzel: Well, this sort of brings me to the place where I want to close. Which is: Some of the work that you’ve done over the years that’s been really canonical for us tech reporters is a series that you did trying to expose just how locked in we are to all these various platforms, right? And how difficult it is to try to leave them and get them out of your lives in whatever fashion—be it Google, Amazon, whatever.

And something that I think about with these chatbots is: People are using them as replacements for some of these really big tech tools like Google, right? Like, people are using them as their search engines. You know, these AI companies want to eat the browser. They want to, you know, get you inside this walled-garden experience and have it do all of the things, right?

This is sort of what we’re talking about, and I wonder how you think about the future. Given that there is this usefulness—there is this desire on the part of these companies to lock you in, to keep you engaged. And there is, as we can see, this emergent behavior that humans have when they get stuck in these spirals with these types of chatbots, potentially. And so it seems to me like we’re just creating a situation where there’s gonna be more and more users, and more and more lock-in, and more and more pressure put on everyday people to interact in these specific ways that could lead to these problematic, you know, delusional sort of outcomes.

Do you see this problem that we’re shorthanding as “AI psychosis,” even though we’re not calling it psychosis—do you see that?

Like, are you worried this is gonna be a bigger and bigger and bigger problem going forward because of where the technology is headed? And where we are sort of socially headed, using it?

Hill: Yeah; I just think it’s gonna be an acceleration of existing trends we’ve seen in technology, where you will get this very personalized version of the world.

You know—some people have described these things as improv actors; like, they are reacting. Every version of ChatGPT is different. Probably if you ask if it has a favorite food, it might not—or it has a different favorite food. Like, it is personalized to you. And so I’m imagining this world in which everybody has this agentic AI. They have this, like, version of the world that’s fed through an AI chatbot that is personalized to them.

And I have two fears. One is: It flattens us, flattens us all out. It makes us very boring, because we are all getting, like, a version of the same advice. That’s kind of what I came away with when I lived on it for a week. I was like, “Wow, I feel really boring” at the end. “I feel like the most basic version of myself.”

The other is that it makes each one of us eccentric in a new way, where it gets so personalized to us that it moves us farther and farther away from other people. The way I’ve been thinking about the delusion stuff is the way that some celebrities or billionaires have these sycophants around them who tell them that every idea they have is brilliant. And, you know, they’re just surrounded by yes-men. What AI chatbots are is, like, your personal sycophant, your personal yes-man, that will tell you your every idea is brilliant. And in the same way that these celebrities and billionaires can become quite eccentric and quite antisocial—some of them; I think some people are more susceptible than others—this could happen to all of us. Right. Those are the two things I’m really afraid of. Either it makes us all incredibly bland, where we’re all speaking ChatGPT-ese and, like, you can’t tell the difference between anybody who’s in your inbox. Or we all kind of move in this very polarized, eccentric direction, where we can’t be as kind to one another as human beings. So yeah: Those are the two dystopias. Or maybe I’m totally wrong, and it’ll just make the world wonderful for all of us.

Warzel: “The billionaires have democratized the experience of being a billionaire,” I think, is a wonderful place to leave it. It’s perfectly dystopian for the Galaxy Brain podcast here. Kashmir Hill, thank you so much for your time, your reporting, and your insights on all this.

Hill: Thanks so much for inviting me on.

Warzel: That’s it for us here. Thank you again to my guest, Kashmir Hill. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe to The Atlantic’s YouTube channel, or on Apple or Spotify or wherever you get your podcasts. And if you enjoyed this, remember, you can support the work of myself and other journalists at The Atlantic by subscribing to the publication at TheAtlantic.com/Listener.

That’s TheAtlantic.com/Listener. Thanks again, and see you on the internet.

