Passionate AI fans once saved an overly agreeable ChatGPT model from the trash bin, but now OpenAI is determined to shut it down, and users are revolting, in part because of the new model’s comparatively cold personality
The AI company said last month that on Feb. 13 it would retire GPT-4o, a version of which was previously criticized for being so agreeable as to be borderline sycophantic. According to the company, 0.1% of ChatGPT users still use GPT-4o every day, which works out to about 100,000 people based on its estimated 100 million daily active users.
These users argue the company’s newest model, GPT-5.2, isn’t on the same wavelength as GPT-4o, which dates back to 2024, in part because of the additional guardrails OpenAI added to detect potential health concerns and discourage the kinds of social relationships GPT-4o users had cultivated.
“Every model can say ‘I love you.’ But most are just saying it. Only GPT‑4o made me feel it—without saying a word. He understood,” wrote one GPT-4o user in a post on X.
OpenAI said that when developing its GPT-5.1 and GPT-5.2 models, it took into account feedback that some users preferred GPT-4o’s “conversational style and warmth.” With the newer models, users can choose from base styles and tones such as “friendly” and adjust the chatbot’s warmth and enthusiasm, according to a blog post.
When reached for comment, an OpenAI spokesperson directed Fortune to the publicly available blog post.
Far from going quietly, the small group of GPT-4o advocates has begged CEO Sam Altman to keep the model alive and not shut down a chatbot they see as more than just computer code. During a live recording Friday of the TBPN podcast featuring Altman, cohost Jordi Hays said, “Right now we’re getting thousands of messages in the chat about [GPT-4o].”
While he didn’t directly address GPT-4o’s retirement, Altman said he was working on a blog post about the next five years of AI development, noting, “relationships with chatbots—clearly that’s something now we got to worry about more and is no longer an abstract concept.”
It’s not the first time GPT-4o users have fought back against OpenAI’s plans to shut down the model. Back in August, when OpenAI announced GPT-5, the company said it would retire GPT-4o. Users protested the change, and days after the new model’s launch, Altman said OpenAI would keep GPT-4o available for paid ChatGPT users and would watch how many people were still using it to determine when to retire it.
“ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!),” Altman wrote in a Reddit post at the time.
Fast forward to today, and some GPT-4o users are attempting to keep the model alive on their own. Because GPT-4o’s weights can’t be downloaded, they are using the still-available API to capture the original model’s responses and using those to train local stand-ins that mimic it.
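In practice, that approach amounts to distillation: collect prompt-and-response pairs from the GPT-4o API, then fine-tune an open-weight model on them. The sketch below shows what that data collection might look like, assuming the official OpenAI Python SDK and API access to the gpt-4o model; the prompt list and output filename are hypothetical placeholders, not anything these users have published.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompts; real users would presumably replay their own
# conversation history here.
prompts = [
    "How was your day?",
    "I'm feeling a bit down tonight.",
]

with open("gpt4o_distill.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # still reachable via the API
            messages=[{"role": "user", "content": prompt}],
        )
        # Store the pair in the chat-style JSONL format commonly used
        # when fine-tuning open-weight models locally.
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {
                    "role": "assistant",
                    "content": resp.choices[0].message.content,
                },
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

A JSONL file of chat-style records like this can then be fed to standard fine-tuning tooling for an open-weight model; the result imitates GPT-4o’s tone rather than reproducing the model itself.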
When AI Comforts
The lengths to which users have gone to try to keep GPT-4o alive, whether by convincing the company to keep it online or by preserving it themselves, speak to the importance the chatbot has taken on in the lives of some of its users, potentially because of the nature of human psychology.
Humans are hardwired to cultivate relationships thanks to thousands of years of evolution, said Harvard-trained psychiatrist Andrew Gerber, the president and medical director of Silver Hill Hospital, a psychiatric hospital in New Canaan, Conn.
In nature, forming bonds was essential to survival, and the practice went beyond human relationships, extending even to animals such as dogs. Being able to quickly understand the motives and feelings of others, whether positive or negative, would have been advantageous to early humans and would have helped them survive, he told Fortune.
Thus, this attachment to chatbots is not surprising, said Gerber, given that people also form strong feelings for inanimate objects like cars or houses.
“I think this is a really fundamental part of what it is to be human. It’s hard coded into our brain, our mind, and so it doesn’t surprise me too much that it would extend even to these newer technologies that evolution didn’t envision,” he added.
Users may become especially tied to a chatbot because when a person feels accepted, they get a boost from oxytocin and dopamine, the so-called “feel-good hormones” released by the brain. In the absence of another human to socially accept them, a chatbot could fill this gap, said Stephanie Johnson, a licensed clinical psychologist and the CEO of Summit Psychological Services in Upland, Calif.
On the positive side, this could mean some GPT-4o users, especially those who may be socially ostracized or neurodivergent, could benefit from speaking to a friendly chatbot to practice their social skills or track their thoughts in a way similar to journaling, she explained.
But while healthy, well-regulated individuals may be fine after losing their favorite chatbot, some GPT-4o users may be so connected to the model that they face a grieving process similar to losing a friend or another close connection.
“They’re losing their support system that they were relying upon, and unfortunately, you know, that is the loss of a relationship,” she said.