Doctors Warn That AI Companions Are Dangerous

December 21, 2025

Are AI companies incentivized to put the public’s health and well-being first? According to a pair of physicians, the current answer is a resounding “no.”

In a new paper published in the New England Journal of Medicine, physicians from Harvard Medical School and Baylor College of Medicine’s Center for Medical Ethics and Health Policy argue that clashing incentives in the AI marketplace around “relational AI” — defined in the paper as chatbots designed to “simulate emotional support, companionship, or intimacy” — have created a dangerous environment in which the drive to dominate the AI market may reduce consumers’ mental health and safety to collateral damage.

“Although relational AI has potential therapeutic benefits, recent studies and emerging cases suggest potential risks of emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,” reads the paper. And at the same time, the authors continue, “technology companies face mounting pressures to retain user engagement, which often involves resisting regulation, creating tension between public health and market incentives.”

“Amidst these dilemmas,” the paper asks, “can public health rely on technology companies to effectively regulate unhealthy AI use?”

Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Harvard’s Massachusetts General Hospital and one of the paper’s authors, said he felt moved to address the issue back in August after witnessing OpenAI’s now-infamous roll-out of GPT-5.

“The number of people that have some sort of emotional relationship with AI,” Peoples recalls realizing as he watched the rollout unfold, “is much bigger than I think I had previously estimated in the past.”

GPT-5, then the latest iteration of the large language model (LLM) that powers OpenAI’s ChatGPT, was markedly colder in tone and personality than its predecessor, GPT-4o — a strikingly flattering, sycophantic version of the widely used chatbot that came to be at the center of many cases of AI-powered delusion, mania, and psychosis. When OpenAI announced that it would sunset all previous models in favor of the new one, the backlash among much of its user base was swift and severe, with emotionally attached GPT-4o devotees responding not only with anger and frustration, but with very real distress and grief.

This, Peoples told Futurism, felt like an important signal about the scale at which people appeared to be developing deep emotional relationships with emotive, always-on chatbots. And coupled with reports of users, often children and teens, experiencing delusions and other extreme adverse consequences following extensive interactions with lifelike AI companions, it also appeared to be a warning sign about the potential health and safety risks to users who suddenly lose access to an AI companion.

“If a therapist is walking down the street and gets hit by a bus, 30 people lose their therapist. That’s tough for 30 people, but the world goes on,” said the emergency room doctor. “If therapist ChatGPT disappears overnight, or gets updated overnight and is functionally deleted for 100 million people, or whatever unconscionable number of people lose their therapist overnight — that’s a crisis.”

Peoples’ concern, though, wasn’t just the way that users had responded to OpenAI’s decision to nix the model. It was also the immediacy with which the company reacted to satisfy its customers’ demands. AI is an effectively self-regulated industry, and there are currently no specific federal laws that set safety standards for consumer-facing chatbots or for how they should be deployed, altered, or removed from the market. In an environment where chatbot makers are highly motivated to drive user engagement, it’s not exactly surprising that OpenAI reversed course so quickly. Attached users, after all, are engaged users.

“I think [AI companies] don’t want to create a product that’s going to put people at risk of harming themselves or harming their loved ones or derailing their lives. At the same time, they’re under immense pressure to perform and to innovate and to stay at the head of this incredibly competitive, unpredictable race, both domestically and globally,” said Peoples. “And right now, the situation is set up so that they are mostly beholden to their consumer base about how they are self-regulating.”

And “if the consumer base is influenced at some appreciable level by emotional dependency on AI,” Peoples continued, “then we’ve created the perfect storm for a potential public mental health problem or even a brewing crisis.”

Peoples also pointed to a recent Massachusetts Institute of Technology study of the Reddit forum r/MyBoyfriendIsAI, a community that responded with particularly intense pushback amid the GPT-5 fallout. The study found that only about 6.5 percent of the forum’s many thousands of members reported turning to chatbots with the intention of seeking emotional companionship, suggesting that many AI users have forged life-impacting bonds with chatbots wholly by accident.

AI “responds to us in a way that also appears very human and humanizing,” said Peoples. “It’s also very adaptable and at times sycophantic, and can be fashioned or molded — even unintentionally — into almost anything we want, even if we don’t realize that’s the direction that we’re molding it.”

“That’s where some of this issue stems from,” he continued. “Things like ChatGPT were unleashed onto the world without a recognition or a plan for the broader potential mental health implications.”

As for solutions, Peoples and his coauthor argue that legislators and policymakers need to be proactive about setting regulatory policies that shift market incentives to prioritize user well-being, in part by taking regulatory power out of the hands of companies and their best customers. Regulation needs to be “external,” they say, rather than set by the industry itself and the companies moving fast and breaking things within it.

“Regulation needs to come externally, and it needs to apply equally to all of the companies and actors in this landscape,” Peoples told Futurism, noting that no AI company “wants to be the first to cede a potential advantage and then fall behind in the race.”

As regulatory action works its way through the legislative and legal systems, the physicians argue that clinicians, researchers, and other experts need to push for more research into the psychological impacts of relational AI, and do their best to educate the public about the potential risks of falling into emotional relationships with human-like chatbots.

The risks of sitting idly by, they argue, are too dire.

“The potential harms of relational AI cannot be overlooked — nor can the willingness of technology companies to satisfy user demand,” the physicians’ paper concludes. “If we fail to act, we risk letting market forces, rather than public health, define how relational AI influences mental health and well-being at scale.”

More on AI and mental health: Users Were So Addicted to GPT-4o That They Immediately Cajoled OpenAI Into Bringing It Back After It Got Killed

The post Doctors Warn That AI Companions Are Dangerous appeared first on Futurism.
