OpenAI Plans to Add Safeguards to ChatGPT for Teens and Others in Distress

September 2, 2025

ChatGPT is smart, humanlike and available 24/7. That has attracted 700 million users, some of whom are leaning on it for emotional support.

But the artificially intelligent chatbot is not a therapist — it’s a very sophisticated word prediction machine, powered by math — and there have been disturbing cases in which it has been linked to delusional thinking and violent outcomes. Last week, Matt and Maria Raine of California sued OpenAI, the company behind ChatGPT, after their 16-year-old son ended his life following months in which he discussed his plans with the chatbot.

On Tuesday, OpenAI said it planned to introduce new features intended to make its chatbot safer, including parental controls, “within the next month.” Parents, according to an OpenAI post, will be able to “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress.”

This is a feature that OpenAI’s developer community has been requesting for more than a year.

Other companies that make A.I. chatbots, including Google and Meta, have parental controls. What OpenAI described sounds more granular, similar to the parental controls that Character.AI, a company with role-playing chatbots, introduced after it was sued by a Florida mother, Megan Garcia, over her son’s suicide.

On Character.AI, teens must send an invitation to a guardian to monitor their accounts; Aditya Nag, who leads the company’s safety efforts, told The New York Times in April that use of the parental controls was not widespread.

Robbie Torney, a director of A.I. programs at Common Sense Media, a nonprofit that advocates safe media for children, said parental controls were “hard to set up, put the onus back on parents, and are very easy for teens to bypass.”

“This is not really the solution that is going to keep kids safe with A.I. in the long term,” Mr. Torney said by email. “It’s more like a Band-Aid.”

For teenagers and adults indicating signs of acute distress, OpenAI also said it would “soon begin” to route those inquiries to what it considers a safer version of its chatbot — a reasoning model called GPT-5 thinking. Unlike the default model, GPT-5, the thinking version takes longer to produce a response and is trained to align better with the company’s safety policies. It will, the company said in a different post last week, “de-escalate by grounding the person in reality.” A spokeswoman said this would happen “when users are exhibiting signs of mental or emotional distress, such as self-harm, suicide, and psychosis.”

In the post last week, OpenAI said it planned to make reaching emergency services and getting help easier for distressed users. Human reviewers already examine conversations that suggest someone plans to harm others and may refer them to law enforcement.

Jared Moore, a Stanford researcher who has studied how ChatGPT responds to mental health crises, said OpenAI had not provided enough details about how these interventions will work.

“I have a lot of technical questions,” he said. “The trouble with this whole approach is that it is all vague promises with no means of evaluation.”

The easiest thing to do in the case of a disturbing conversation, Mr. Moore said, would be to just end it.

Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.

The post OpenAI Plans to Add Safeguards to ChatGPT for Teens and Others in Distress appeared first on New York Times.
