OpenAI on Monday introduced parental controls for its artificial intelligence chatbot, ChatGPT, as teens increasingly turn to the platform for help with their schoolwork, daily life and mental health.
The new features came after a wrongful-death lawsuit was filed against OpenAI by the parents of Adam Raine, a 16-year-old who died in April in California. ChatGPT had supplied Adam with information about suicide methods in the final months of his life, according to his parents.
ChatGPT’s parental controls, announced in early September, were developed by OpenAI with Common Sense Media, a nonprofit advocacy group that provides age-based ratings of entertainment and technology for parents.
Here’s what to know about the new features.
Parents can oversee their teens’ accounts.
To set controls, a parent must invite their teen to link the teen’s ChatGPT account to the parent’s account, according to a new resource page.
Parents will then gain some controls over the child’s account, such as the option to reduce sensitive content.
Parents can set specific times when ChatGPT can be used. The bot’s voice mode, memory saving and image generation features can be turned on and off.
There is also an option to prevent ChatGPT from using its conversations with teens to improve its models.
Parents will be notified of potential self-harm.
In a statement on Monday, OpenAI said that parents would be notified by email, text message or push alert if ChatGPT recognizes “potential signs that a teen might be thinking about harming themselves,” unless the parent has opted out of such notifications. Parents would receive a warning of a safety risk without specific information about their child’s conversations.
ChatGPT has been trained to encourage general users to contact a help line if it detects signs of mental distress or self-harm. When it detects such signs in a teen, a “small team of specially trained people reviews the situation,” OpenAI said in the statement. The statement did not specify who those people were.
OpenAI added that it was working on a process to reach law enforcement and emergency services if ChatGPT detects a threat but cannot reach a parent.
“No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent,” the statement said.
Teens can bypass the controls.
OpenAI said on Monday that it was still developing an age prediction system to help ChatGPT automatically apply “teen-appropriate settings” if it thinks a user is under 18.
With the new features, a parent will be notified if a teen disconnects their account from a parent’s account. But that won’t stop a teen from using the basic version of ChatGPT without an account.
Adam Raine, the California teen who died in April, had learned to bypass ChatGPT’s safeguards by saying he would use the information to write a story.
“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” OpenAI said.
In a statement released with OpenAI on Monday, Robbie Torney, senior director for AI programs at Common Sense Media, said the parental controls would “work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online.”
(The New York Times sued OpenAI and Microsoft in 2023 for copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide & Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources. If you are someone living with loss, the American Foundation for Suicide Prevention offers grief support.
Francesca Regalado is a Times reporter covering breaking news.
The post What We Know About ChatGPT’s New Parental Controls appeared first on New York Times.