The company behind ChatGPT, OpenAI, launched new parental controls for teen accounts on Monday, a month after a California family filed a lawsuit alleging that its chatbot encouraged their son to take his life.
With the new controls, which began rolling out Monday, parents can set specific hours when their kids cannot use ChatGPT, turn off voice mode, which lets users talk with the chatbot aloud, and prevent teens’ chat history from being saved in ChatGPT’s memory. Another option prevents teen data from being used to train OpenAI’s models.
“Parental controls allow parents to link their account with their teen’s account and customize settings for a safe, age-appropriate experience,” OpenAI said in a blog post announcing the feature.
To use the controls, parents must connect their account to their child’s account.
Under these new controls, parents of dependent accounts will receive a notification if ChatGPT recognizes potential signs that their teens may harm themselves. A specialist team will review signs of trouble and contact parents by email, text message or mobile push alerts.
“No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent,” OpenAI said in a blog post.
OpenAI said it is also working on ways to contact law enforcement agencies directly if it cannot reach a parent in the event of an imminent threat to life. Parents will need a ChatGPT account of their own to access the new controls.
To set it up, a parent sends an invite from their account settings, which the teen must accept; the parent can then manage the teen’s ChatGPT experience from their own account. Teens can also invite a parent to connect.
Once the accounts are linked, the teen account automatically gets additional content protections, including reduced exposure to graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep the experience age-appropriate.
These changes come amid heightened regulatory and public scrutiny of teen use of chatbots.
Last year, a Florida mother alleged in a federal lawsuit that another chatbot, Character.AI, was responsible for her 14-year-old son’s suicide.
She accused the company of failing to notify her or offer help when her son expressed suicidal thoughts to virtual characters. Character.AI is a roleplay chatbot platform where people can create and interact with digital characters that mimic real and fictional people. More families have sued Character.AI this year.
In August, the parents of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT provided him with information about suicide methods, including the one he used to kill himself. Adam used a paid version of ChatGPT powered by the GPT-4o model, which encouraged him to seek professional help when he expressed thoughts of self-harm, but he was able to bypass those safety measures by saying the details were for a story he was writing.
In September, OpenAI Chief Executive Sam Altman wrote that the company prioritizes “safety ahead of privacy and freedom for teens.”
While OpenAI’s rules prohibit users under 13 from using its services, the company announced Monday that it’s also building an “age prediction system” that will predict whether a user is under 18 and automatically apply teen-appropriate settings.
California lawmakers have passed two AI chatbot safety bills that the tech industry lobbied against.
Gov. Gavin Newsom has until mid-October to approve or reject the bills, Assembly Bill 1064 and Senate Bill 243.
Advocacy groups noted that while OpenAI’s most recent changes were a step in the right direction, laws are needed for real accountability.
“They reflect a broader pattern: companies making rushed public commitments only after harm has occurred,” said Adam Billen, vice president of public policy at Encode AI, a youth-led activist coalition advocating for AI safety. “We don’t need more empty promises; we need accountability that is enshrined into law, with bills like AB 1064.”
Times staff writer Queenie Wong contributed to this report.
The post ChatGPT launches new parental controls for teens amid growing safety concerns appeared first on Los Angeles Times.