The number of kids getting hurt by AI-powered chatbots is hard to know, but it’s not zero. Yet for nearly three years, ChatGPT has been freely accessible to users of all ages without any guardrails. That sort of changed on Monday, when OpenAI introduced a suite of parental controls, some of which are designed to prevent teen suicides — like that of Adam Raine, a 16-year-old Californian who died by suicide after talking to ChatGPT at length about how to do it. Then, on Tuesday, OpenAI launched a social network with a new app called Sora that looks a lot like TikTok, except it’s powered by “hyperreal” AI-generated videos.
It was surely no accident that OpenAI announced these parental controls alongside an ambitious move to compete with Instagram and YouTube. In a sense, the company was releasing a new app designed to get people even more hooked on AI-generated content but softening the blow by giving parents slightly more control. The new settings apply primarily to ChatGPT, although parents have the option to impose limits on what their kids see in Sora.
And the new ChatGPT controls aren’t exactly straightforward. Among other things, parents can now connect their children’s accounts to theirs and add protections against sensitive content. If at any point OpenAI’s tools determine there’s a serious safety risk, a human moderator will review it and send a notification to the parents if necessary. Parents cannot, however, read transcripts of their child’s conversations with ChatGPT, and the teen can disconnect their account from their parents at any time (OpenAI says the parent will get a notification).
We don’t yet know how all this will play out in practice, and something is bound to be better than nothing. But is OpenAI doing everything it can to keep kids safe?
Several experts I spoke to said no. In fact, OpenAI is ignoring the biggest problem of all: chatbots that are programmed to act as companions, providing emotional support and advice to kids. Presumably, the new ChatGPT safety features could help avert future tragedies, but it’s unclear how OpenAI will be able to identify when AI companions take a dark turn with young users, as they tend to do.
“We’ve seen in a lot of cases for both teens and adults that falling into dependency on AI can be accidental,” Robbie Torney, Common Sense Media’s senior director of AI programs, told me. “A lot of people who have become dependent on AI didn’t set out to be dependent on AI. They started using AI for homework help or for work, and slowly slipped into using it for other purposes.”
Again, even adults have problems regulating themselves when AI chatbots offer a cheerful, sycophantic friend available to chat every hour of the day. You may have read recent reports of adults who developed increasingly intense relationships with AI chatbots before suffering psychotic breaks. This kind of synthetic relationship represents a new frontier for technology as well as the human brain.
It’s frightening to think what could happen to kids, whose prefrontal cortices have yet to fully develop, making them especially vulnerable. More than 70 percent of teens are using AI chatbots for companionship, which presents dangers to them that are “real, serious, and well documented,” according to a recent Common Sense Media survey. That’s why AI companion apps, like Character.ai, already have some restrictions by default for young users.
There’s also the broader problem that parental controls put the onus of protecting kids on parents, rather than on the tech companies themselves. It’s usually up to parents to dig into their settings and flip the switches. And then it’s still up to parents to keep track of how their kids are using these products, and in the case of ChatGPT, how dependent they’re getting on the chatbot. The situation is either confusing enough or laborious enough that most parents simply don’t use parental controls.
The real goal of the parental controls
It’s worth pointing out that OpenAI rolled out these controls and the new app as a major AI safety bill sat on California Gov. Gavin Newsom’s desk, awaiting his signature. Newsom signed the bill into law the same day as the parental control announcement. The OpenAI news was also on the heels of Senate hearings on the negative impacts of AI chatbots, during which parents urged lawmakers to impose stronger regulations on companies like OpenAI.
“The real goal of these parental tools, whether it’s ChatGPT or Instagram, is not actually to keep kids safe,” said Josh Golin, the executive director of Fairplay, a nonprofit children’s advocacy group. “It is to say that self-regulation is fine, please. You know, ‘Don’t regulate us, don’t pass any laws.’” Golin went on to describe OpenAI’s failure to do anything about the trend of children developing emotional relationships with ChatGPT as “disturbing.” (I reached out to OpenAI for comment but didn’t get a response.)
One way around tasking parents with managing all of these settings would be for OpenAI to have safety guardrails on by default. And the company says it’s working on something that does a version of that. In the future, it says, after a certain amount of input, ChatGPT will be able to determine the age of a user and add safety features. For now, kids can access ChatGPT by typing in their birthday — or making one up — whenever they create an account.
You can try to interpret OpenAI’s strategy here. Whether or not it’s trying to push back against regulation, parental controls introduce some friction for teens using ChatGPT. They’re a form of content moderation, one that also impacts teen users’ privacy. The company would also, presumably, like these teens to keep using ChatGPT and Sora when they become adults, so it doesn’t want to degrade the experience too much. Allowing teens to do more on these apps rather than less is good for business, to a point.
This all leaves parents in a difficult situation. They need to know their kid is using ChatGPT, for starters, and then figure out which settings will keep their kids safer without being so strict that the kid just creates a burner account pretending to be an adult. There’s seemingly no way to stop kids from developing an emotional attachment to these chatbots, so parents will just have to talk to their kids and hope for the best. Then there’s whatever awaits with the Sora app, which looks designed to churn out high-quality AI slop and get kids addicted to yet another endless feed.
“There isn’t a parental control that’s going to make something completely safe,” said Leslie Tyler, director of parent safety at Pinwheel, a company that makes parental control software. “Parents can’t outsource it. Parents still have to be involved.”
In a way, this moment represents a second chance for the tech industry and for policymakers. Two decades of unregulated social media apps have cooked all of our brains, and there’s growing evidence that they contributed to a mental health crisis in young people. Companies like Meta and TikTok knew their products were harming kids and did nothing about it for years. Meta now has Teen Accounts for Instagram, but recent research suggests the safety features just don’t work.
Whether it’s too little, too late, or both, OpenAI is taking its turn at keeping kids safe. Again, doing something is better than nothing.