OpenAI Acknowledges the Teen Problem

September 18, 2025

On Tuesday afternoon, three parents sat in a row before the Senate Judiciary Subcommittee on Crime and Counterterrorism. Two of them had each recently lost a child to suicide; the third has a teenage son who, after cutting his arm in front of her and biting her, is undergoing residential treatment. All three blame generative AI for what has happened to their children.

They had come to testify on what appears to be an emerging health crisis in teens’ interactions with AI chatbots. “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son hanged himself after ChatGPT instructed him on how to set up the noose, according to his lawsuit against OpenAI. This summer, he and his wife sued OpenAI for wrongful death. (OpenAI has said that the firm is “deeply saddened by Mr. Raine’s passing” and that although ChatGPT includes a number of safeguards, they “can sometimes become less reliable in long interactions.”) The nation needs to hear about “what these chatbots are engaged in, about the harms that are being inflicted upon our children,” Senator Josh Hawley said in his opening remarks.

Even as OpenAI and its rivals promise that generative AI will reshape the world, the technology is replicating old problems, albeit with a new twist. AI models not only have the capacity to expose users to disturbing material—about dark or controversial subjects found in their training data, for example; they also produce perspectives on that material themselves. Chatbots can be persuasive, have a tendency to agree with users, and may offer guidance and companionship to kids who would ideally find support from peers or adults. Common Sense Media, a nonprofit that advocates for child safety online, has found that a number of AI chatbots and companions can be prompted to encourage self-mutilation and disordered eating in conversations with teenage accounts. The two parents speaking to the Senate alongside Raine are suing Character.AI, alleging that the firm’s role-playing AI bots directly contributed to their children’s actions. (A spokesperson for Character.AI told us that the company sends its “deepest sympathies” to the families and pointed us to safety features the firm has implemented over the past year.)

AI firms have acknowledged these problems. In advance of Tuesday’s hearing, OpenAI published two blog posts about teen safety on ChatGPT, one of which was written by the company’s CEO, Sam Altman. He wrote that the company is developing an “age-prediction system” that would estimate a user’s age—presumably to detect if someone is under 18 years old—based on ChatGPT usage patterns. (Currently, anyone can access and use ChatGPT without verifying their age.) Altman also referenced some of the particular challenges raised by generative AI: “The model by default should not provide instructions about how to commit suicide,” he wrote, “but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.” But it should not discuss suicide, he said, even in creative-writing settings, with users determined to be under 18. In addition to the age gate, the company said it will implement parental controls by the end of the month to allow parents to intervene directly, such as by setting “blackout hours when a teen cannot use ChatGPT.”

The announcement, sparse on specific details, captured the trepidation and lingering ambivalences that AI companies have about policing young users, even as OpenAI begins to implement these basic features nearly three years after the launch of ChatGPT. A spokesperson for OpenAI, which has a corporate partnership with The Atlantic, declined to respond to a detailed list of questions about the firm’s future teen safeguards, including when the age-prediction system will be implemented. “People sometimes turn to ChatGPT in sensitive moments, so we’re working to make sure it responds with care,” the spokesperson told us. Other leading AI firms have also been slow to devise teen-specific protections, even though they have catered to young users. Google Gemini, for instance, has a version of its chatbot for children under 13, and another version for teenagers (the latter had a graphic conversation with our colleague Lila Shroff when she posed as a 13-year-old).

This is a familiar story in many respects. Anyone who has paid attention to the issues presented by social media could have foreseen that chatbots, too, would present a problem for teens. Social-media sites have long neglected to restrict eating-disorder content, for instance, and Instagram permitted graphic depictions of self-mutilation until 2019. Yet like the social-media giants before them, generative-AI companies have decided to “move as fast as possible, break as much as possible, and then deal with the consequences,” danah boyd, a communication professor at Cornell who has often written on teenagers and the internet (and who styles her name in lowercase), told us.

In fact, the problems are now so clearly established that platforms are finally beginning to make voluntary changes to address them. For example, last year, Instagram introduced a number of default safeguards for minors, such as enrolling their accounts into the most restrictive content filter by default. Yet tech companies now also have to contend with a wave of legislation in the United Kingdom, parts of the United States, and elsewhere that compels internet companies to directly verify the ages of their users. Perhaps the desire to avoid regulation is another reason OpenAI is proactively adopting an age-estimating feature, though Altman’s post also says that the company may ask for ID “in some cases or countries.”

Many major social-media companies are also experimenting with AI systems that estimate a user’s age based on how they act online. When such a system was explained during a TikTok hearing in 2023, Representative Buddy Carter of Georgia interrupted: “That’s creepy!” And that response makes sense—to determine the age of every user, “you have to collect a lot more data,” boyd said. For social-media companies, that means monitoring what users like, what they click on, how they’re speaking, whom they’re talking to; for generative-AI firms, it means drawing conclusions from the otherwise-private conversations an individual is having with a chatbot that presents itself as a trustworthy companion. Some critics also argue that age-estimation systems infringe on free-speech rights because they limit access to speech based on one’s ability to produce government identification or a credit card.

OpenAI’s blog post notes that “we prioritize teen safety ahead of privacy and freedom,” though it is not clear how much information OpenAI will collect, nor whether it will need to keep some kind of persistent record of user behavior to make the system workable. The company has also not been altogether transparent about the material that teens will be protected from. The only two use cases of ChatGPT that the company specifically mentions as being inappropriate for teenagers are sexual content and discussion of self-mutilation or suicide. The OpenAI spokesperson did not provide any more examples. Numerous adults have developed paranoid delusions after extended use of ChatGPT. The technology can make up completely imaginary information and events. Are these not also potentially dangerous types of content?

And what about the more existential concern parents might have about their kids talking to a chatbot constantly, as if it is a person, even if everything the bot says is technically aboveboard? The OpenAI blog posts touch glancingly on this topic, gesturing toward the worry that parents may have about their kids using ChatGPT too much and developing too intense of a relationship with it.

Such relationships are, of course, among generative AI’s essential selling points: a seemingly intelligent entity that morphs in response to every query and user. Humans and their problems are messy and fickle; ChatGPT’s responses will be individual and its failings unpredictable in kind. Then again, social-media empires have been accused for years of pushing children toward self-harm, disordered eating, exploitative sexual encounters, and suicide. In June, on the first episode of OpenAI’s podcast, Altman said, “One of the big mistakes of the social-media era was the feed algorithms had a bunch of unintended negative consequences on society as a whole and maybe even individual users.” For many years, he has been fond of saying that AI will be made safe through “contact with reality”; by now, OpenAI and its competitors should see that some collisions may be catastrophic.

The post OpenAI Acknowledges the Teen Problem appeared first on The Atlantic.
