- OpenAI said that it has long-term plans to verify the ages of its users.
- By the end of the month, OpenAI will also roll out parental controls for ChatGPT.
- OpenAI CEO Sam Altman said that teen safety will trump concerns over privacy or freedom.
OpenAI CEO Sam Altman says minors “need significant protection” when using AI, which is why the company is building an age-detection system.
“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Altman said in a statement accompanying the announcement, adding that the company is “building an age-prediction system to estimate age based on how people use ChatGPT.”
OpenAI said that when ChatGPT detects a user is under 18, it will automatically direct them to a version of the chatbot with “age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety.”
“If there is doubt, we’ll play it safe and default to the under-18 experience,” he wrote.
Altman said that the safeguards will come with some tradeoffs, namely that “in some cases or countries,” OpenAI may ask users to provide ID to prove their age.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he wrote.
While the age-verification system is a long-term goal, OpenAI said it will roll out parental controls by the end of the month. Those controls would allow parents to link their teens’ accounts to their own, “help guide how ChatGPT responds to their teens,” and set blackout hours when the chatbot would be inaccessible. (OpenAI’s terms of service require users to be at least 13.)
Parents would also be able to sign up for a notification “when the system detects their teen is in a moment of acute distress.” In some instances, if OpenAI could not reach a parent, the company said it “may involve law enforcement as a next step.”
The announcement comes amid concerns that ChatGPT may have contributed to deaths by suicide. Last month, the parents of 16-year-old Adam Raine sued OpenAI and Sam Altman, stating that ChatGPT had “actively helped” their son explore suicide methods before his death. An OpenAI spokesperson previously told Business Insider that it was saddened by Raine’s death and that ChatGPT includes safeguards like directing users to crisis helplines.
In a post last month, OpenAI acknowledged that sometimes its safeguards “can fall short.” The statement, which did not acknowledge the suit, also said OpenAI was exploring parental controls and safeguards that “recognize teens’ unique developmental needs.”
Congress is also monitoring the situation. Sen. Josh Hawley, a Republican from Missouri, launched an investigation into Meta after a report that its AI chatbot was allowed to engage in “sensual” conversations with children. Meta later said it made changes to provide teens with more “age-appropriate AI experiences.”
Altman also outlined cases where adults should be allowed more freedom than teens. He said ChatGPT's default behavior will “not lead to much flirtatious talk,” but adults should be able to request it. He also said that if an adult were writing “a fictional story that depicts a suicide, the model should help with that request.”
“‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” he wrote.
Read the original article on Business Insider