Character.AI, a platform for creating and chatting with artificial intelligence chatbots, plans to start blocking minors from having “open-ended” conversations with its virtual characters.
The major change comes as the Menlo Park, Calif., company and other AI leaders face more scrutiny from parents, child safety groups and politicians about whether chatbots are harming the mental health of teens.
Character.AI said in a blog post Wednesday that it is working on a new experience that will allow teens under 18 to create videos, stories and streams with characters. During the transition, the company will limit chats for minors to two hours per day, a cap that will “ramp down” before the change takes full effect Nov. 25.
“We do not take this step of removing open-ended Character chat lightly — but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company said in a statement.
The decision shows how technology companies are responding to mental health concerns as more parents sue the platforms following the deaths of their children.
Politicians are also putting more pressure on tech companies, passing new laws aimed at making chatbots safer.
OpenAI, the maker of ChatGPT, announced new safety features after a California couple alleged in a lawsuit that its chatbot provided information about suicide methods, including the one their teen, Adam Raine, used to kill himself.
Last year, several parents sued Character.AI over allegations that the chatbots caused their children to harm themselves and others. The lawsuits accused the company of releasing the platform before making sure it was safe to use.
Character.AI said it takes teen safety seriously and outlined steps it took to moderate inappropriate content. The company’s rules prohibit the promotion, glorification and encouragement of suicide, self-harm and eating disorders.
Following the deaths of their teens, parents have urged lawmakers to do more to protect young people as chatbots grow in popularity. While teens are using chatbots for schoolwork, entertainment and more, some are also conversing with virtual characters for companionship or advice.
Character.AI has more than 20 million monthly active users and hosts more than 10 million characters on its platform. Some of the characters are fictional, while others are based on real people.
Megan Garcia, a Florida mom who sued Character.AI last year, alleges the company failed to notify her or offer help to her son, who expressed suicidal thoughts to chatbots on the app.
Her son, Sewell Setzer III, died by suicide after chatting with a chatbot named after Daenerys Targaryen, a character from the fantasy television and book series “Game of Thrones.”
Garcia then testified in support of legislation this year that requires chatbot operators to maintain procedures for preventing the production of suicide or self-harm content and to put guardrails in place, such as referring users to a suicide hotline or crisis text line.
California Gov. Gavin Newsom signed that legislation, Senate Bill 243, into law but faced pushback from the tech industry. Newsom vetoed a more controversial bill that he said could unintentionally result in the ban of AI tools used by minors.
“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in the veto message.
Character.AI said in its blog post that it decided to bar minors from conversing with its AI chatbots after getting feedback from regulators, parents and safety experts. The company is also rolling out age-assurance tools to ensure users have an experience appropriate for their age, and it is funding a new nonprofit dedicated to AI safety.
In June, Character.AI also named Karandeep Anand, who previously worked as an executive at Meta and Microsoft, as its new chief executive.
“We want to set a precedent that prioritizes teen safety while still offering young users opportunities to discover, play and create,” the company said.