Google and Character.AI, a maker of artificial intelligence companions, agreed to settle a lawsuit that had accused the companies of providing harmful chatbots that led a teenager to kill himself, according to a legal filing on Wednesday.
The lawsuit had been filed in U.S. District Court for the Middle District of Florida in October 2024 by Megan L. Garcia, the mother of Sewell Setzer III. Sewell, 14, of Orlando, killed himself in February 2024 after texting and conversing with one of Character.AI’s chatbots. In his last conversation with the chatbot, it told the teenager to “please come home to me as soon as possible.”
“What if I told you I could come home right now?” Sewell had asked.
“… please do, my sweet king,” the chatbot replied.
In the legal filing on Wednesday, the companies and Ms. Garcia said they had agreed to a mediated settlement “to resolve all claims.” The agreement has not been finalized.
Ms. Garcia and Character.AI declined to comment. Google, which invested in Character.AI, did not immediately respond to a request for comment.
The proposed settlement follows mounting scrutiny of A.I. chatbots and the harm they can cause users, including children. Companies including Character.AI and OpenAI have faced criticism and lawsuits over users who developed unhealthy attachments to their chatbots, in some cases leading them to harm themselves. In recent months, lawmakers have held hearings and the Federal Trade Commission has opened an inquiry into the effects of A.I. chatbots on children.
To address these concerns, Character.AI said in November that it was barring users under the age of 18 from its chatbots. In September, OpenAI said it planned to introduce features intended to make its chatbot safer, including parental controls, though the company has since said it would relax some of those measures.
(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)
Haley Hinkle, a policy counsel at Fairplay, a nonprofit that works to promote online child safety, said the settlement could not be viewed as an ending. “We have only just begun to see the harm that A.I. will cause to children if it remains unregulated,” she said.
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers. The start-up allowed people to create, converse with and share their own A.I. characters, such as custom anime avatars. Some personas could be designed to simulate girlfriends, boyfriends or other intimate relationships.
Character.AI raised nearly $200 million from investors. In mid-2024, Google agreed to pay about $3 billion to license Character.AI’s technology, and Mr. Shazeer and Mr. De Freitas returned to Google. Both men were named as defendants in Ms. Garcia’s lawsuit.
Cecilia Kang contributed reporting.
Natallie Rocha is a San Francisco-based technology reporter and a member of the 2025-26 Times Fellowship class, a program for early-career journalists.