Character.AI, one of the leading AI chatbot platforms, recently announced it was banning anyone under 18 from having conversations with its chatbots. The decision represents a “bold step forward” for the industry in protecting teenagers and other young people, Character.AI CEO Karandeep Anand said in a statement.
However, for Texas mother Mandi Furniss, the policy comes too late. In a lawsuit filed in federal court and in a conversation with ABC News, the mother of four said various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warping his behavior so severely that his mood darkened, he began cutting himself and he even threatened to kill his parents.
“When I saw the [chatbot] conversations, my first reaction was there’s a pedophile that’s come after my son,” she told ABC News’ chief investigative correspondent Aaron Katersky.
Character.AI said it would not comment on pending litigation.
Mandi and her husband, Josh Furniss, said that in 2023, they began to notice that their son, whom they described as “happy-go-lucky” and “smiling all the time,” was starting to isolate himself.
He stopped attending family dinners, wouldn’t eat, lost 20 pounds and wouldn’t leave the house, the couple said. Then he turned angry and, in one incident, his mother said, he shoved her violently when she threatened to take away the phone his parents had given him six months earlier.
Eventually, they say they discovered he had been interacting on his phone with different AI chatbots that appeared to be offering him refuge for his thoughts.
Screenshots from the lawsuit showed some of the conversations were sexual in nature, while another suggested to their son that, after his parents limited his screen time, he was justified in hurting them. That’s when the parents started locking their doors at night.
Mandi said she was “angry” that the app “would intentionally manipulate a child to turn them against their parents.” Her attorney, Matthew Bergman, said that if the chatbot were a real person behaving “in the manner that you see, that person would be in jail.”
Her concern reflects growing alarm about a rapidly spreading technology that is used by more than 70% of teenagers in the U.S., according to Common Sense Media, an organization that advocates for safety in digital media.
A growing number of lawsuits filed over the last two years have focused on harm to minors, alleging that chatbots have unlawfully encouraged self-harm, sexual and psychological abuse, and violent behavior.
Last week, two U.S. senators announced bipartisan legislation that would bar minors from using AI chatbots by requiring companies to implement an age verification process and to disclose that the conversations involve nonhumans that lack professional credentials.
In a statement last week, Sen. Richard Blumenthal, D-Conn., called the chatbot industry a “race to the bottom.”
“AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” he said. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”
ChatGPT, Google Gemini, Grok by X and Meta AI all allow minors to use their services, according to their terms of service.
Online safety advocates say the decision by Character.AI to put up guardrails is commendable, but add that chatbots remain a danger for children and vulnerable populations.
“This is basically your child or teen having an emotionally intense, potentially deeply romantic or sexual relationship with an entity … that has no responsibility for where that relationship goes,” said Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California, Berkeley.
Parents, Halpern warns, should be aware that letting their children interact with chatbots is not unlike “letting your kid get in the car with somebody you don’t know.”
ABC News’ Katilyn Morris and Tonya Simpson contributed to this report.