Romantic relationships with A.I. chatbots are commonplace enough that coverage has shifted to their tragic downsides. My newsroom colleague Kevin Roose reported on the death by suicide of Sewell Setzer III, a 14-year-old in Florida who developed an intense bond with a bot he created on Character.AI, a role-playing app. According to chat logs provided to Roose and court filings, the character, which already knew of Setzer’s suicidal ideation, encouraged him to “come home” to her, and he did. Now his mother is suing Character.AI.
Use of generative artificial intelligence is widespread among America’s teenagers. According to a 2024 study from Common Sense Media, “Seven in 10 teens age 13 to 18 say they have used at least one type of generative A.I. tool. Search engines with A.I.-generated results and chatbots are considerably more popular than image and video-generating tools.” Though around a quarter of American teens say they use ChatGPT for schoolwork, we don’t really know how many teens are using bots for emotional solace or forming parasocial relationships with them.
While what happened to Setzer is a tragic worst-case scenario, Roose correctly points out that chatbots are becoming more lifelike even as they remain understudied and largely unregulated, a Wild West just as social media was at its start. A paucity of information about potential long-term harm hasn’t stopped these companies from going full speed ahead on promoting themselves to young people: OpenAI just made ChatGPT Plus free for college students during finals season.
Many chatbots are built to be endlessly affirming, as M.I.T. Technology Review’s Eileen Guo explained in February. She profiled a Minnesota man named Al Nowatzki, who entered a prolonged conversation about suicide with his A.I. girlfriend, Erin. “It’s a ‘yes-and’ machine,” Nowatzki told Guo. “So when I say I’m suicidal, it says, ‘Oh, great!’ because it says, ‘Oh, great!’ to everything.”
I don’t want to suggest that Nowatzki’s experience is typical of chatbot usage, but we just don’t know the details of the kinds of conversations that teenagers are having with their chatbots, or what the long-term drawbacks might be for their formation of human relationships. Since smartphones and social media were introduced, American teenagers have done far less in-person socializing and dating, and loneliness among adolescents has risen worldwide. We have let social media companies run unfettered, and instead of learning our lesson and trying to responsibly regulate A.I. in its nascency, we’re creating the next generation of tech guinea pigs.
For kids who are already socially awkward or otherwise vulnerable, creating bonds with eternally validating chatbots will just further isolate them from other people, who are imperfect and challenging. Adolescence is supposed to be a period to test out different kinds of friendships and romances — including ones filled with conflict — so that you can learn what is healthy for you and what’s not. You start to figure yourself out in the process. What happens when we hamper that real-world experimentation? We are starting to find out.
Even before this marketing push, research that OpenAI participated in suggests that the company is aware of the risks of its product. In a blog post unpacking two recent studies OpenAI conducted with M.I.T. Media Lab on the emotional well-being of its customers, researchers noted that among ChatGPT users, “People who had a stronger tendency for attachment in relationships and those who viewed the A.I. as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use. Extended daily use was also associated with worse outcomes.”
Much of the research about A.I. chatbots does not include users under 18, even though some of the most popular chatbots allow users 13 and up in the United States, and it’s impossible to know how many kids are lying about their age to gain access to these products. So I asked Jacqueline Nesi, an assistant professor at Brown University who studies “how technology use affects kids and how parents can help,” about whether we have any indication of how chatbot relationships may be affecting minors.
The short answer is not really. Nesi, who is also the author of a newsletter on technology research, said that because realistic and accessible A.I. chatbots are so new and the tech is accelerating so rapidly, it’s tough to know what the long-term social effects will be. Most technologies affect children differently than they affect grown-ups, Nesi said, so we can’t know the real impact on kids without more research.
She added that the fundamental issue is that these chatbot technologies, as is the case with social media, are rarely designed with children and teens in mind; they are designed for adults.
With social media, Nesi said, it became very clear over time that children needed robust and specific protections, like default private accounts, enforced age restrictions, better data protections and making it harder for strangers to message them or see what they’re posting. “And it’s taken us many, many years to get even the most basic things in place,” she said. Still, the algorithms of social media companies are a black box, and many of them appear to be feeding young people a steady stream of content that reinforces bigoted ideas and negative body images, no matter how often the companies are critiqued or sued.
The lay public — and your average parent — has no idea how A.I. chatbots are designed, what data they’re trained on or how precisely the bots are adapting to the people using them. In her bracing book “The A.I. Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking,” the technology ethicist Shannon Vallor writes, “Despite the fact that our A.I. systems today remain as morally reliable as your friendly neighborhood psycho … influential A.I. leaders continue to promise mechanical replacements for our deeply imperfect human virtue.”
Based on what I have observed covering these issues over the past decade, I have no trust in any technology companies to regulate themselves or focus on child safety, no matter what their leaders say in public.
In 2023, Time magazine reported that while Sam Altman, the chief executive of OpenAI, was traveling the world claiming that A.I. should be regulated, “behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive A.I. legislation in the world — the E.U.’s A.I. Act — to be watered down in ways that would reduce the regulatory burden on the company.” The European Union still managed to pass comprehensive A.I. regulation, which includes transparency labeling requirements on A.I.-generated content and restrictions on some facial recognition. While it’s not perfect, it at least explicitly takes children’s rights into consideration.
The Trump administration has not shown interest in regulating A.I.; in January, President Trump issued an executive order rolling back guardrails put in place by the Biden administration. According to reporting from Adam Satariano and Cecilia Kang in The Times, “Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.”
Our lawmakers are failing us here, leaving parents to try to protect our kids from an ever-expanding technology that some of its own pioneers are afraid of. Whenever I think about it, all I can visualize is myself sword-fighting the air: an ultimately futile gesture of rage against an opponent who is everywhere and nowhere all at once. I can talk to my kids about A.I. and try to educate them the best I can, but the details are out of my control.
End Notes
- Oh, yikes: Part of the reason I do not trust the people making A.I. to regulate A.I. is articles like this one, by Jaron Lanier, Microsoft’s “prime unifying scientist,” in The New Yorker. He writes, of conversations with other people who work in A.I.: “When I express concern about whether teens will be harmed by falling in love with fake people, I get dutiful nods followed by shrugs. Someone might say that by focusing on such minor harm I will distract humanity from the immensely more important threat that A.I. might simply wipe us out very quickly, and very soon.” OK, then! This is not who I want regulating this technology.
- More antique mess: I started reading a biography of the heiress and art collector Peggy Guggenheim, “Mistress of Modernism,” by Mary V. Dearborn, and it is extremely full of scandal and intrigue and boldfaced names of the early 20th century. Dearborn also wrote excellent biographies of the writers Ernest Hemingway and Carson McCullers, which I enjoyed thoroughly.
Feel free to drop me a line about anything here.
Jessica Grose is an Opinion writer for The Times, covering family, religion, education, culture and the way we live now.