On June 12, the toymaker Mattel announced a “strategic collaboration” with OpenAI, the developer of the A.I. chatbot ChatGPT, “to support A.I.-powered products and experiences based on Mattel’s brands.” Though visions of chatbot therapist Barbie and Thomas the Tank Engine with a souped-up surveillance caboose may dance in my head, the details are still vague. Mattel affirms that ChatGPT is not intended for users under 13, and says it will comply with all safety and privacy regulations.
But who will hold either company to its public assurances? Our federal government appears allergic to any common-sense regulation of artificial intelligence. In fact, there is a provision in the version of the enormous domestic policy bill passed by the House that would bar states from “limiting, restricting or otherwise regulating artificial intelligence models, A.I. systems or automated decision systems entered into interstate commerce for 10 years.”
Senator Ted Cruz, Republican of Texas, who proposed that states that attempted to regulate A.I. in any way be barred from receiving federal money for broadband services, wants a “light touch” on regulation because he believes — like some A.I. leaders — that it’s better for innovation.
The Republican-led Congress doesn’t appear to have much appetite for regulation right now because President Trump doesn’t seem to care about it. (Though there are a few vocal G.O.P. opponents of the moratorium, they’re in the minority.) Trump rolled back Biden-era protections with an executive order, and his focus with A.I. is on competing with China, not on making sure A.I. tools are safe or fair.
The Silicon Valley ethos of “move fast and break things” is not a good motto when those things are children’s developing minds. And though, again, ChatGPT’s terms of use technically bar children under 13, an 11-year-old would need only a fake birth date and a working email address to use the service. Google has already allowed children under 13 with parent-managed accounts to use its Gemini chatbot.
To be clear, Mattel isn’t the first toymaker to experiment with artificial intelligence, and it certainly won’t be the last. This isn’t even the first time Mattel has used a kind of rudimentary chatbot; in 2015 there was Hello Barbie, which could talk to its users. “Naturally, security researchers took notice and hacked Hello Barbie, revealing that bad actors could steal personal information or eavesdrop on conversations children were having with the doll,” Adam Clark Estes explains in Vox. Mattel stopped making the toy a few years later.
But large-language-model-based technology has vastly improved since the days of Hello Barbie. We have known for some time that there are several ways that A.I. in toys may interfere with the social, emotional and cognitive development of children.
In a paper published in the academic journal “AI & Society” in 2021, digital technology researchers laid out a thorough accounting of the potential risks of what they call “smart connected toys.” Those are toys that “are connected to the internet, equipped with machine learning and an ever-increasing capability to listen, observe, talk and interact with them without appropriate guidance.”
These researchers are not just worried about the potential for surveillance and hacking, as was the case with Hello Barbie (though they are worried about that). In an echo of the now-deafening chorus of concern over children’s smartphone and social media use, the paper calls out the potential for “obsessive” use of smart connected toys, and notes that a dependency on them could lead to “social isolation” and alienation from the real world.
Speaking of social isolation, as a parent looking at how various smart devices for children are marketed, I was particularly disturbed by this video for Snorble, a “play-based learning platform,” according to its website. The device is supposed to help children develop good habits around sleep and hygiene. In the promotional video, a young girl wakes up in the middle of the night and tells her Snorble, “I had a bad dream,” to which Snorble says, “It’s OK, I’m here with you.”
Physically comforting a child after a bad dream is a fundamentally human aspect of parenting, and I truly cannot imagine the long-term social repercussions of regularly outsourcing that kind of emotional connection to a bot. (Though to be honest, most children I know would completely ignore any chatbot and run to their parents anyway.)
Still, I don’t think A.I. should be banned, even in toys. We should aggressively discourage parents and children from using smart toys as replacements for human interaction, and I think regulations can help to create more social cohesion by introducing guardrails.
The European Union directs digital toy manufacturers to consider, “where appropriate, the risks to mental health and the cognitive development of children using such toys.” While I’m sure it won’t be perfect, it’s at least an attempt to factor children’s safety into the equation, rather than what the United States appears to be doing, which is to ignore the issue until it may be too late to turn back.
Looking down the barrel of a potentially dystopian future, two things give me a bit of solace right now. One, there is bipartisan opposition to the decade-long moratorium on state regulation of A.I., and many states are already doing good legislative work around A.I. For example, Attorney General Alan Wilson, a South Carolina Republican, gave this statement to The Associated Press in May: “Instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That’s not leadership, that’s federal overreach.”
Two, young people themselves are aware of the limits, if not also the dangers, of A.I. After I wrote about what will happen to critical thinking if artificial intelligence is carelessly incorporated in K-12 schools, I heard from a rising high school senior in San Francisco named Fira Hovaghimian. Hovaghimian, who is 17, felt as though a lot of adults were weighing in on A.I., but she wanted to know what her fellow teens thought about the technology. “We’re the first generation that’s going to have to actually live with it,” she told me.
Earlier this year, Hovaghimian connected with teens in Uruguay, the United Kingdom and Armenia, and together they surveyed around 430 high schoolers from a mix of public and private schools in 14 countries about their feelings toward A.I. While they don’t want to stop A.I. development altogether, the majority of the teens they polled do not trust A.I. as an information source (though they still use it to search the internet). Only 19 percent of them are open to having A.I. as a teacher, and only 11 percent are comfortable using A.I. to handle important decisions.
The teenagers they polled feel as though they need real human connections, with tangible empathy. Hovaghimian described interactions with chatbots as seeming empty, or like “talking to a wall.” She said that, above all, the teenagers she talked to hated the idea of A.I. having authority over humans.
According to a survey released in April by the Pew Research Center, adults see things similarly: A.I. experts and average Americans agree that they “want more personal control of A.I. and worry about lax government oversight.” I hope that Congress wakes up to what the people really want before this “big, beautiful bill” makes a big, horrible mess of all of our futures.
End Notes
-
Thank you: To reader Caitlin Trasande, who wrote in with her concern about Mattel’s partnership with OpenAI last week. Trasande is a neuroscientist, and she said what worries her most about A.I. in toys is that they might interrupt the normal development of attachment to caregivers and that they might “pull attention too strongly into 1:1 engagement when the work of early childhood is about learning to regulate attention in a group.”
-
For more good tips on navigating A.I. with teens, check out this newsletter from the psychologist Jacqueline Nesi, who breaks down a recent American Psychological Association health advisory on artificial intelligence and adolescent well-being.
-
My sick day binge-watch: I was nursing norovirus over the weekend (thanks, kids) and I watched much of Season 2 of “America’s Sweethearts: Dallas Cowboys Cheerleaders” on Netflix. Read my take on Season 1 here. I have so many thoughts, mostly about their contract negotiations. (Go, Jada!!)
Feel free to drop me a line about the show or anything else here.
Thank you for being a subscriber
Read past editions of the newsletter here.
If you’re enjoying what you’re reading, please consider recommending it to others. They can sign up here. Browse all of our subscriber-only newsletters here.
Jessica Grose is an Opinion writer for The Times, covering family, religion, education, culture and the way we live now.