Four seconds into this version of “Old MacDonald Had a Farm,” an animated horse with two arms and four legs hatches from an egg.
In this video, a pink elephant, an orange flamingo and other animals appear next to letters of the alphabet, performing complicated gymnastic maneuvers on tightropes.
And in this video, animals form from paint being squirted into a glass of water and inexplicably grow mermaid tails.
The New York Times reviewed these clips, along with more than 1,000 other videos recommended to young children on YouTube, and found that the algorithm pushes bizarre, often nonsensical, A.I.-generated videos from channels claiming to teach “toddlers” and “preschoolers” about the alphabet and animals.
In some videos, animals and people have warped faces or extra body parts. Often, the videos contain garbled text. Most clips have incoherent narratives, some riddled with misinformation. And none are longer than about 30 seconds, allowing little time to develop ideas, plots or any sense of repetition that is often necessary for learning.
Now produced with the help of readily available artificial intelligence tools and online tutorials, many of these videos have millions of views and counting, with channels churning them out at a rapid rate, sometimes multiple times a day.
Many of the YouTube accounts producing A.I.-generated videos reviewed by The Times specifically target the youngest of viewers and their parents, marketing their channels as “educational” as opposed to entertainment. Creators are profiting off this content with little oversight from YouTube.
“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” said Dr. Jenny Radesky, a developmental behavioral pediatrician and associate professor of pediatrics at the University of Michigan Medical School. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”
Dr. Radesky and others raised concerns about hyper-realistic A.I. content, especially for children who are too young to distinguish fantasy from reality.
McCall Booth, a developmental psychologist and researcher at Georgetown University, said children “may have a harder time in the future identifying fake content because their mental schema had already adapted to include improbable, but aesthetically realistic character actions.”
Even on YouTube Kids, which is intended to provide a more controlled digital environment for children, these kinds of A.I. videos are easy to find. Last summer, videos of A.I.-generated animals diving into pools were even a TikTok trend.
Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, pointed out that this pool diving video in particular contains a lot of conflicting information for young children who may have a hard time deciphering what is real.
“The animal could be real. The pool could be real, but again, it’s a mismatch between what should happen in the real world between those two things. So that is going to place a lot of this cognitive load on the child to try and map those things together,” Dr. Barr said.
“It may seem like it’s innocuous,” she added. “But that is not going to help them learn either about swimming or giraffes or ‘G.’”
Dr. Radesky explained that well-crafted media serves as a mirror, reflecting the world children already know back to them. Shows like “Mister Rogers’ Neighborhood” or “Sesame Street,” for example, intentionally try to help make sense of the world — not only through letters and numbers, but also through emotions and interpersonal relationships.
The American Academy of Pediatrics issued a guide for parents on how to select media content for their young children, telling parents to avoid content that is either A.I.-generated or highly sensationalized. The guidance also cautioned against consuming short-form videos.
While there aren’t many studies yet on how short-form media affects young children, Dr. Barr said that for children under the age of 5 whose attention systems are still developing, the videos move too rapidly, and usually aren’t long enough to include any meaningful context or story plot.
The Times focused primarily on YouTube Shorts when conducting its analysis of these A.I. videos, as most A.I. tools default to short-form video and offer vertical formatting options.
Over the course of several weeks, The Times watched videos from popular children’s channels on YouTube like CoComelon, “Bluey” or Ms. Rachel from a private browser at different times throughout the day. Then we scrolled through the platform’s recommended YouTube Short videos in 15-minute intervals in order to better understand how the algorithm floods the feed with this content.
In one 15-minute session, after watching CoComelon’s “Wheels on the Bus” video, more than 40 percent of the videos watched appeared to contain A.I.-generated visuals. The Times manually reviewed each of the videos, some of which clearly featured YouTube’s label for “altered or synthetic content,” while others displayed visual errors or other distortions in the background.
The A.I.-generated content wasn’t always obviously flawed, and some videos were sufficiently seamless to evade casual detection by the human eye. To further vet the videos, The Times used an A.I. detector to determine with high probability that the videos, and in some cases the music and voices, were A.I.-generated.
The Times also found that the same A.I. videos or channels, like this channel that uploaded a video of an animated character driving a fiery bus down a park slide, tended to pop up repeatedly in multiple sessions.
Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, further questioned the addictive nature of these videos.
“These do strike me as something that are made to really get in your head,” Dr. Prinstein said. “It may even be harmful, but we need more data.”
Dr. Prinstein explained that due to the dramatic proliferation of A.I. content in just the last year alone, it’s hard to keep up with the research findings.
While the jury is still out when it comes to definitive long-term health impacts, and low-quality videos aimed at children existed on the platform long before the rise of A.I., experts fear that the sheer volume of these videos now may cause displacement, in which children lose out on opportunities to engage with media content or other activities like reading and interacting with others that could bring them more benefits.
The vast quantity of A.I. content is already upending the feeds of all kinds of social media users. Elsewhere on YouTube, older children can easily find disturbing videos depicting abusive and violent scenes featuring popular children’s characters. Facebook pages are uploading altered images that misrepresent historical events. A.I. avatars in the form of “doctors” on Instagram are pushing bogus wellness advice and products. Last November, TikTok said it had labeled over 1.3 billion videos as A.I.-generated.
Some platforms have begun to tighten their rules around the use of these tools. Pinterest has features where users can select how much of this kind of content they want to see. TikTok also said it was testing different ways that would enable people to reduce the amount of A.I. content in their feeds. Last month, YouTube announced new controls that allow parents to set time limits on YouTube Shorts.
The Times requested comment from YouTube on its policy around A.I. videos for children, and shared five different channels as examples. In response, YouTube suspended all five accounts from the YouTube Partner Program, meaning they are ineligible to earn ad revenue on YouTube and are blocked from appearing on YouTube Kids. The Times also sent three examples of hyper-realistic A.I. videos on YouTube Kids, which YouTube then removed from the app.
YouTube also stated it removed one video The Times shared for violating child safety policies. The A.I. video showed animals being chased and turning different colors once inoculated with a syringe. However, similar videos can still be found on the channel.
“We require creators to disclose when they’ve used A.I. to create realistic content, meaning things a viewer could easily mistake for a real person, place, or event,” said Boot Bullwinkle, a YouTube spokesperson, in an email to The Times.
But our review found that creators are not consistently disclosing when their videos contain synthetic visuals made to look realistic. And when it comes to animated A.I. videos for children, YouTube does not require labels at all.
This means that much of the burden of identifying A.I. content is falling to parents — a task that is daunting even for experts as the tools that make this content are rapidly improving.
Some parents have turned to Reddit looking for tips to filter out A.I. videos on YouTube. Other commenters on the platform advise fellow parents to create their own playlists of vetted content, while some argue for boycotting the platform altogether.
Allison Sims, 34, has two children and lives in Texas. She often turns on her own YouTube account to keep her 2-year-old occupied while she’s making dinner. Her daughter watches Ms. Rachel, The Wiggles and other channels that play nursery rhymes. But it wasn’t long before she figured out how to scroll through YouTube Shorts.
After coming across several shorts that she found disturbing in her daughter’s watch history, Ms. Sims said she removed the app from the iPad. She shared some of the videos her daughter watched with The Times, some of which were A.I.-generated.
“Because A.I. is so new and as a parent, I wouldn’t know what to look out for except for when they’re very obvious that I stop and look at it,” said Ms. Sims. “But I feel like it’s something that as parents we should kind of know and be aware of.”
Ms. Sims also questioned the motive of the creators behind the videos. “Is it that they’re actually wanting to help or is it they’re trying to grab your kids’ attention?”
Many of the YouTube accounts uploading A.I. content for children are largely anonymous with no contact information or identifiable details as to who is behind the account.
But one creator, Syeda Jaria Hassan, spoke to The Times and explained how she taught herself how to make A.I. videos using tools like Google’s Whisk and Runway. She said that creating A.I. content for children has become her full-time job.
Ms. Hassan, who lives in the city of Sargodha in Punjab, Pakistan, said she decided to focus on making content for children after teaching at a Montessori school for children between 4 and 8. Her account, Suno Kids TV, which is described as a channel to educate and entertain children, features animated A.I. videos of animals and sing-along songs.
The videos with the most views on her channel are specifically about Halloween. With more than 370 million views, one of her YouTube Shorts features spooky animals covered in bloody wounds with haunting green eyes.
Ms. Hassan, 29, declined to say how much revenue this particular video generated or how much the channel makes overall, but noted that if videos “get nice views, it will give you a nice living.”
She even showed some of the videos she created to her former students.
“They loved it,” she said. “They picked up very fast from the videos. They learned the sounds. They learned the spellings. They learned the letters.”
When asked about how children can be distracted by these kinds of effects, Ms. Hassan responded that TV channels and other YouTube channels for children also rely heavily on visual effects, and that she’s just following a model of children’s programming that has been around for years.
However, when it comes to learning, experts say children benefit most from watching media that has a clear narrative with a beginning, middle and end, along with characters children can attach to and scenes that relate to their real lives. Dr. Barr noted that storybooks and other well-structured content align with a familiar format: following a character through a journey. Media that illustrates relatable scenes, like going to the park, ultimately helps children understand and connect back to their own world.
Simple language and short phrases also aid cognitive development. Programming that teaches children concepts like problem-solving, or that features intentional repetition, can help with memory recall.
One example is PBS Kids’ “Daniel Tiger’s Neighborhood,” a modern spinoff of “Mister Rogers’ Neighborhood,” which follows a young animated tiger who teaches life skills and social strategies. The show’s creators said they work with child development experts when crafting their stories.
Ellen Doherty, chief creative officer at Fred Rogers Productions, explained that the team developed a structural pattern for the show: two separate short stories in every episode, with songs that strategically reinforce each episode’s themes and that parents and children can sing and remember together. The music also helps move the story along, but at a controlled speed.
“Everything happens in a pace that a young child who does not have cinematic language yet can follow and can actually literally process what’s happening,” said Ms. Doherty.
In one story, Daniel Tiger teaches children about brushing their teeth through song, making sure to interact with young viewers and taking long pauses.
“That spark of human connection is everything,” said Ms. Doherty.
But does the mere presence of A.I. elements in a video mean it can’t foster human connection?
Some researchers, like Ying Xu, an assistant professor of education at the Harvard Graduate School of Education, say that well-designed A.I. can actually support children’s learning by satisfying their curiosity and helping answer their questions.
Ms. Xu focuses her research on designing A.I. that supports language and literacy development, and collaborates closely with producers of the animated PBS Kids’ shows “Elinor Wonders Why” and “Lyla in the Loop.”
For Ms. Xu’s research, an interactive Elinor was developed that allowed children to respond directly to the character’s questions and receive feedback based on their answers. Ms. Xu found that the conversational videos helped children better understand STEM concepts.
“I don’t agree that adults should actually use A.I. to monetize, to mass produce low-quality videos, but I do think that it actually offers a tool for children to express themselves,” Ms. Xu said, adding the caveat that navigating certain A.I. tools to help children engage in storytelling by creating their own multimedia content should always be guided by teachers and parents.
Juliana Castro Varón contributed reporting. Amogh Vaz contributed video production.
The post How A.I.-Generated Videos Are Distorting Your Child’s YouTube Feed appeared first on New York Times.