People Are Using Sora 2 to Make Disturbing Videos With AI-Generated Kids

December 22, 2025

On October 7, a TikTok account named @fujitiva48 posed a provocative question alongside their latest video. “What are your thoughts on this new toy for little kids?” they asked the more than 2,000 viewers who had stumbled upon what appeared to be a parody TV commercial. The response was clear. “Hey so this isn’t funny,” wrote one person. “Whoever made this should be investigated.”

It’s easy to see why the video elicited such a strong reaction. The fake commercial opens with a photorealistic young girl holding a toy—pink, sparkling, a bumblebee adorning the handle. It’s a pen, we are told, as the girl and two others scribble away on paper while an adult male voiceover narrates. But the object’s floral design, buzzing, and name—the Vibro Rose—make it look and sound very much like a sex toy. An “add yours” button—the TikTok feature encouraging people to share the video on their own feeds—bearing the words “I’m using my rose toy” removes even the smallest sliver of doubt. (WIRED reached out to the @fujitiva48 account for comment but received no response.)

The unsavory clip was created using Sora 2, OpenAI’s latest video generator, initially released by invitation only in the US on September 30. Within just one week, videos like the Vibro Rose clip had migrated from Sora to TikTok’s For You page. Other fake ads were even more explicit: WIRED discovered several accounts posting similar Sora 2-generated videos featuring rose- or mushroom-shaped water toys and cake decorators that squirted “sticky milk,” “white foam,” or “goo” onto lifelike images of children.

In many countries, all of the above would be grounds for investigation if these were real children rather than digital amalgamations. But the laws on AI-generated fetish content involving minors remain murky. New 2025 data from the Internet Watch Foundation (IWF) in the UK shows that reports of AI-generated child sexual abuse material, or CSAM, have more than doubled in a year, from 199 between January and October 2024 to 426 over the same period of 2025. Fifty-six percent of this content falls into Category A—the UK’s most serious classification, covering penetrative sexual activity, sexual activity with an animal, or sadism. Ninety-four percent of the illegal AI images tracked by the IWF were of girls. (Sora does not appear to be generating any Category A content.)

“Often, we see real children’s likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online,” Kerry Smith, chief executive officer of the IWF, tells WIRED.

This influx of harmful AI-generated material has prompted the UK to introduce a new amendment to its Crime and Policing Bill, which will allow “authorized testers” to check that artificial intelligence tools are not capable of generating CSAM. As the BBC has reported, the amendment would ensure models have safeguards around specific kinds of imagery, including extreme pornography and nonconsensual intimate images. In the US, 45 states have implemented laws criminalizing AI-generated CSAM, most within the last two years, as AI generators continue to evolve.

OpenAI, Sora 2’s creator, has implemented measures to prevent young people from having their faces plastered onto pornographic deepfakes. The app’s feature that lets users record their likeness for embedding into generated videos—previously called Cameo, but now temporarily renamed—works on a consent basis, and that consent can be revoked at any time. A separate rule prevents adult profiles from messaging teens. The app outright bans CSAM, and OpenAI’s policies state that its platforms “must never be used to exploit, endanger, or sexualize anyone under 18 years old,” with the company reporting any child sexual abuse material and child endangerment to the National Center for Missing and Exploited Children.

But what about something like the rose toys? These videos signpost their meaning conspicuously enough that TikTok commenters, YouTubers, and content creators have raised serious concerns about their dangers, yet their makers seem able to produce them by circumventing OpenAI’s guardrails. While they are not hardcore pornography or deepfakes targeting real children, when uploaded alongside leading statements they suggest an apparent intent to farm engagement from predators.

Other clips that creators are grouping in the same category hover even more ambiguously between sexualization and commentary. Fake commercials for recalled toys, like “Epstein’s Island Getaway” and “Diddy’s Mansion Party,” where AI-generated children play with figurines of older men, young women, and a baby oil fountain, have also come under scrutiny. In one named “Harv’s couch”—seemingly a reference to convicted sex criminal Harvey Weinstein—which features “a real locking door, soft touch couch, and 3 hopeful actress dolls,” a child’s voice asks, “Is this how you get famous?”

As inappropriate as they are, clips like these and others showing playsets parodying 9/11 and the death of Princess Diana suggest that the videos’ creators may be motivated more by the desire to be edgelords than by anything else. Yet these clips often appear side by side on the same compilation accounts, leading those seeking out dark jokes to accidentally stumble upon more questionable material.

Lewd fake Pixar trailers for concepts like “2 Girls One Cup” also fall into this category, but some venture into murkier territory by featuring animations of young female characters of ambiguous age. Others include a character called Incredible Gassy—a flatulent, obese parody of Mr. Incredible that in one fake toy commercial (in this iteration called Incredible Leaky) “blasts goo from his hero bits.” Originating from an NSFW Patreon commission in 2021, the Incredible Gassy character has become a meme in recent months as a nod toward, and commentary on, the huge wave of inflation, pregnancy, vore, and obesity fetish content that has taken off on Sora 2 and the wider internet since the app’s release. These videos, too, often feature AI-generated minors.

Last month, British YouTuber D0l1face3 drew attention to one clip, this time made with Veo, in which a coach inspects a team of overweight young boys in a locker room, touching their stomachs and praising them for their weight gain. Although the content isn’t explicitly pornographic, in her view its intention becomes clear in the video’s comments section, where several accounts had posted requests for people to add them on Telegram, which law enforcement has claimed is a hub for pedophile networks. WIRED found that the account was still active at the time of reporting, and also encountered similar videos featuring young overweight boys that appeared to be subtly catering to a predatory audience, all without being flagged as violating TikTok’s policies.

A Google spokesperson told WIRED the company has clear policies around the AI generation of minors. Regarding the videos of the young overweight boys, the spokesperson said the comments, and the apparent intent to post the videos on other platforms, were what made them concerning.

It is this contextual nuance—which forms a large part of how CSAM operates—that apps like Sora 2 are ultimately unable to police.

Mike Stabile, public policy director at the nonprofit Free Speech Coalition, has more than 20 years of experience working in the adult industry and understands how adult sites operate and moderate content. He believes that OpenAI and similar companies must bring the same level of nuance to their moderation practices to ensure that CSAM and fetish content featuring children don’t make it onto their platforms. This could involve banning or limiting certain words associated with fetish content, and building moderation teams with more diversity and better training.

“We already see this struggle with platforms like Facebook. How do they differentiate between a parent sharing a picture of their kid playing in a pool or the bath, versus somebody who’s sharing something that’s meant to be child sex abuse material?” Stabile tells WIRED.

“Anytime you’re dealing with kink or fetish, there will be things that people who are not familiar are going to miss,” he says.

Following WIRED’s request for comment, OpenAI said it had banned several accounts that were creating videos like the vibrating rose toys. Niko Felix, a spokesperson for OpenAI, told WIRED: “OpenAI strictly prohibits any use of our models to create or distribute content that exploits or harms children. We design our systems to refuse these requests, look for attempts to get around our policies, and take action when violations occur.”

WIRED also reached out to TikTok about the influx of Sora 2-made fetish material arriving on the platform, flagging more than 30 instances of inappropriate toy ads and inflation and obesity fetish content. Many of these, including the rose toy commercials, have been removed, but others, among them additional fake toy commercials and the Incredible Leaky figurine, remained online at the time of reporting.

A TikTok spokesperson said: “We have removed videos and banned accounts that uploaded content created on other AI platforms which violated TikTok’s strict minor safety policies.”

Likewise, the IWF urges platforms to build these safeguards into their initial design and put them at the forefront of their priorities—preventing harmful content from being made in the first place.

“We want to see products and platforms which are safe by design, and encourage AI companies to do as much as they can to make sure their products cannot be abused to create child sexual abuse imagery,” Smith tells WIRED.

