DNYUZ
The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought

April 15, 2026

It usually starts with a photo downloaded from social media.

Around the world, teenage boys are saving Instagram and Snapchat images of girls they know from school and using harmful “nudify” apps to create fake nude photos or videos of them. These deepfakes can quickly be shared across whole schools, leaving victims feeling humiliated, violated, hopeless, and scared the images will haunt them forever.

The deepfake crisis hitting schools started slowly a couple of years ago, but it has since grown considerably as the technology used to create the explicit imagery has become more accessible. Deepfake sexual abuse incidents have hit around 90 schools globally and have impacted more than 600 pupils, according to a review of publicly reported incidents by WIRED and Indicator, a publication focusing on digital deception and misinformation.

The findings show that since 2023, schoolchildren—most often boys in high schools—in at least 28 countries have been accused of using generative AI to target their classmates with sexualized deepfakes. The explicit imagery, containing minors, is considered to be child sexual abuse material (CSAM). This analysis is believed to be the first to review real-world cases of AI deepfake abuse taking place at schools globally.

Taken as a whole, the analysis demonstrates the worldwide reach of harmful AI nudification technology, which can earn its creators millions of dollars per year, and reveals that schools and law enforcement officials are often unprepared to respond to these serious sexual abuse incidents.

Across North America, there have been nearly 30 reported deepfake sexual abuse cases since 2023—including one with more than 60 alleged victims, one where the victim was temporarily expelled from school, and others where pupils at multiple schools have allegedly been targeted simultaneously. More than 10 cases have been publicly reported in South America, more than 20 across Europe, and another dozen in Australia and East Asia combined.

The true scale of deepfake sexual abuse taking place in schools is likely much higher. One survey by United Nations children’s agency Unicef estimates that 1.2 million children had sexual deepfakes created of them last year. One in five young people in Spain told Save the Children researchers that deepfake nudes had been created of them. Child protection group Thorn found one in eight teens know someone targeted, and in 2024, 15 percent of students surveyed by the Center for Democracy and Technology said they knew about AI-generated deepfakes linked to their school.

“I think you’d be hard-pressed to find a school that has not been affected by this,” says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. “The most important thing is how we’re able to help the victims when this happens, because the effects of this can be massive.”

WIRED and Indicator’s analysis looked at incidents that have been publicly reported with specific details, such as school locations and potential victim counts. Most of that coverage is in English, and data is lacking for many countries. Many incidents are never reported in the press, or are reported without specific details, and are instead handled privately by schools and law enforcement officials.

Nevertheless, clear patterns emerge. In nearly all cases, teenage boys are allegedly responsible for creating the images or videos. The material is often shared with classmates on social media apps or via instant messaging. And it is hugely harmful to the victims. “I’m worried that every time they see me, they see those photos,” one victim in Iowa said earlier this year. “She’s been crying. She hasn’t been eating,” another’s family said.

In multiple instances, victims have not wanted to attend school or face those who created explicit images or videos of them. “She feels hopeless because she knows that these images will likely make it onto the internet and reach pedophiles,” say lawyer Shane Vogt and Yale Law School students Catharine Strong, Tony Sjodin, and Suzanne Castillo, who are representing one unnamed New Jersey teenager in legal action against a nudifying service. “She is severely distressed by the knowledge that these images are out there, and she will have to monitor the internet for the rest of her life to keep them from spreading.”

In South Korea and Australia, schools have given pupils the option not to have their photos in yearbooks or stopped posting images of students on their official social media accounts, citing their use for potential deepfake abuse. “Around the world, there have been cases where school images were taken from public social media pages, altered using AI, and turned into harmful deepfakes,” one school in Australia said. “Imagery will instead feature side profiles, silhouettes, backs of heads, distant group shots, creative filters, or approved stock photography.”

Sexual deepfakes created using AI have existed since around the end of 2017; as generative AI systems have emerged and become more powerful, however, a shadowy ecosystem of “nudification” or “undress” technologies has grown up around them. Dozens of apps, bots, and websites allow anyone to create sexualized images and videos of others with just a couple of clicks, often with no technical knowledge.

“What AI changes is scale, speed, and accessibility,” says Siddharth Pillai, cofounder and director of the RATI Foundation, a Mumbai-based organization working to prevent violence against women and children. “The technical barrier has dropped significantly, which means more people, including adolescents, can produce more convincing outputs with minimal effort. As with many AI-enabled harms, this results in a glut of content.”

Amanda Goharian, the director of research and insights at child safety group Thorn, says its research indicates that teenagers create deepfake abuse for a range of reasons, from sexual motivations and curiosity to revenge or even teens daring each other to create the imagery. Studies involving adults who have created deepfake sexual abuse similarly show a host of different reasons why the images may be created. “The goal is not always sexual gratification,” Pillai says. “Increasingly, the intent is humiliation, denigration, and social control.”

“It’s not just about the tech,” says Tanya Horeck, a feminist media studies professor at Anglia Ruskin University and a researcher focusing on gender-based violence who has studied sexualized deepfakes in UK schools. “It’s about the long-standing gender dynamics that facilitate these crimes.”

As the number of deepfake incidents at schools has increased in recent years, the ways they are handled by schools and law enforcement agencies have varied wildly. Parents have complained that not enough action has been taken by officials. In one case, it reportedly took three days for a school to report an incident to police; in another, a victim claimed there were no immediate consequences for the individuals allegedly responsible. Some students face charges for creating and possessing CSAM, while others face different criminal charges or suspensions from school. In March, two students in Pennsylvania admitted guilt in juvenile court on CSAM-related felony charges for creating images and videos of 60 girls, and were later sentenced to 60 hours of community service.

In multiple cases, teenage girls and their families are the ones who have ended up fighting back against the creation of deepfake sexual abuse, often moving faster than politicians, who generally have been slow to act. Teenagers have walked out of class to show support for victims, protested against alleged perpetrators, been involved in the creation of online training courses, and changed laws, including contributing to the creation of the Take It Down Act, which requires tech platforms to remove nonconsensual intimate images within 48 hours. (Separately, the UK and the EU are in the process of banning nudification apps, while Australia’s eSafety regulator has taken action against some services.)

“Often when children do speak up about things that happen, the response is completely inadequate,” says Afrooz Kaviani Johnson, a child protection specialist at Unicef. “The way in which these adults respond to a disclosure can significantly impact both their recovery and the likelihood that they would speak up if something else happened.”

“There’s so much work to do to actually get schools caught up about the threat landscape, their rights, deterrence, policy, crisis readiness,” says Evan Harris, a former teacher and the founder of Pathos Consulting Group who has been running training for schools across the US. Harris says that preparations can include everything from educating students about the harms and illegality of creating explicit deepfakes, to helping school administrators think about digital forensics and evidence gathering. “It’s essential that we give students the tools and language and support should they experience deepfakes or become aware of them,” says Robyn Little, senior director of educational digital strategy at McDonogh School in Maryland, which has worked with Harris on training.

Schools’ problems with deepfakes go beyond pupils creating dangerous sexual imagery of each other. Children have also created sexually explicit deepfakes of their teachers on multiple occasions, placing them into humiliating situations or making them say things they haven’t actually said. One school in Oregon was forced to hire substitute teachers after its regular teachers called in sick, protesting against a social media account sharing manipulated images of staff. Teachers have also been depicted getting on their hands and knees and eating dog food or holding a gun, according to another report.

“While this [deepfake sexual abuse] is maybe the most urgent and serious and damaging of the deepfake risks,” Harris says, “it is one of a half dozen or more that are bubbling up at the same time and that schools are particularly vulnerable to and really unprepared for it because of resources and because so much is on their plate.”

The post The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought appeared first on Wired.

DNYUZ © 2026
