Open the website of one explicit deepfake generator and you’ll be presented with a menu of horrors. With just a couple of clicks, it offers you the ability to convert a single photo into an eight-second explicit video clip, inserting women into realistic-looking graphic sexual situations. “Transform any photo into a nude version with our advanced AI technology,” text on the website says.
The options for potential abuse are extensive. Among the 65 video “templates” on the website are a range of “undressing” videos where the women being depicted will remove clothing—but there are also explicit video scenes named “fuck machine deepthroat” and various “semen” videos. Each video costs a small fee to be generated; adding AI-generated audio costs more.
The website, which WIRED is not naming to limit further exposure, includes warnings saying people should only upload photos they have consent to transform with AI. It’s unclear if there are any checks to enforce this.
Grok, the chatbot created by Elon Musk’s companies, has been used to create thousands of nonconsensual “undressing” or “nudify” bikini images—further industrializing and normalizing the process of digital sexual harassment. But it’s only the most visible—and far from the most explicit. For years, a deepfake ecosystem, comprising dozens of websites, bots, and apps, has been growing, making it easier than ever before to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This “nudify” ecosystem, and the harm it causes to women and girls, is likely more sophisticated than many people understand.
“It’s no longer a very crude synthetic strip,” says Henry Ajder, a deepfake expert who has tracked the technology for more than half a decade. “We’re talking about a much higher degree of realism of what’s actually generated, but also a much broader range of functionality.” Combined, the services are likely making millions of dollars per year. “It’s a societal scourge, and it’s one of the worst, darkest parts of this AI revolution and synthetic media revolution that we’re seeing,” he says.
Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and rapidly expanded to offer harmful video creation. Image-to-video models now typically need only a single photo to generate a short clip. A WIRED review of more than 50 “deepfake” websites, which likely receive millions of views per month, shows that nearly all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios in which women can be depicted.
Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions. For instance, in June last year, one deepfake service promoted a “sex-mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age, and other settings.” Another posted that “more styles” of images and videos would be coming soon and users could “create exactly what you envision with your own descriptions” using custom prompts to AI systems.
“It’s not just, ‘You want to undress someone.’ It’s like, ‘Here are all these different fantasy versions of it.’ It’s the different poses. It’s the different sexual positions,” says independent analyst Santiago Lakatos, who, along with the media outlet Indicator, has researched how “nudify” services often use big technology companies’ infrastructure and have likely made big money in the process. “There’s versions where you can make someone [appear] pregnant,” Lakatos says.
A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” a Telegram spokesperson says, adding that it removes content when it is detected and has removed 44 million pieces of content that violated its policies last year.
Lakatos says that, in recent years, multiple larger “deepfake” websites have solidified their market position and now offer APIs to others creating nonconsensual image and video generators, allowing more services to spring up. “They’re consolidating by buying up other different websites or nudify apps. They’re adding features that allow them to become infrastructure providers.”
So-called sexual deepfakes first emerged toward the end of 2017 and, at the time, required a user to have technical knowledge to create sexual imagery or videos. The widespread advances in generative AI systems of the past three years, including the availability of sophisticated open source photo and video generators, have allowed the technology to become more accessible, more realistic, and easier to use.
General deepfake videos of politicians and of conflicts around the world have been created to spread misinformation and disinformation. Sexual deepfakes, meanwhile, have continually caused widespread harm to women and girls. At the same time, laws to protect people have been slow to be implemented or have not been introduced at all.
“This ecosystem is built on the back of open-source models,” says Stephen Casper, a researcher working on AI safeguards and governance at the Massachusetts Institute of Technology, who has documented the rise in deepfake video abuse and its role in nonconsensual intimate imagery generation. “Oftentimes it’s just an open-source model that has been used to develop an app that then a user uses,” Casper says.
The victims and survivors of nonconsensual intimate imagery (NCII), including deepfakes and other nonconsensually shared media, are nearly always women. False images and nonconsensual videos cause huge harm, including harassment, humiliation, and feeling “dehumanized.” Explicit deepfakes have been used to abuse politicians, celebrities, and social media influencers in recent years. But they have also been used by men to harass colleagues and friends, and by boys in schools to create nonconsensual intimate imagery of their classmates.
“Typically, the victims or the people who are affected by this are women and children or other types of gender or sexual minorities,” says Pani Farvid, associate professor of applied psychology and founder of The SexTech Lab at The New School. “We as a society globally do not take violence against women seriously, no matter what form it comes in.”
“There’s a range of these different behaviors where some [perpetrators] are more opportunistic and do not see the harm that they’re creating, and it is based on how an AI tool is also presented,” Farvid says, adding that some AI companion apps can target people with gendered services. “For others, this is because they are in abusive rings or child abuse rings, or they are folks who are already engaging in other forms of violence, gender-based violence, or sexual violence.”
One Australian study, led by the researcher Asher Flynn, interviewed 25 creators and victims of deepfake abuse. The study concluded that a trio of factors—increasingly easy-to-use deepfake tools, the normalization of creating nonconsensual sexual images, and the minimization of harms—could complicate prevention of and responses to the still-growing problem. Unlike the widespread public sharing that happened with nonconsensual sexual images created using Grok on X, explicit deepfakes were more likely to be shared privately with victims or their friends and family, the study found. “I just simply used the personal WhatsApp groups,” one perpetrator told the researchers. “And some of these groups had up to 50 people.”
The research identified four primary motivations for deepfake abuse among the 10 perpetrators interviewed, eight of whom identified as men: sextortion, causing harm to others, seeking reinforcement or bonding from their peers, and curiosity about the tools and what they could do with them.
Multiple experts WIRED spoke to said many of the communities developing deepfake tools have a “cavalier” or casual attitude to the harms they cause. “There’s this tendency of a certain banality of the use of this tool to create NCII or even to have access to NCII that are concerning,” says Bruna Martins dos Santos, a policy and advocacy manager at Witness, a human rights group.
For some abusers creating deepfakes, the technology is about power and control. “You just want to see what’s possible,” one abuser told Flynn and fellow researchers involved in the study. “Then you have a little godlike buzz of seeing that you’re capable of creating something like that.”