You are a teenage girl in 2026. You’re going hiking. You’re at the beach. You’re getting glam for a homecoming dance, posing with your friends, enjoying the kinds of moments that high school kids have been memorializing without incident for decades.
These are the kinds of wholesome, keepsake memories that have been forever ruined for the three Jane Does in Tennessee who are part of a class-action lawsuit filed in March against xAI, Elon Musk’s A.I. company.
A person known to at least one of these three girls used xAI’s assistant, Grok, to generate sexually explicit images that appeared to depict them, based on real, clothed photos. Of Plaintiff Jane Doe 1, the lawsuit asserts that doctored images “showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude.” These scenes were created, in part, using the teenager’s face from her yearbook photo.
It gets worse. The perpetrator allegedly circulated altered pictures of at least 18 underage girls on Discord, a popular messaging platform. Their first names and the name of their school appear to be attached to the images, making the girls identifiable.
All three Jane Does have experienced extreme stress because of this victimization. Jane Doe 1, according to the lawsuit, “feels acute anxiety about who has viewed these files online and feels a complete lack of control over the ongoing dissemination of the files.”
The suit describes lives narrowed by this injury. Two of the Jane Does fear engaging in normal activities like going to class as a result of this abuse, and all three say that their reputations are damaged when people believe these images are real. While this suit focuses on female victims, teenage boys have also been harmed by harassment and extortion using A.I.-generated deepfakes.
In January, I noted that public opinion was turning against social media and A.I. companies, in part because of the illegal sexual content that Grok appeared to be generating at scale. According to research from the Center for Countering Digital Hate, an online safety advocacy group, over an 11-day period spanning late December and early January, Grok “is estimated to have generated approximately three million sexualized images, including 23,000 that appear to depict children.”
Grok subsequently limited its image-generation abilities to paying customers and added other guardrails, like preventing the image generator’s X account from putting real people in bikinis. Still, many critics thought this did not solve the sexual abuse problem but merely allowed the company to profit from a P.R. nightmare while slapping a Band-Aid on the underlying situation.
The creation of child sexual abuse material, or CSAM, is already illegal for the individuals who make it. But in March, a jury in New Mexico found Meta liable to the tune of $375 million for misleading users about its safety practices and failing to protect its young users from child predators. Social media companies had previously been able to avoid accountability for their role in enabling the sharing of these images by leaning on Section 230 of the Communications Decency Act of 1996, which, as my newsroom colleague Cecilia Kang has explained, “protects them from liability for what their users post.”
Congress has not gotten it together to reform this law, so lawyers have had to file suits in state courts that test innovative strategies to get justice for children. New Mexico’s attorney general, Raúl Torrez, identified one path around Section 230: the algorithms built by the social media companies themselves, which are separate from what users individually post.
“What is not covered by Section 230 are the design features themselves that are built into the product that make that product inherently dangerous,” Torrez said. He added, “The platforms are really good at connecting people with the things that they are interested in, and if you have an interest in little girls, the platform will be good at connecting you with little girls.”
The class-action lawsuit filed on behalf of the Tennessee Jane Does argues that Grok is a co-creator of the images because they would not exist without the tool, said Vanessa Baehr-Jones, one of the lawyers for the girls. Annika Martin, another of their lawyers, hopes the case will push the industry to root out the problem; individual lawsuits, she said, are not enough.
I have been writing about this issue, the creation of deepfake nudes of minors, for two years. It is arguably much worse now that A.I. image-generation tools are ubiquitous and the images they create are more realistic than ever. While social media companies may not be able to fully stamp out CSAM on their platforms, they could be doing a far better job of prioritizing the problem, said Arturo Béjar, a whistle-blower and former engineering director on Facebook’s Protect and Care team. Béjar, who testified for the state in the New Mexico case against Meta, told me that social media companies can prevent a lot of CSAM from being created and shared.
For starters, Béjar said, “You should not be able to generate erotic content in the likeness of anybody unless they have positive proof that that person has very thoroughly given consent in a million demonstrable ways.” This should apply to users of all ages. Béjar noted that social media companies have incredibly sophisticated tools for labeling images, and he thinks they need to use those tools to properly label sexually explicit images. Doing so would essentially “kill the distribution knob” that incentivizes sharing them, even if it might also cut down on engagement.
If you’re a parent, at minimum, I would have a conversation with your children about the fact that A.I. tools with nudifying capabilities exist. I would stress that the door of communication between parent and child is open and that they do not have to be ashamed if they are victimized. If you suspect that your child or another child is a victim, you can get help from the National Center for Missing and Exploited Children. The organization has a CyberTipline, as well as tools to get images removed from circulation and to make reports to law enforcement.
It really should not be the responsibility of individual parents to patrol the entire internet. As I was listening to Béjar talk about the incredible engineering capabilities of all these social media companies, I kept thinking about how so many A.I. executives boast about the profound, earthshaking power of their technology to change humanity, supercharge work and possibly cure cancer. But somehow, when you ask them to stop the tidal wave of sexual abuse material, their hands appear to be tied.
If Big Tech does not fix itself, too many predators will continue to skate by without consequences, and too many minors will experience permanent damage for merely existing in public. Accessories like A.I. glasses allow anybody to take photographs and videos undetected by their subjects, which may lead to an even greater scale of harassment than we’re already seeing.
You don’t even need to find a yearbook photo to permanently humiliate a teenager who has the audacity to walk down the street, minding her own business. We should all want a better world for children.
End Notes
I need a shower and a palate cleanse after writing this newsletter. I suggest watching the new YouTube special from Beth Stelling, one of my favorite comedians, which is cheekily titled “We’re Looking for People Who Do Huge Numbers on Social Media.”
Feel free to drop me a line about anything here.
If you’re enjoying what you’re reading, please consider recommending it to others. They can sign up here.