Grok, the flagship chatbot created by the Elon Musk-founded AI venture xAI and infused into X-formerly-Twitter — a platform also owned by Elon Musk — continues to be used by trollish misogynists, pedophiles, and other freaks of the digital gutters to nonconsensually undress images of women and, even more horrifyingly, underage girls.
The women and girls targeted in these images range from celebrities and public figures to many non-famous private citizens who are often just average web users. As Futurism reported, some of the AI images generated by Grok and automatically published to X were specifically altered to depict real women in violent scenarios, including scenes of sexual abuse, humiliation, physical injury, kidnapping and insinuated murder.
Because Grok is integrated into X, this growing pile of nonconsensual and seemingly illegal images is automatically published directly to the social media platform — and thus disseminated to the open web, in plain view, visible to pretty much anyone. As it stands, X and xAI have yet to take any meaningful action to stem the tide.
Below is a timeline of how this story has unfolded so far, which we’ll continue to update as we follow whether X and xAI take action against this flood of harmful content.
- January 5, 2026: An online creator who was targeted by nonconsensual sexual deepfakes told The Cut that being targeted by Grok-generated harassment was “scary,” and that “it was uncomfortable to have that power asserted over you.” She added that it felt like a “digital version” of a “sexual assault.”
- January 5, 2026: Conservative social media commentator Ashley St. Clair, a mother of one of Musk’s many children, told outlets including The Guardian and NBC News that she’s been aggressively targeted by nonconsensual sexual deepfakes of her. One photo that was taken when she was 14, she said, was edited to depict her undressed and in a bikini.
- January 5, 2026: A spokesperson for the European Commission, the European Union’s executive body, said during a press conference that the organization is “very seriously looking into this matter,” calling the content “illegal” and “disgusting,” CNBC reported.
- January 5, 2026: The independent British media regulator Ofcom said that it was “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualized images of children” and had “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”
- January 3, 2026: Musk changed his tune, saying that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” He didn’t elaborate on whether X or xAI will take action against bad actors, or if it’s instead up to victims to figure out whether an often-anonymous creep somewhere on the web used Grok to make deepfakes of them. (Deepfakes have historically been difficult for individual victims to counter legally.)
- January 3, 2026: The Malaysian Communications and Multimedia Commission declared that it would investigate X over the content, Rest of World reported.
- January 2, 2026: French prosecutors vowed to investigate the flood of Grok-generated explicit deepfakes on X, Politico reported.
- January 2, 2026: India’s IT ministry demanded that X take action against the proliferation of “obscene” content on the platform, TechCrunch reported. The country’s order reportedly gave X 72 hours to provide a report describing how it had countered the generation of content that is “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law.”
- January 2, 2026: Musk weighed in on the issue for the first time — with a laughing emoji. X users, meanwhile, continued to use Grok to generate child sexual abuse material (CSAM), unwanted nude images, and images depicting real women being sexually abused, humiliated, and killed.
- December 28 – 31, 2025: This is when the trend of X users asking Grok to undress women and girls, often by first asking the AI to put a woman or girl in “a tiny bikini,” really started to take off. Many of the incidents occurred in long threads that got lewder and more explicitly pornographic as they went on.
- December 24, 2025: Musk announced that X had rolled out a new feature allowing users to use Grok to edit images and videos. The update allowed for X users to alter images and videos without the permission or knowledge of the original poster.
- December 20 – 22, 2025: According to a Garbage Day analysis, this is when a growing number of users started finding success “generating scantily clad images using Grok, then immediately demanding it make the clothes transparent.”
A normal company, upon realizing that its platform-embedded AI chatbot was being used at scale to generate CSAM and unwanted deepfake porn of real people and spew it into the open web, would likely move quickly to disconnect the chatbot from its platform until a problem of such scale and severity could be resolved. But these days, X is not a normal company, and Grok is the same chatbot infamous for scandals including — but not limited to — calling itself “MechaHitler” and spouting antisemitic bile.
The story here isn’t just that Grok was doing this in the first place. It’s also that X, as a platform, appears to be a safe haven for the mass generation of CSAM and nonconsensual sexual imagery of real women — content that the losers creating it have largely treated like one big meme. We’ll continue to follow whether X makes meaningful changes — or if it continues to choose inaction.
More on Musk’s reaction to Grok deepfakes: Elon Musk After His Grok AI Did Disgusting Things to Literal Children: “Way Funnier”
The post Live Coverage: Is Grok Still Being Used to Create Nonconsensual Sexual Images of Women and Girls? appeared first on Futurism.