Last week, a young woman posted a photo of herself on X. She was standing outside, wearing a blue tank top.
Over the next several days, dozens of people replied to her post to ask Grok, the artificial intelligence chatbot created by Elon Musk, to generate new images of the woman in lingerie or bikinis. The A.I. images — which Grok added as replies to her post — racked up thousands of views.
“Why is this allowed?” the woman, who plays video games on livestreams and has more than 6,000 followers on X, asked in a post.
The altered pictures were part of a recent flood of Grok-generated images that sexualize women and children on X, Mr. Musk’s social media platform. In response to user requests, the chatbot has manipulated photos of people to dress them in skimpy garments, remove their clothes altogether or pose their bodies in suggestive ways.
The subjects of these images — including the mother of one of Mr. Musk’s children — have decried the pictures, and some have appealed to Mr. Musk to ban the technology or remove the photos. Others have threatened legal action.
Late Thursday, Grok started limiting requests for A.I. images to X subscribers who pay for certain premium features, according to posts from the chatbot.
The change followed a “backlash over its ability to create sexualized deepfakes without consent,” the chatbot said, adding that users could “subscribe or switch to the standalone Grok site for similar functions.”
Mr. Musk has participated in creating the images. On New Year’s Eve, he instructed Grok to make an image of himself in a bikini.
“Perfect,” he said, sharing the image with his 231 million X followers.
Mr. Musk is known for pushing the envelope on A.I. to increase his chatbot’s popularity. A.I. companies like Google and Anthropic have strict guardrails to prevent their chatbots from producing content that veers into eroticism, racism or other themes that could offend users, but Mr. Musk has taken a different approach.
Last year, he added sexually explicit chatbot companions to Grok, making xAI, his artificial intelligence company, the first major one to do so. Mr. Musk has also vowed to make Grok less politically correct than rivals, leading to incidents in which the chatbot has praised Hitler or parroted Mr. Musk’s own political views.
By allowing Grok to create nearly nude images of real people, Mr. Musk is entering legally risky territory. Sexual images of children are illegal to possess or share in many countries. Some countries also ban A.I.-generated sexual images of children.
Several countries, including the United States and Britain, have also enacted laws against sharing nonconsensual nude imagery, often referred to as revenge porn. X’s own policy bars users from posting “intimate photos or videos of someone that were produced or distributed without their consent.”
Regulators have taken notice. A Brazilian official on Monday called for a ban of X until the nation can investigate the surge in sexualized images, while Indian regulators demanded on Saturday that X and xAI take steps to prevent the misuse. Two French lawmakers announced last Friday that they had reported X to the Paris public prosecutor, and the European Commission said this week that it was looking into the activity.
Prime Minister Keir Starmer of Britain said during an interview on Thursday that the images were “disgusting” and would “not be tolerated,” adding that he had asked the country’s online regulator “for all options to be on the table” in response.
On Friday, Mr. Starmer’s spokesman said limiting image creation to X subscribers was “not a solution” and was “insulting” to victims of misogyny and sexual violence. The change, he said, “simply turns an A.I. feature that allows the creation of unlawful images into a premium service.”
Mr. Musk said in a post on Saturday that accounts trying to use Grok to create images of undressed children would suffer “consequences.” In a statement posted on X, the social media company said it would remove illegal content depicting children and permanently suspend accounts that asked Grok to create such images.
The Grok chatbot is accessible through an app, a website and an account on X. Users can send Grok requests, and the chatbot publicly posts its answers. The system has safeguards to prevent it from generating fully nude images, but X users have circumvented those with a series of prompts involving, for example, clear plastic.
Other A.I. tools can generate images, known as deepfakes, that superimpose real people into artificial environments. But Gemini, Google’s chatbot, bars users from making images of real people, while OpenAI’s ChatGPT allows people to opt in to having their likeness used in A.I. images.
Because Grok posts images publicly on X, they can spread quickly and fuel harassment.
That’s what happened when sexualized images flooded X in late December and early January, many showing women and children in lingerie or skimpy swimsuits. Some showed women with white liquid smeared across their faces, which appeared to mimic semen.
Nana Mgbechikwere Nwachukwu, an A.I. governance expert and Ph.D. researcher at Trinity College Dublin, documented nearly 500 requests for Grok to create nonconsensual intimate imagery on X during the first three days of January. She said requests for such images appeared to surge after Mr. Musk generated the image of himself in a bikini.
Copyleaks, an A.I. content detection service, estimated that during the height of the requests, Grok produced at least one such image per minute. Many subjects of the Grok-generated sexualized images said on X that they were outraged, while others deleted their accounts. Some tagged Mr. Musk, asking him to intervene.
Ashley St. Clair, an influencer who had a child with Mr. Musk in 2024, said that one of her childhood photos had been manipulated.
“Grok is now undressing photos of me as a child,” Ms. St. Clair posted on X on Sunday, adding that she would take legal action.
After an incident last year when Grok praised Hitler, xAI temporarily disabled the chatbot, she noted.
“This issue could be solved very quickly,” she wrote. “It is not, and the burden is being placed on victims.”
While X has said it will punish the use of Grok to create sexualized images of children, some users have found workarounds.
The Internet Watch Foundation, a British nonprofit that monitors online child sex abuse, said in a statement on Wednesday that it had found “criminal imagery of children aged between 11 and 13” created using Grok on dark web forums, which are encrypted discussion groups on the internet that require special tools for access.
Those images were then being used to generate “much more extreme” videos using a different A.I. tool, said Ngaire Alexander, the head of the group’s reporting hotline.
Sex abusers and pedophiles have long been early adopters of new technology, and in some cases key developers, said Clare McGlynn of Durham Law School in England, an expert on the legal regulation of online abuse.
For X, she said, the criminal threshold “shouldn’t really matter — Grok should not have been designed to be able to produce these sorts of images.”
Kate Conger is a technology reporter based in San Francisco. She can be reached at [email protected].