Grok Is Pushing AI ‘Undressing’ Mainstream

January 7, 2026

Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual images of women in “undressed” and “bikini” photos.

Every few seconds, Grok continues to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbot’s publicly posted live output. On Tuesday, at least 90 images of women in swimsuits and in various states of undress were published by Grok in under five minutes, an analysis of posts shows.

The images do not contain nudity, but they involve the Musk-owned chatbot “stripping” clothes from photos that other users have posted to X. In an attempt to evade Grok’s safety guardrails, users often request, not always successfully, that photos be edited to show women wearing a “string bikini” or a “transparent bikini.”

While harmful AI image generation technology has been used to digitally harass and abuse women for years—these outputs are often called deepfakes and created by “nudify” software—the ongoing use of Grok to create vast numbers of nonconsensual images appears to be the most mainstream and widespread instance of such abuse to date. Unlike dedicated nudify or “undress” software, Grok doesn’t charge users to generate images, produces results in seconds, and is available to millions of people on X—all of which may help normalize the creation of nonconsensual intimate imagery.

“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”

Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to change an image that has been shared.

Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users requested Grok alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been “stripped” to bikinis, reports say.

Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” a typical message reads. In a different series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”

One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy group [creating images], it’s literally everyone, of all backgrounds. People posting on their mains. Zero concern.”

During a two-hour period on December 31, the analyst gathered more than 15,000 URLs of images created by Grok and screen recorded the chatbot’s “media” tab on X, where generated images—both sexualized and non-sexualized—are posted.

WIRED reviewed over a third of the URLs that the researcher gathered and found that over 2,500 were no longer available, and nearly 500 were marked as “Age-restricted adult content,” requiring a login to view. Many of the remaining posts still featured scantily clad women. The researcher’s screen recordings of Grok’s “media” page on X show an overwhelming number of images of women in bikinis and lingerie.

Musk’s xAI did not immediately respond to a request for comment about the prevalence of sexualized images that Grok has been creating and publishing. X did not immediately respond to a request for comment from WIRED.

X’s Safety account has said it prohibits illegal content. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the account posted. X’s most recent DSA transparency report said that it suspended 89,151 accounts for violating its child sexual exploitation policy between the start of April and the end of June last year, but hasn’t published more recent numbers.

The X Safety account also points to its policies around prohibited content. X’s nonconsensual nudity policy, which is dated December 2021, before Musk purchased what was then Twitter, states that “images or videos that superimpose or otherwise digitally manipulate an individual’s face onto another person’s nude body” are against the policy.

The use of Grok to create sexualized images of real people is also just the tip of the iceberg. Over the last six years, explicit deepfakes have become more advanced and easier for people to create. Dozens of “nudify” and “undress” websites, bots on Telegram, and open source image generation models have made it possible to create images and videos with no technical skills. These services are estimated to have made at least $36 million each year. In December, WIRED reported how Google and OpenAI’s chatbots have also stripped women in photos down to bikinis.

Action against nonconsensual explicit deepfakes from lawmakers and regulators has been slow, but is starting to increase. Last year, Congress passed the TAKE IT DOWN Act, which makes it illegal to publicly post nonconsensual intimate imagery, including deepfakes. By mid-May, online platforms, including X, will have to provide a way for people to flag instances of NCII, which the platforms will be required to respond to within 48 hours.

The National Center for Missing and Exploited Children (NCMEC), a US-based non-profit that works with companies and law enforcement to address instances of CSAM, reported that its online abuse reporting system saw a 1,325 percent increase in reports involving generative AI between 2023 and 2024. (Such large increases don’t necessarily mean a similarly large increase in activity, and can sometimes be attributed to improvements in automated detection or guidelines about what should be reported.) NCMEC did not respond to a request for comment from WIRED about the posts on X.

In recent months, officials in both the UK and Australia have taken the most significant action so far against “nudifying” services. Australia’s online safety regulator, the eSafety Commissioner, has targeted one of the biggest nudifying services with enforcement action, and UK officials plan to ban nudification apps.

However, there are still questions around what, if any, action countries may take against X and Grok for the widespread creation of the nonconsensual imagery. Officials in France, India, and Malaysia are among those who have raised concerns or threatened to investigate X over the recent flurry of images.

A spokesperson for the eSafety office says it has “several reports” of Grok being used to generate sexual images since late last year. The office says it is assessing images of adults that were submitted to it, while some images of young people did not meet the country’s legal definition of child sexual exploitation material. “eSafety remains concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved,” the spokesperson says.

On Tuesday, the UK government officially called for X to take action against the imagery. “X needs to deal with this urgently,” technology minister Liz Kendall said in a statement, which followed communications regulator Ofcom contacting X on Monday. “What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society.”

The post Grok Is Pushing AI ‘Undressing’ Mainstream appeared first on Wired.

DNYUZ © 2025