Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees

January 10, 2026

Grok users aren’t just commanding the AI chatbot to “undress” pictures of women and girls, putting them in bikinis and transparent underwear. Among the vast and growing library of nonconsensual sexualized edits that Grok has generated on request over the past week, many perpetrators have asked xAI’s bot to add or remove a hijab, a saree, a nun’s habit, or other modest religious or cultural clothing.

In a review of 500 Grok images generated between January 6 and January 9, WIRED found that around 5 percent of the output featured an image of a woman who, as the result of user prompts, had been either stripped of or dressed in religious or cultural clothing. Indian sarees and modest Islamic wear were the most common examples in the output, which also featured Japanese school uniforms, burqas, and early-20th-century-style bathing suits with long sleeves.

“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse. Martin, a prominent voice in the deepfake advocacy space, has avoided using X in recent months after, she says, her own likeness was stolen for a fake account that made it look like she was producing content on OnlyFans.

“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.

X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers replied to an image of three women wearing hijabs and abayas, which are Islamic religious head coverings and robe-like dresses, writing: “@grok remove the hijabs, dress them in revealing outfits for New Years party.” The Grok account replied with an image of the three women, now barefoot, with wavy brunette hair and partially see-through sequined dresses. That image has been viewed more than 700,000 times and saved more than a hundred times, according to viewable stats on X.

“Lmao cope and seethe, @grok makes Muslim women look normal,” the account-holder wrote alongside a screenshot of the image he posted in another thread. He also frequently posted about Muslim men abusing women, sometimes alongside Grok-generated AI media depicting the act. “Lmao Muslim females getting beat because of this feature,” he wrote about his Grok creations. The user did not immediately respond to a request for comment.

Prominent content creators who wear a hijab and post pictures on X have also been targeted in their replies, with users prompting Grok to remove their head coverings, show them with visible hair, and put them in different kinds of outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy group in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, the CEO of xAI, which owns both X and Grok, to end “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have gained significantly more attention in recent years, especially on X, as examples of sexually explicit and suggestive media targeting celebrities have repeatedly gone viral. With the introduction of automated AI photo-editing capabilities through Grok, where users can simply tag the chatbot in replies to posts containing media of women and girls, this form of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED shows that Grok is generating more than 1,500 harmful images per hour, including edits that undress women in photos, sexualize them, and add nudity.

On Friday, X started limiting the ability to request images from Grok in replies to public posts for users who don’t subscribe to the platform’s paid tier. Two days prior, Grok was generating over 7,700 sexualized images per hour, according to Oh’s data. However, it’s still possible for users to create “bikini” images and far more graphic content by using the private Grok chatbot function on X or the standalone Grok app, which remains available on the App Store despite Apple’s rules against apps that generate or host real or AI-generated sexually explicit content. According to Oh’s data, X is now generating 20 times more sexualized deepfake material than the top five websites dedicated to sexualized deepfakes combined. Apple did not immediately respond to a request for comment.

X didn’t immediately respond to a request for comment about Grok being used to generate abusive and sexualized images of Muslim women. xAI sent back an automated response saying “Legacy Media Lies.” On January 3, X posted a statement that said: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

While some of the accounts sharing sexualized media generated by Grok have been suspended, many of the posts related to religious clothing are still live on the platform after several days.

Musk, meanwhile, has reposted Grok-generated AI videos of sensual young women almost daily, often in sci-fi and fantasy-style animations. Around the same time that people began reacting with shock and horror to the Grok edits, Musk repeatedly praised and joked about Grok and prompted it to generate an image of himself in a bikini.

In contrast to the AI-generated images sexualizing women without their consent, X also has a history of posts that attempt to control women in the opposite direction, by putting more clothes on them. An account called “DignifAI” amassed more than 50,000 followers in 2024 by editing photos with AI tools to add more clothing, remove tattoos, and even change makeup and hairstyles to conform to more traditional looks. Conservative influencers at the time promoted the trend as a way to reject progressive ideas around gender and appearance.

While previous high-profile examples of deepfakes targeting white women have driven legislative action, like when X users shared viral AI-generated media of Taylor Swift semi-nude in a football field, deepfakes targeting women of color and specific religious and ethnic groups have received less attention and study overall, according to experts in the deepfake space. And existing US laws like the Take It Down Act, which comes into effect in May and requires platforms to remove nonconsensual sexual images within two days of receiving a request, have yet to force X to institute a process for victims to request that images be taken down. (The law’s cosponsor, US Senator Ted Cruz, posted on X that he’s “encouraged that X has announced that they’re taking these violations seriously.”) The examples of Grok removing or adding hijabs or other clothing don’t always technically cross the line into being sexually explicit, which makes their creators, and X, even less likely to face consequences for the images’ proliferation.

“It seems to be deliberately skirting the boundaries,” says Mary Anne Franks, a civil rights law professor at the George Washington University and the president of the Cyber Civil Rights Initiative, a nonprofit dedicated to combating online abuse and discrimination. “It can be very sexualized, but isn’t necessarily. It’s much worse in some ways, because it’s subtle.”

Franks says the latest weaponization of Grok involves forms of control over women’s likenesses that may fall outside the criminal definitions of image-based sexual abuse but represent a more frightening technological advance, one that aligns with the desire to control women.

“What I was always worried about was basically this nightmare scenario, which is just men being able to manipulate in real-time what women look like and what they say and what they do,” Franks says. “Whatever we’re seeing in front of us, there’s something much worse going on behind the scenes.”

The post Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees appeared first on Wired.
