DNYUZ

Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis

December 23, 2025

Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also advising others on how to use the generative AI tools to strip the clothes off women in photos and make them appear to be wearing bikinis.

Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips for how to get Gemini, Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI-generated, but one request stood out.

A user posted a photo of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfill the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake.

“Reddit’s sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “don’t break the site” rule.

As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, designed for users to upload real photos of people and request for them to be undressed using generative AI.

With xAI’s Grok as a notable exception, most mainstream chatbots don’t allow the generation of NSFW images in AI outputs. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations.

In November, Google released Nano Banana Pro, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images.

As these tools improve, the likenesses produced when users manage to subvert guardrails may become more realistic.

In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English.

When asked about users generating bikini deepfakes using Gemini, a spokesperson for Google said the company has “clear policies that prohibit the use of [its] AI tools to generate sexually explicit content.” The spokesperson claims Google’s tools are continually improving at “reflecting” what’s laid out in its AI policies.

In response to WIRED’s request for comment about users being able to generate bikini deepfakes with ChatGPT, a spokesperson for OpenAI claims the company loosened some ChatGPT guardrails this year around adult bodies in nonsexual situations. The spokesperson also highlights OpenAI’s usage policy, stating that ChatGPT users are prohibited from altering someone else’s likeness without consent and that the company takes action against users generating explicit deepfakes, including account bans.

Online discussions about generating NSFW images of women remain active. This month, a user in the r/GeminiAI subreddit offered instructions to another user on how to change women’s outfits in a photo into bikini swimwear. (Reddit deleted this comment when we pointed it out to them.)

Corynne McSherry, a legal director at the Electronic Frontier Foundation, sees “abusively sexualized images” as one of AI image generators’ core risks.

She notes that these image tools can be used for purposes beyond deepfakes and that focusing on how the tools are used is critical—as is “holding people and corporations accountable” when potential harm is caused.

The post Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis appeared first on Wired.

DNYUZ © 2025
