The State-Led Crackdown on Grok and xAI Has Begun

January 27, 2026

At least 37 attorneys general for US states and territories are taking action against xAI after people used its chatbot, Grok, to generate a flood of sexualized images earlier this year.

On Friday, a bipartisan group of 35 attorneys general published an open letter to xAI demanding it “immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images].” (In addition to those who signed the letter, attorneys general from California and Florida tell WIRED they have also taken action.)

The letter comes amid an international wave of regulator attention on Grok users creating intimate deepfake images of people without their consent, as well as sexualized images of children.

A recent report from the Center for Countering Digital Hate estimates that during an 11-day period starting on December 29, Grok’s account on X generated around 3 million photorealistic sexualized images, including around 23,000 sexualized images of children. In addition to using Grok’s X account to create these photos, people were generating far more explicit videos using the Grok Imagine model available on the Grok website, WIRED previously reported. And unlike X, the Grok site did not appear to require any sort of age verification before allowing people to view content.

X did not respond to a request for comment. xAI responded to WIRED’s questions with, “Legacy Media Lies.”

The open letter from attorneys general, which cited WIRED’s reporting, said that Grok’s ability to create nonconsensual sexual imagery has been used as a “selling point” by xAI.

While xAI claims it has stopped Grok’s X account from undressing people, the letter stated the company hasn’t removed nonconsensually created content, “despite the fact that you will soon be obligated to do so by federal law.” It also calls on xAI to remove Grok’s ability to depict people in revealing clothing or suggestive poses, suspend offending users and report them to authorities, and give users the ability to control whether their content can be edited by Grok.

“In addition to investigations and prosecutions in this area, we have called on payment processors and search engines to mitigate the creation of nonconsensual intimate images and advocated for legislation to prevent AI-powered child exploitation,” the letter says.

The inundation of nonconsensual intimate deepfakes on X and Grok.com comes at a time when half of US states have already passed age verification laws, requiring people looking at pornography to provide proof that they are not minors.

WIRED contacted the offices of attorneys general in the 25 states that have passed age verification laws to ask how they were responding to the influx of nonconsensual sexualized images on Grok and X. WIRED also reached out to the sponsors of the age verification bills in each of those states to ask if they felt X or xAI should be doing more to prevent children from viewing explicit content on X and Grok.com.

Some of the AGs WIRED reached out to say they are investigating or in discussions with either X or xAI specifically about concerns that Grok was being used to generate child sexual abuse material, or CSAM. According to the child advocacy group Enough Abuse, 45 states prohibit AI-generated or computer-edited CSAM.

Richie Taylor, communications director for Arizona attorney general Kris Mayes, tells WIRED that Mayes opened an investigation into Grok on January 15. In a news release about the investigation, which cited WIRED’s reporting, Mayes said the reports about the imagery being created are “deeply disturbing.” Mayes was one of the signatories on Friday’s joint letter.

“Technology companies do not get a free pass to create powerful artificial intelligence tools and then look the other way when those programs are used to create child sexual abuse material. My office is opening an investigation to determine whether Arizona law has been violated,” she said in the news release, calling on Arizonans who believe they’ve been victimized by Grok to contact her office.

California attorney general Rob Bonta sent a cease and desist letter to Elon Musk on January 16, demanding that xAI take immediate action to stop the creation and distribution of CSAM or nonconsensual intimate images, including via both the Grok account on X and the standalone Grok app. Elissa Perez, the press secretary for the California Department of Justice, says that xAI had formally responded to the AG’s letter, and that now “we have reason to believe, subject to additional verification, that Grok is not currently being used to generate any sexual images of children or images that violate California law.” California’s investigation is still ongoing.

Jae Williams, the press secretary for the Florida Attorney General’s Office, tells WIRED that the office is “currently in discussions with X to ensure that protections for children are in place and prevent its platform from being used to generate CSAM.”

Stephanie Whitaker, director of communications for the Missouri Attorney General’s Office, says the state “has a duty to ensure X and other social media companies comply with state law. Companies profiting off of an oasis for criminal activity may find themselves culpable.”

The intersection of child safety and AI has been an area of ongoing interest for state lawmakers. In early December, 42 attorneys general cosigned a letter to AI companies, including xAI, asking that the companies “adopt additional safeguards to protect children.” On January 14, a working group with representatives from the offices of many of the same state AGs met to discuss emerging issues related to AI.

During the meeting, North Carolina attorney general Jeff Jackson said AI-generated CSAM “should be an early priority.” Jackson was one of the signatories on the Friday letter.

Arizona state representative Nick Kupper, a Republican, tells WIRED he recently filed a bill that would require sites posting explicit content—including AI-generated imagery—to verify the ages and consent of performers before doing so; the state criminalized AI-generated CSAM last year.

In a statement to WIRED, Georgia Senate majority leader Jason Anavitarte says the state is “actively working to put additional protections in place alongside existing statutes” with respect to nonconsensual sexual imagery generated by AI.

“Legislation will be introduced this session so that obscene material, including AI-generated sexual material involving minors, can be criminally prosecuted in Georgia, with a specific focus on those who create the imagery using AI tools,” he says.

Meanwhile, states that have age verification laws are grappling with how they might apply to sites like X, which were not the intended target.

Almost every state with age verification has followed the lead of Louisiana—which enacted its law in 2022—requiring more than one third of the content on a given site to be considered pornographic or harmful to minors before the restrictions kick in.

But how does one decide what counts as a single piece of content, or whether something is pornographic?

“It’s mostly a counting question in terms of ‘does the law apply,’” Alan Butler, the executive director of the Electronic Privacy Information Center, previously told WIRED.

Kupper, who sponsored Arizona’s age verification law, tells WIRED he stuck to the one-third threshold because it’s been previously upheld by the United States Supreme Court. He says he’s heard estimates that 15 to 25 percent of accounts on X are at least somewhat pornographic, but he’s not sure how accurate that is, nor does he think it’s “feasible” to analyze such ratios on every website. X did not respond to questions about what percentage of the platform it considers pornographic.

“I don’t think you should have a threshold. It should be: Do you have pornographic material on your site? OK. I’m not saying you have to age verify for your entire site, but for any of the pornographic material, you should have to age verify,” Kupper says. Posts on X that are marked as “age-restricted adult content” can only be viewed by users who are logged in and over the age of 18, though X generally expects users uploading restricted content to mark it as explicit themselves. WIRED wasn’t able to find similar restrictions for pornographic links on the Grok website.

Kupper says in Arizona’s case, individuals would need to bring forth a complaint—for example, if their child was harmed by pornographic material on X—and the court would then have to make X prove that less than one third of its content is pornographic.

Nebraska state senator Dave Murman, who spearheaded age verification legislation there, tells WIRED he isn’t sure about Grok’s independent site but that “X does not have at least 1/3 of its content sexually inappropriate or harmful to minors.” However, when asked if the state had measured that, he says it hadn’t—and isn’t aware of any state that had.

“While I would of course prefer a system where every single possible piece of pornographic content is behind an age gate, passing legislation to do so without implicating the valid free speech rights of social media sites seemed logistically impossible,” he says. “While I don’t know if there is a legislative solution to getting pornography off of social media sites like X, I do hope the company takes action.”

Pornhub, one of the biggest porn sites in the world, has blocked itself in most states with age verification laws, arguing that there are too many noncompliant sites and that people don’t want to hand their ID and personal information to a third-party service to have their ages verified. It will also block new UK users next week on account of the country’s age verification law, which took effect last July.

On Tuesday, Solomon Friedman, vice president of compliance for the private equity firm Ethical Capital Partners (ECP), which owns Pornhub’s parent company, Aylo, told WIRED both the methodology and scope of age verification legislation are “fatally flawed.”

He said Google Images, for example, has “thumbnails of every single porn image cached available online.” Friedman and Pornhub want Google, Apple, and Microsoft to enact device-based age verification so that people’s data can stay stored in their phones or laptops.

“That’s also the solution to adult content on quote non-porn sites and platforms,” Friedman said. “It can be used to filter either explicit tweets or posts on X or explicit use of AI chatbots.”

WIRED reached out to Google, Microsoft, and Apple about whether they would be open to device-based age verification but has not yet received a response.

The post The State-Led Crackdown on Grok and xAI Has Begun appeared first on Wired.
