Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. We’re publishing these editions both as stories on Time.com and as emails. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?
What to Know: The Deepfake Crisis on X
A worrying trend— In the past few weeks, many tech leaders have made bold predictions about what AI will achieve in 2026, from mastering the field of biology to surpassing human intelligence outright. But in 2026’s first week, the most visible use of AI has been X users employing Grok to digitally disrobe women.
Elon Musk’s platform X is currently flooded with nonconsensual AI-created images, requested by users, of unclothed or scantily-clad women, men and children, sometimes in sexual positions. An analyst working with Wired gathered more than 15,000 sexualized AI-generated images created over a two-hour period on December 31.
Musk introduced Grok’s “Spicy Mode” for generating adult content last summer, and then rolled out an image editing feature for users last month, which has precipitated this crisis. X’s Safety account says it prohibits illegal content, including Child Sexual Abuse Material (CSAM). And in some instances, Grok later took the images down and “apologized” for creating them.
But the platform remains rife with abuse. Ashley St. Clair, the mother of one of Musk’s children, told NBC News this week that Grok has produced “countless” explicit images of her, including some based on photos of her when she was 14.
Government response— The fact that what was once a mainstream social media platform is now perhaps the biggest digital spreader of nonconsensual explicit imagery has put Musk in the crosshairs of governments across the world. X is now being investigated by authorities in India, France, Malaysia, and elsewhere in Europe and beyond. The U.K.’s tech secretary called the trend “absolutely appalling.” A request for comment sent to X’s press email address did not immediately receive a response.
In the U.S., an incoming law may force X to tighten its protections against this sort of image sharing. The Take It Down Act, which passed last year and takes effect in May, criminalizes the publication of nonconsensual intimate images and requires platforms to take down flagged instances within 48 hours.
Fighting back— It is as yet unclear how much of a deterrent the Take It Down Act will be, as it places a heavy onus on individuals to report violations. Elliston Berry, a 16-year-old deepfake victim whose activism inspired the law, writes to TIME that this moment should be a wake-up call both for tech leaders and young social media users. “We have to be willing to get involved and report incidents in order to further stop this targeted violation. We must not be afraid or ashamed if we find ourselves a victim,” she says. “We are looking to Elon Musk to take the first initiatives to make this a top priority to protect X users.”
What We’re Reading
“Data Centers are Lousy for the Planet. Should We Move Them to Space?” by Jeffrey Kluger in TIME
My colleague Jeff, one of the foremost space experts in the world, just published a feature on the efforts to build data centers in space. Orbital data centers could theoretically mitigate many of the problems of their earthbound counterparts, including their power and water usage and heat generation. But launching them into orbit is massively expensive, and risks lurk above the atmosphere.
AI in Action
CES 2026, one of the largest tech trade shows in the world, is currently unfolding in Las Vegas. Unsurprisingly, the conference is full of AI-related products, including an eerie new humanoid robot model from Boston Dynamics with Gemini intelligence; Razer’s Project Ava, an anime hologram friend in a jar; and a robot from LG designed to unload your dishwasher and fold your laundry.
Nvidia also had a big week at the conference, unveiling its new Vera Rubin chip, which is designed to do more computing with less power.
Who to Know: Paul Kedrosky
Paul Kedrosky, an investor and research fellow at MIT, has carved out a position as one of the world’s leading thinkers on AI’s potential impacts on labor and the economy. One of his key points is that AI is both a legitimately transformative technology and also massively overhyped. “We had a dramatic bubble during the global financial crisis that nearly took down the global economy. But that doesn’t mean I think people should stop living in houses,” he told me in late November.
But Kedrosky is deeply worried about the financial structures underpinning the industry. He sees AI taking away capital from other areas of investment, including manufacturing, and the industry deploying questionable circular financing. In fact, he sees all the hallmarks of a classic bubble rolled into one: overhyped technology, loose credit, ambitious real estate purchases, and euphoric government messaging. “This is literally the first moment in modern financial history that has combined all the raw ingredients of every other bubble in one piece,” he says.
The post Grok’s deepfake crisis, explained appeared first on TIME.