There’s a centuries-old expression that “a lie can travel halfway around the world while the truth is still putting on its shoes.” Today, a realistic deepfake — an A.I.-generated video that shows someone doing or saying something they never did — can circle the globe and land in the phones of millions while the truth is still stuck on a landline. That’s why it is urgent for Congress to immediately pass laws to protect Americans by preventing their likenesses from being used to do harm. I learned that lesson in a visceral way over the last month when a fake video of me — opining on, of all things, the actress Sydney Sweeney’s jeans — went viral.
On July 30, Senator Marsha Blackburn and I led a Senate Judiciary subcommittee hearing on data privacy. We’ve both been leaders in the tech and privacy space and have the legislative scars to show for it. The hearing featured a wide-reaching discussion with five experts about the need for a strong federal data privacy law. It was cordial and even-keeled, no partisan flare-ups. So I was surprised later that week when I noticed a clip of me from that hearing circulating widely on X, to the tune of more than a million views. I clicked to see what was getting so much attention.
That’s when I heard my voice — but certainly not me — spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney. The A.I. deepfake featured me using the phrase “perfect titties” and lamenting that Democrats were “too fat to wear jeans or too ugly to go outside.” Though I could immediately tell that someone had used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real.
As anyone would, I wanted the video taken down or at least labeled “digitally altered content.” It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real. Studies have shown that people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.
X refused to take it down or label it, even though its policy says users are prohibited from sharing “inauthentic content on X that may deceive people,” including “manipulated or out-of-context media that may result in widespread confusion on public issues.” As the video spread to other platforms, TikTok took it down, and Meta labeled it as A.I. However, X’s response was that I should try to get a “community note” to say it was a fake — something the company would not help add.
For years I have been working to address a growing problem: Americans have extremely limited options for getting unauthorized deepfakes taken down. But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now. Why should tech companies’ profits rule over our rights to our own images and voices? Why do their shareholders and C.E.O.s get to make more money with the spread of viral content at the expense of our privacy and reputations? And why are there no consequences for the people who make the unauthorized deepfakes and spread the lies?
This particular video does not in any way represent the gravest threat posed by deepfakes. In July it was revealed that an impostor had used A.I. to pretend to be Secretary of State Marco Rubio and contacted at least three foreign ministers, a member of Congress and a governor. And this technology can turn the lives of just about anyone completely upside down. Last year someone used A.I. to clone the voice of a high school principal in Maryland and create audio of him making racist and antisemitic comments. By the time the audio was proved to be fake, the principal had already been placed on administrative leave, and families and students were left deeply hurt.
There is no way to quantify the chaos that could take place without legal checks. Imagine a deepfake of a bank C.E.O. that triggers a bank run, a deepfake of an influencer telling children to use drugs or a deepfake of a U.S. president starting a war that triggers attacks on our troops. The possibilities are endless. With A.I., the technology has gotten ahead of the law, and we can’t let it go any further without rules of the road.
As complicated as this technology is, some solutions are within reach. This year, President Trump signed the Take It Down Act, which Senator Ted Cruz and I championed to create legal protections for victims whose intimate images, including deepfakes, are shared without their consent. This law addresses the rise in cases of predators using A.I. tools to create nude images of victims to humiliate or extort them. We know the consequences can be deadly; at least 20 children have died by suicide in recent years because of the threat of explicit images being shared without their consent.
That bill was only the first step. That is why I am again working across the aisle on a bill to give all Americans more control over how deepfakes of our voices and visual likenesses are used. The proposed bipartisan No Fakes Act — cosponsored by Senators Chris Coons and Thom Tillis, Ms. Blackburn and me — would give people the right to demand that social media companies remove deepfakes of their voice and likeness while making exceptions for speech protected by the First Amendment.
The United States is not alone in rising to this challenge. The European Union’s A.I. Act, adopted in 2024, mandates that A.I.-generated content be clearly labeled and watermarked. And in Denmark, legislation is being considered to give all citizens copyright over their faces and voices, forcing platforms to remove unauthorized deepfakes just as they would pull down copyrighted music.
In the United States and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.
We are clearly at just the tip of the iceberg. Deepfakes like the one made of me in that hearing are going to become more common, not less — and harder for anyone to identify as A.I. The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren’t going to stop at Sydney Sweeney’s jeans.
We can love the technology, and we can use the technology, but we can’t cede all the power over our own images and our privacy. It is time for members of Congress to stand up for their constituents, stop currying favor with the tech companies and set the record straight. In a democracy, we do that by enacting laws. And it is long past time to pass one.
Amy Klobuchar, a Democrat, is a U.S. senator from Minnesota.