
Amy Klobuchar: I Knew Deepfakes Were a Problem. Then I Saw One of Myself.

August 20, 2025

There’s a centuries-old expression that “a lie can travel halfway around the world while the truth is still putting on its shoes.” Today, a realistic deepfake — an A.I.-generated video that shows someone doing or saying something they never did — can circle the globe and land in the phones of millions while the truth is still stuck on a landline. That’s why it is urgent for Congress to immediately pass new laws to protect Americans by preventing their likenesses from being used to do harm. I learned that lesson in a visceral way over the last month when a fake video of me — opining on, of all things, the actress Sydney Sweeney’s jeans — went viral.

On July 30, Senator Marsha Blackburn and I led a Senate Judiciary subcommittee hearing on data privacy. We’ve both been leaders in the tech and privacy space and have the legislative scars to show for it. The hearing featured a wide-reaching discussion with five experts about the need for a strong federal data privacy law. It was cordial and even-keeled, no partisan flare-ups. So I was surprised later that week when I noticed a clip of me from that hearing circulating widely on X, to the tune of more than a million views. I clicked to see what was getting so much attention.

That’s when I heard my voice — but certainly not me — spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney. The A.I. deepfake featured me using the phrase “perfect titties” and lamenting that Democrats were “too fat to wear jeans or too ugly to go outside.” Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real.

As anyone would, I wanted the video taken down or at least labeled “digitally altered content.” It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real. Studies have shown that people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.

X refused to take it down or label it, even though its own policy says users are prohibited from sharing “inauthentic content on X that may deceive people,” including “manipulated, or out-of-context media that may result in widespread confusion on public issues.” As the video spread to other platforms, TikTok took it down and Meta labeled it as A.I. However, X’s response was that I should try to get a “Community Note” to say it was a fake, something the company would not help add.

For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down. But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now. Why should tech companies’ profits rule over our rights to our own images and voices? Why do their shareholders and C.E.O.s get to make more money with the spread of viral content at the expense of our privacy and reputations? And why are there no consequences for the people who actually make the unauthorized deepfakes and spread the lies?



The post Amy Klobuchar: I Knew Deepfakes Were a Problem. Then I Saw One of Myself. appeared first on New York Times.
