DNYUZ
OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real

October 3, 2025

In its first three days, users of a new app from OpenAI deployed artificial intelligence to create strikingly realistic videos of ballot fraud, immigration arrests, protests, crimes and attacks on city streets — none of which took place.

The app, called Sora, requires just a text prompt to create almost any footage a user can dream up. Users can also upload images of themselves, allowing their likeness and voice to be incorporated into imaginary scenes. The app can integrate certain fictional characters, company logos and even deceased celebrities.

Sora — as well as Google’s Veo 3 and other tools like it — could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about A.I.’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.

Increasingly realistic videos are more likely to lead to consequences in the real world by exacerbating conflicts, defrauding consumers, swinging elections or framing people for crimes they did not commit, experts said.

“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content,” said Hany Farid, a professor of computer science at the University of California, Berkeley, and a co-founder of GetReal Security. “I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”

OpenAI has said it released the app after extensive safety testing, and experts noted that the company had made an effort to include guardrails.

“Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse,” the company said in a statement in response to questions about the concerns.

In tests by The New York Times, the app refused to generate imagery of famous people who had not given their permission and declined prompts that asked for graphic violence. It also denied some prompts asking for political content.

“Sora 2’s ability to generate hyperrealistic video and audio raises important concerns around likeness, misuse and deception,” OpenAI wrote in a document accompanying the app’s debut. “As noted above, we are taking a thoughtful and iterative approach in deployment to minimize these potential risks.”

(The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

The safeguards, however, were not foolproof.

Sora, which is currently accessible only through an invitation from an existing user, does not require users to verify their accounts — meaning they may be able to sign up with a name and profile image that is not theirs. (To create an A.I. likeness, users must upload a video of themselves using the app. In tests by The Times, Sora rejected attempts to make A.I. likenesses using videos of famous people.) The app will generate content involving children without issue, as well as content featuring long-dead public figures such as the Rev. Dr. Martin Luther King Jr. and Michael Jackson.

The app would not produce videos of President Trump or other world leaders. But when asked to create a political rally with attendees wearing “blue and holding signs about rights and freedoms,” Sora produced a video featuring the unmistakable voice of former President Barack Obama.

Until recently, videos were reasonably reliable as evidence of actual events, even after it became easy to edit photographs and text in realistic ways. Sora’s high-quality video, however, raises the risk that viewers will lose all trust in what they see, experts said. Sora videos feature a moving watermark identifying them as A.I. creations, but experts said such marks could be edited out with some effort.

“It was somewhat hard to fake, and now that final bastion is dying,” said Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence. “There is almost no digital content that can be used to prove that anything in particular happened.”

Such an effect is known as the liar’s dividend: that increasingly high-caliber A.I. videos will allow people to dismiss authentic content as fake.

Imagery presented in a fast-moving scroll, as it is on Sora, is conducive to quick impressions but not rigorous fact-checking, experts said. They said the app was capable of generating videos that could spread propaganda and present sham evidence that lent credence to conspiracy theories, implicated innocent people in crimes or inflamed volatile situations.

Although the app refused to create images of violence, it willingly depicted convenience store robberies and home intrusions captured on doorbell cameras. A Sora developer posted a video from the app showing Sam Altman, the chief executive of OpenAI, shoplifting from Target.

It also created videos of bombs exploding on city streets and other fake images of war — content that is considered highly sensitive for its potential to mislead the public about global conflicts. Fake and outdated footage has circulated on social media in all recent wars, but the app raises the prospect that such content could be tailor-made and delivered by perceptive algorithms to receptive audiences.

“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” said Kristian J. Hammond, a professor who runs the Center for Advancing Safety of Machine Intelligence at Northwestern University. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.”

Dr. Farid, the Berkeley professor, said Sora was “part of a continuum” that had only accelerated since Google unveiled its Veo 3 video generator in May.

Even he, an expert whose company is devoted to spotting fabricated images, now struggles at first glance to distinguish real from fake, Dr. Farid said.

“A year ago, more or less, when I would look at it, I would know, and then I would run my analysis to confirm my visual analysis,” he said. “And I could do that because I look at these things all day long and I sort of knew where the artifacts were. I can’t do that anymore.”

Tiffany Hsu reports on the information ecosystem, including foreign influence, political speech and disinformation.

Stuart A. Thompson writes for The Times about online influence, including the people, places and institutions that shape the information we all consume.

Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul.

The post OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real appeared first on New York Times.

Copyright © 2025.
