Scrolling through the Sora app can feel a bit like entering a real-life multiverse.
Michael Jackson performs stand-up; the alien from the “Predator” movies flips burgers at McDonald’s; a home security camera captures a moose crashing through a glass door; Queen Elizabeth dives from the top of a table at a pub.
Such improbable realities, fantastical futures and absurdist clips are the mainstay of Sora, a new short-form video app from ChatGPT maker OpenAI.
The continuous stream of hyperreal, short-form videos made by artificial intelligence is mind-bending and mesmerizing at first. But it quickly instills the need to second-guess whether every piece of content is real or fake.
“The biggest risk with Sora is that it makes plausible deniability impossible to overcome, and that it erodes confidence in our ability to discern authentic from synthetic,” said Sam Gregory, an expert on deepfakes and executive director at WITNESS, a human rights organization. “Individual fakes matter, but the real damage is a fog of doubt settling over everything we see.”
All videos on the Sora app are entirely AI-generated, and there is no option to share real footage. But from the first week of its launch, users were sharing their Sora videos across all types of social media.
Less than a week after its Sept. 30 launch, the Sora app crossed a million downloads, outpacing the initial growth of ChatGPT, and reached the top of the App Store in the U.S. For now, the app is available only to iOS users in the United States, and access requires an invitation code.
To use the app, people have to scan their faces and read out three numbers displayed on screen for the system to capture a voice signature. Once that’s done, users can type a custom text prompt and create hyperreal 10-second videos complete with background sound and dialogue.
Through a feature called “Cameos,” users can superimpose their face or a friend’s face into any existing video. Though all outputs carry a visible watermark, numerous websites now offer watermark removal for Sora videos.
At launch, OpenAI took a lax approach to enforcing copyright restrictions and allowed the re-creation of copyrighted material by default, unless the owners opted out.
Users began generating AI videos featuring characters from titles such as “SpongeBob SquarePants,” “South Park” and “Breaking Bad,” along with videos styled after the game show “The Price Is Right” and the ’90s sitcom “Friends.”
Then came re-creations of dead celebrities, including Tupac Shakur roaming the streets of Cuba, Hitler facing off with Michael Jackson, and remixes of the Rev. Martin Luther King Jr. delivering his iconic “I Have a Dream” speech, but calling for the release of the disgraced rapper Diddy.
“Please, just stop sending me AI videos of Dad,” Zelda Williams, daughter of late comedian Robin Williams, posted on Instagram. “You’re not making art, you’re making disgusting, over-processed hot dogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else’s throat, hoping they’ll give you a little thumbs up and like it. Gross.”
Other Sora re-creations of dead celebrities, including Kobe Bryant, Stephen Hawking and President Kennedy, have been cross-posted on social media websites, garnering millions of views.
Christina Gorski, director of communications at Fred Rogers Productions, said that Rogers’ family was “frustrated by the AI videos misrepresenting Mister Rogers being circulated online.”
Videos depicting Mister Rogers holding a gun, greeting the rapper Tupac and appearing in other satirical fake situations have been shared widely on Sora.
“The videos are in direct contradiction to the careful intentionality and adherence to core child development principles that Fred Rogers brought to every episode of Mister Rogers’ Neighborhood. We have contacted OpenAI to request that the voice and likeness of Mister Rogers be blocked for use on the Sora platform, and we would expect them and other AI platforms to respect personal identities in the future,” Gorski said in a statement to The Times.
Hollywood talent agencies and unions, including SAG-AFTRA, have started to accuse OpenAI of improper use of likenesses. The central tension boils down to control over the use of the likenesses of actors and licensed characters — and fair compensation for use in AI videos.
In the aftermath of Hollywood’s copyright concerns, OpenAI Chief Executive Sam Altman shared a blog post promising rights holders greater control over how their characters can be used in AI videos and saying the company is exploring ways to share revenue with them.
He also said that studios could now “opt in” to having their characters used in AI re-creations, a reversal of OpenAI’s original opt-out stance.
The future, according to Altman, is heading toward creating personalized content for an audience of a few — or an audience of one.
“Creativity could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase,” Altman wrote, calling this genre of engagement “interactive fan fiction.”
The estates of dead actors, however, are racing to protect their clients’ likenesses in the age of AI.
CMG Worldwide, which represents the estates of deceased celebrities, struck a partnership with deepfake detection company Loti AI to protect CMG’s rosters of actors and estates from unauthorized digital use.
Loti AI will constantly monitor for AI impersonations of 20 personalities represented by CMG, including Burt Reynolds, Christopher Reeve, Mark Twain and Rosa Parks.
“Since the launch of Sora 2, for example, our signups have increased roughly 30x as people search for ways to regain control over their digital likeness,” said Luke Arrigoni, co-founder and CEO of Loti AI.
Loti AI said it has removed thousands of instances of unauthorized content since January, as new AI tools make it easier than ever to create and spread deepfakes.
After numerous “disrespectful depictions” of Martin Luther King Jr., OpenAI said it was pausing the generation of videos in the civil rights icon’s image on Sora, at the request of King’s estate. While there are strong free-speech interests in depicting historical figures, public figures and their families should ultimately have control over how their likeness is used, OpenAI said in a post.
Now, authorized representatives or estate owners can request that their likenesses not be used in Sora cameos.
As legal pressure mounts, Sora has grown stricter about when it will allow the re-creation of copyrighted characters. Prompts involving Disney characters and other protected material now trigger content policy violation warnings, and users who chafe at the restrictions have started creating video memes about the warnings themselves.
There’s a growing virality to what has been dubbed “AI slop.”
Last week’s crop featured Ring camera footage of a grandmother chasing a crocodile at the door and a series of “fat olympics” videos in which obese people compete in athletic events such as pole vault, swimming and track.
Dedicated slop factories have turned the engagement into a money spinner, generating a constant stream of videos that are hard to look away from. One pithy tech commentator dubbed it “Cocomelon for adults.”
Even with increasing protections for celebrity likenesses, critics warn that the casual “likeness appropriation” of ordinary people and situations could sow public confusion, fuel misinformation and erode public trust.
Meanwhile, as bad actors and even some governments use the technology for propaganda and to push particular political views, people in power can hide behind the flood of fake content by claiming that even real evidence was generated by AI, said Gregory of WITNESS.
“I’m concerned about the ability to fabricate protest footage, stage false atrocities, or insert real people with words placed in their mouths into compromising scenarios,” he said.