This week, we — the two authors of this article — spent hours scrolling through a feed of short-form videos that featured ourselves in different scenarios.
In one hyper-realistic nine-second video, we were shown skydiving (and grinning) with pizzas as parachutes.
In another, Eli hit a game-winning home run in a baseball stadium full of robots.
In yet another, Mike was caught in a “Matrix”-style duel against Ronald McDonald, using cheeseburgers as weapons.
“I’m genuinely blown away,” Eli messaged Mike about the cheeseburger video, before liking the content. Mike kept sending videos — which included him ballroom dancing with his dog and sitting on a throne of rats — to other New York Times colleagues (all of whom found the clips slightly disturbing).
The app we used was not TikTok, Instagram Reels or YouTube Shorts, the current leaders of short-form video. It was Sora, a smartphone app made by OpenAI that lets people create such videos entirely from artificial intelligence. Sora’s underlying technology debuted last year, but its latest version — which is faster, more powerful and can incorporate your likeness if you upload images of your face — was released on an invitation-only basis this week.
After less than a day with the app, it was clear to us that Sora is more than an A.I. video-generation app. It is, in effect, a social network in disguise: a clone of TikTok down to its user interface, algorithmic video suggestions and ability to follow and interact with friends. And because the powerful A.I. model underneath makes clips so simple to produce, people can generate virtually as many A.I. videos as they want.
It was also disconcerting.
Almost instantly, Sora’s early-access users were spinning up videos made with copyrighted material plucked from pop culture. (We saw more “Rick and Morty” and Pikachu videos than we would have liked.) And when Mike posted one Sora video to his personal Instagram page, a half-dozen friends asked if it was him in the video, raising questions about whether we might lose touch with reality.
Worse still, being able to quickly and easily generate video likenesses of people could pour gasoline on disinformation, creating clips of fake events that look so real that they might spur people into real-world action. While some of this was already possible with other A.I. video generators, Sora could turbocharge it.
It is early days, and there is no guarantee Sora will have legs. But OpenAI appears to have created the type of product that companies like Meta and X have sought to build: a way to bring A.I. to the masses that people can share, enticing one another to create posts and regularly use their apps and services.
The race to create similar apps is heating up. Last week, Meta released Vibes, a social media feed in its dedicated A.I. app that uses an A.I. video generator from the start-up Midjourney. Google offers Veo, its own version of a similar product.
With the social internet moving from sharing text to posting photos and now to watching billions of hours of video, tech executives say that A.I. video tools will be formative for the next generation of social media.
“We felt that the best way to bring this technology to the masses is through something that is somewhat social,” Rohan Sahai, OpenAI’s product lead for Sora, said in an interview. “When you have such drastically shifting technology — a new form factor, even — our goal and philosophy as a company is to get it out there.”
(The New York Times has sued OpenAI and Microsoft claiming copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
The new Sora looks just like TikTok, down to the “For You” name of its social feed. Users can scan their faces to appear as avatars in the videos, or use images of other people, such as OpenAI’s chief executive, Sam Altman. OpenAI calls the feature “Cameos,” echoing Cameo, the popular app where people buy custom videos from their favorite celebrities.
Some safety experts said that Sora, and the Cameos feature in particular, could enable new kinds of misinformation and scams. One Sora video that went viral showed an artificial Mr. Altman stealing a graphics processing unit from a department store, as if it had been captured by a security camera.
“It makes it really easy to create a believable deepfake in a way that we haven’t quite seen yet,” said Rachel Tobac, the chief executive of SocialProof Security, a cybersecurity start-up.
Sora has restrictions on some sexual and copyrighted content. You can make a video with the characters of the television show “South Park,” for instance, but not Batman or Superman figures. Rights holders must opt out of their work being used in Sora through a copyright disputes form on a case-by-case basis. Public figures may choose to grant permission for their likenesses to be used by Sora.
“We’ll work with rights holders to block characters from Sora at their request and respond to takedown requests,” Varun Shetty, OpenAI’s head of media partnerships, said in a statement.
As Sora clips began circulating on X, TikTok and other social platforms this week, they were received with surprise, delight and disgust. One fear is that Sora will add to what has become known as “slop,” a disparaging term for the fast-growing number of nonsensical A.I.-generated videos flooding social networks.
In July, an A.I. clip of a baby piloting a 747 was one of the most watched videos on YouTube. In recent weeks, an A.I.-generated video of an elderly woman holding a boulder and crashing through a glass bridge over a canyon went viral on Facebook and X, spawning dozens of look-alike clips.
Mr. Sahai, the OpenAI product leader, said that just as other social networks had democratized tools for creators, a significant amount of content, across the quality spectrum, would be produced with Sora, but the highest quality work would rise to the top. He noted that some jokes shared between friends might seem like nonsense to outsiders, but be fun and relevant to small groups.
“One man’s slop is another man’s gold,” Mr. Sahai said.
Hollywood has spent the past 36 hours worrying that Sora could make it simple for users to rip off performers’ likenesses and work with no compensation. A day after the app’s release, executives at the talent agency WME sent a memo to agents saying they would fight to defend their clients’ work, according to a copy viewed by The Times.
“There is a strong need for real protections for artists and creatives as they encounter A.I. models using their intellectual property, as well as their name, image and likeness,” the memo said. WME said it told OpenAI that all of its clients were opting out of having their likenesses or intellectual property included in Sora’s videos.
Still, Sora’s broad appeal was immediately clear. Neither of us knows the first thing about creating videos, yet all it took was a kernel of an idea, two or three minutes of processing time and a boatload of computing power to spit out a video of Mike arm wrestling Eli for the title of “best tech reporter.” (Eli won.)
Not everyone was charmed. After Mike showed his partner an eerily realistic Sora video of himself playing the psychopathic character Anton Chigurh from the 2007 film adaptation of the book “No Country for Old Men,” she had a simple request.
“Please never, ever show me this kind of video again,” she said.
Nicole Sperling contributed reporting from Los Angeles.
Mike Isaac is The Times’s Silicon Valley correspondent, based in San Francisco. He covers the world’s most consequential tech companies, and how they shape culture both online and offline.
Eli Tan covers the technology industry for The Times from San Francisco.