Can You Believe the Documentary You’re Watching?

November 18, 2025

Like a surging viral outbreak, A.I.-generated video has suddenly become inescapable. It’s infiltrated our social feeds and wormed its way into political discourse. But documentarians have been bracing for impact since before most of us even knew what the technology could do.

Documentaries fundamentally traffic in issues of truth, transparency and trust. If they use so-called synthetic materials but present them as if they’re “real,” it’s not just a betrayal of the tacit contract between filmmaker and audience. The implications are far broader, and far more serious: a century of shared history is in jeopardy.

At a time when the idea of facts and shared reality is assaulted from every side, the turning point has arrived. The stakes couldn’t be higher. And we all need to pay attention.

WHISPERS THAT DOCUMENTARIANS were using materials created with generative A.I. started surfacing several years ago. “Roadrunner,” the 2021 film about Anthony Bourdain, set off controversy when it failed to disclose that several lines of his “voice-over” had been generated with software trained on existing samples. That one made the news. But in other productions, sometimes cloaked by nondisclosure agreements, more was going on than many audiences knew.

In 2023, a group of documentary producers formed the Archival Producers Alliance and published an open letter to their industry calling for greater transparency, listing ways generative A.I. had been used without disclosure.

You might be shocked by what they pointed out: artificially created historical voices, which lead audiences “to believe they are hearing authentic primary sources when they are not”; “A.I.-generated ‘historical’ images”; “fake newspaper articles”; and “nonexistent historical artifacts.”

In other words, you may have watched a documentary in the last few years and thought what you were seeing was real — but it wasn’t.

Of course we’re all aware that what we see in a video or a movie isn’t necessarily “real.” We know about C.G.I. and camera trickery and the ability to manipulate images. But until very recently, it took a fair amount of skill, or at least time and money, to make realistic fake videos. You would need the resources of a Hollywood studio, and even then it might look a little janky.

But with documentaries, there’s also a kind of social contract. We believe that what they show us happened, with some exceptions. Re-enactments have become more common in recent years, but filmmakers have developed a visual vocabulary for those staged scenes: they’re dreamy, a little blurry, usually faceless. You immediately know what you’re looking at, because it doesn’t look “real.”

These conventions exist to preserve our trust in what we’re viewing. If you’re watching historical images of a civil rights march or scenes from a family birthday party, you believe that a producer dug up that footage.

But finding and shooting footage is expensive and takes time. And that’s exactly what today’s documentarians are often not afforded — especially when they’re working on ripped-from-the-headlines films for the bottomless content pits that are streaming platforms. The temptation is clearly there to plug a prompt into A.I.; sometimes that’s even the mandate from on high. Who will know, right?

When OpenAI demonstrated the first version of its Sora video generator in early 2024, the company used “historical footage of California during the Gold Rush” as a demo prompt. (The New York Times sued OpenAI and Microsoft in 2023 for copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

Members of the Archival Producers Alliance, seasoned producers and directors, saw exactly how damaging this trend could be, not just for documentaries but also for the shared public record. In 2024, they announced a comprehensive set of best practices. The guiding principles are transparency and trust: The audience should know when any A.I. tool is used — perhaps to enhance a damaged photo, sharpen an audio clip or create a voice from written text and recordings.

But this may be even more important: The guidelines are concerned with protecting history itself — with not “muddying the historical record.” If enhanced or generated material is used, the alliance suggests, the disclosure should be made onscreen, not saved for the credits. Why? Every film that’s streamed can be sliced, diced and clipped. Any piece then becomes part of “the archive” — discoverable online and divorced from the larger context of the film, making it easy for someone to assume the clip is real.

That’s scary enough. But believe it or not, A.I. raises an even bigger issue for documentaries — and, in turn, for all of us.

IN SEPTEMBER, A CLIP SURFACED in which a black garbage bag was apparently tossed from a White House window. The White House press office said a contractor was doing routine maintenance. But President Trump falsely declared the clip was A.I. anyway, and made some revealing remarks.

“One of the problems we have with A.I., it’s both good and bad,” he said. “If something happens, really bad, just blame A.I. But also they create things. You know, it works both ways.”

That phenomenon has a name: “liar’s dividend,” a term coined by two law professors in 2019. The idea is simple. We’re becoming more aware of how easy it is to create convincing fake videos, which means people who claim real videos are fake are becoming more persuasive, too. If they’re caught on video but claim the video is A.I., we’re more likely to believe them. Or at least we might feel pangs of doubt.

With the release of OpenAI’s video generator Sora 2 in September, the world irrevocably changed. Once the software is widely available, it will be possible for anyone to make a video of pretty much anything, and fast.

Most people can understand the obvious consequence of this earthshaking moment. Every video is now Schrödinger’s video: it’s both real and not real. We can take the liar’s dividend one frightening step further. In this brave new world, no claim that a video is real will ever be fully persuasive.

This raises an issue that documentarians have been worried about for years. Since cameras became a reliable way to record the news, we’ve depended on video archives to preserve our shared history. Thanks to portable camera technology, hundreds of thousands of hours of history exist on film and hard drives, in libraries and archives, showing us what really happened. At times, that footage shocks us and contradicts the stories we like to tell ourselves. History is written by the victors, but those accounts could be undercut by the tape.

That’s one reason documentaries are so important: For many, they’re the only way to discover those histories and, sometimes, learn the truth. A history book can teach you a lot; a moving image gets inside your soul.

As recording equipment has become more accessible, this has proved only more true. A major line of defense against excessive force by the police has been videos filmed by bystanders. Documentaries about human rights atrocities now routinely depend on video shot by civilians and activists. Audiences have had their eyes opened and inherited narratives disrupted.

BUT WE ARE LIVING IN DANGEROUS TIMES. Technologies like Sora 2 have made it ridiculously simple for an unscrupulous producer (or a racist troll) to whip up footage in a matter of moments. Because the software generates video based on the data set it was trained on, it is filled with biases and blind spots; attempts to “fix” the bias can have unexpected ramifications, as when Google’s Gemini created photorealistic images of Nazis as people of color. A.I. makes images that, used irresponsibly, literally rewrite history.

Any of these videos could end up in the archive, adrift on the internet to be discovered in 15 years by some schoolchild. Or now by well-meaning assistants working on a low-budget streaming documentary they have to finish so they can pay rent.

Theoretically, fixes exist. Sora videos carry a tiny label, as well as an invisible digital fingerprint and “extra guardrails,” though OpenAI has not specified what those might be. Other companies could do the same, and software could be developed to detect A.I.-generated video. Governments could regulate software or require these measures.

But at the speed that these technologies are unleashed, staying ahead is virtually impossible. Hackers have figured out how to remove the Sora label. Governments have to be willing to regulate companies and enforce those laws. And given the potential for misinformation, there’s no reason to imagine state actors won’t circumvent guardrails.

Then there’s this problem: A.I. video generators are trained on a large data set that includes documentaries. They can ably imitate the visual vocabulary of the form — talking heads, sweeping shots and other elements we expect from authoritative films. I could prompt a generator to create a film in the style of, say, Ken Burns about aliens that would look very “real.” I could post it to YouTube, and by the time Burns disavowed the film, it could have gone viral.

While muddying the historical record is a huge problem, let’s not forget the liar’s dividend. In a world where we are primed to doubt all video evidence, documentaries — which depend entirely on trust — are swept into the net. So now, a documentary made up of footage that challenges power or makes a viewer feel uncomfortable — like this year’s “2000 Meters to Andriivka” (with Ukrainian soldiers’ helmet camera scenes) and “The Perfect Neighbor” (using body camera material) — is much easier to dismiss altogether. Even a film with a hot-mic confession like Robert Durst’s in “The Jinx” (2015) is suddenly far more questionable.

DOCUMENTARIANS HAVE BEEN FIGHTING BACK. The newly formed Trust in Archives Initiative, for instance, is working on ways to authenticate and protect genuine archival materials. The Coalition for Content Provenance and Authenticity is developing an open technical standard that can certify the source of online content, and representatives from the tech giants Google, Amazon, Meta and OpenAI are on the steering committee. Organizations like Witness are working with people documenting human rights violations, equipping them with resources for collecting video that is harder to discredit and helping authenticate material in real time.

But in my conversations with documentarians, it’s clear that A.I. video creation tools — and companies’ seemingly lax attitude toward their implications — have created a sense of not just urgency but desperation. A responsible approach, documentarians say, would involve “future-proofing”: companies developing technology with stakeholders who can speak to the human cost of the product and recommend steps to mitigate the damage. On the whole, that’s not happening.

Which means it falls to us — and filmmakers and studios like Netflix and Amazon — to find ways to keep alive values like trust and authenticity. This probably means the documentaries we watch will change shape a little, and using the Archival Producers Alliance guidelines is a good start. Developing voluntary industrywide certifications that help audiences identify responsibly made films, like the “Certified Organic” sticker on produce, might also help.

More documentarians might start inserting themselves into their movies, explaining to viewers how they were made. A filmmaker can lie, of course, but the added layer of human connection may instill confidence.

The current market for documentaries is dismal. The demand from streaming platforms for movies about crimes, cults and celebrities leads to slapdash aesthetics and rushed timelines. In that market, the temptation to use A.I.-generated footage is high, and it’s often rewarded by viewers. In a real sense, we are part of the problem. Just going to the theater to see a documentary, or paying a few bucks to digitally rent one with high artistic standards, can go a long way toward reviving the market.

This sounds bleak, because it is. But that doesn’t mean it’s entirely hopeless. Our shared history deserves protection, and we all need to be part of the solution. Perhaps documentarians are the ones best suited to help us rethink what trust, transparency and authenticity really look like when we can’t believe our eyes.

Alissa Wilkinson is a Times movie critic. She’s been writing about movies since 2005.

