You don’t need advanced technology to dupe people online. We showed over 3,000 high schoolers a grainy video of poll workers dumping ballots to rig an election. A slapped-on caption, blaring in red font and caps lock, was enough to hoodwink students into believing they were witnessing U.S. voter fraud, even though the footage was from Russia. Only three students figured that out.
We’ve long warned that cheap fakes were more dangerous than deepfakes: nearly as effective but far easier to make. This past election, even with AI tools available to the masses, it was old-school videos spliced with digital duct tape that fueled debates about President Joe Biden’s fitness to serve.
Now, the era of cheap fakes is ending. Viral deepfakes made with new video tools mark an even more treacherous informational terrain. Thanks to products such as Google’s Veo 3, OpenAI’s Sora 2, and Meta’s Vibes, AI slop is now so easy to produce that it is metastasizing across our screens, aided by platforms’ wholesale retreat from fact-checking. To navigate today’s internet, we need guidance from ancient wisdom: Muslim, Jewish, Buddhist, and other faiths’ age-old emphasis on the centrality of reputation.
Devout Muslims trace back the sayings of Muhammad through a “chain of narration,” or “isnad.” Religious Jews interpret Talmudic teachings in the context of the rabbi who intoned them. Tibetan Buddhists orally transmit tenets in a lineage from the Buddha to the present. All of these traditions encourage us to reason about information, but only after we trace it back to where it comes from and assess the reputation of the sages who stood behind it.
Reputation matters in secular contexts as well: it’s the mechanism we use to make decisions when we lack knowledge and expertise. We rely on reputation when choosing a therapist or a plumber, a restaurant to go to or a hotel to book. We ask people we trust and consult reviews because we recognize that no one is likely to disclose their flaws or ulterior motives.
Reputation is crucial in so many areas of our lives. Why, then, on the internet, do most people ignore it?
Our research group has tested thousands of young people’s ability to evaluate information online. Again and again, we’ve seen them judge content while disregarding where it comes from. One student from rural Ohio trusted the voter fraud video because they thought their naked eye could detect “fraud in multiple different states.” A student from Pennsylvania wrote that the video clearly “showed people entering fake votes into boxes.”
The same pattern gets supercharged when it comes to AI. A teacher who shared their experience with 404 Media recounted asking a student how they knew if information from ChatGPT was accurate. The student shoved the phone in the teacher’s face: “Look, it says it right here!” Our pilot studies in high school and college classrooms point to a similar trend: many students put their trust in AI chatbots, even when those chatbots omit context about where information comes from.
Too many internet users fail to consider reputation, or they mistake Google or ChatGPT for vetted sources rather than flawed aggregators. When people do try to evaluate reputation, they’re swayed by easily gamed signals provided by the source itself: a dot-org domain, official-sounding language on the “about” page, the quantity of data irrespective of its quality, or gut feelings about how something looks.
These features glitter like fool’s gold. Anyone can get a dot-org domain, including hate groups. Holocaust denial sites claim in their about pages to “provide factual information.” Posts with fancy charts can contain noxious misinformation. And reports suggest that AI-generated media is so realistic that it forces us to doubt our own senses: from voice clones that sound like our parents to hyper-realistic fakes of a blaze engulfing Seattle’s Space Needle.
This information landscape presents a no-win choice between submission and solipsism: accepting whatever we see as true, or insisting that nothing is. The former leaves us vulnerable to bad actors who weaponize realistic clips. The latter leaves us devoid of good information. Both options erode informed citizenship at a time when it’s in short supply.
Here’s what we can do: instead of focusing on the content itself, first ask who’s behind it, much as faith traditions consider teachings in the context of who said them.
And when wielded skillfully, the very tools that mislead us can help us out of this conundrum. Not by outsourcing our thinking to technology—but by using technology to establish reputation and sharpen our thinking.
The three students who figured out that the voter fraud video was from Russia didn’t engage in any kind of technical wizardry. They just opened a new tab, entered a few choice keywords, and found articles from credible sources such as the BBC and Snopes debunking it. And with a few canny pointers on how LLMs work and how to structure prompts effectively, AI can actually help us verify posts on social media and supply missing context.
Major AI tools include throwaway disclaimers telling users to verify information. “Gemini can make mistakes, so double-check it,” Google says. “ChatGPT can make mistakes. Check important info,” advises OpenAI. But from Generation Alpha to Baby Boomers, almost everyone struggles to verify the information they encounter.
The good news is that all of us can get better. Even a few hours of instruction on how to gauge reputation can move the needle—as we saw in studies we’ve conducted everywhere from high school classrooms in Nebraska and California to college courses in Georgia and Texas. Before, students trusted their eyes to figure out if something was reliable. After, they learned to get a bead on the source’s reputation. Studies in Canada, Germany, India, and elsewhere have found similar positive results.
When we can no longer distinguish real from AI-generated content, it can feel downright futile trying to decide what to trust. But we can better cope with today’s knowledge ecosystem by doubling down on an ancient lesson: the importance of reputation.