Don’t Believe What AI Told You I Said

August 15, 2025

John Scalzi is a voluble man. He is the author of several New York Times best sellers and has been nominated for nearly every major award that the science-fiction industry has to offer—some of which he’s won multiple times. Over the course of his career, he has written millions of words, filling dozens of books and 27 years’ worth of posts on his personal blog. All of this is to say that if one wants to cite Scalzi, there is no shortage of material. But this month, the author noticed something odd: He was being quoted as saying things he’d never said.

“The universe is a joke,” reads a meme featuring his face. “A bad one.” The lines are credited to Scalzi and were posted, atop different pictures of him, to two Facebook communities boasting almost 1 million collective members. But Scalzi never wrote or said those words. He also never posed for the pictures that appeared with them online. The quote and the images that accompanied it were all “pretty clearly” AI generated, Scalzi wrote on his blog. “The whole vibe was off,” Scalzi told me. Although the material bore a superficial similarity to something he might have said—“it’s talking about the universe, it’s vaguely philosophical, I’m a science-fiction writer”—it was not something he agreed with. “I know what I sound like; I live with me all the time,” he noted.

Bogus quotations on the internet are not new, but AI chatbots and their hallucinations have multiplied the problem at scale, misleading many more people and misrepresenting the beliefs not just of big names such as Albert Einstein but also of lesser-known individuals. In fact, Scalzi’s experience caught my eye because a similar thing had happened to me. In June, a blog post appeared on the Times of Israel website, written by a self-described “tech bro” working in the online public-relations industry. Just about anyone can start a blog at the Times of Israel—the publication generally does not edit or commission the contents—which is probably why no one noticed that this post featured a fake quote, sourced to me and The Atlantic. “There’s nothing inherently nefarious about advocating for your people’s survival,” it read. “The problem isn’t that Israel makes its case. It’s that so many don’t want it made.”

As with Scalzi, the words attributed to me were ostensibly adjacent to my area of expertise. I’ve covered the Middle East for more than a decade, including countless controversies involving Israel, most recently the corrupt political bargain driving Prime Minister Benjamin Netanyahu’s actions in Gaza. But like Scalzi, I’d never said, and never would say, something so mawkish about the subject. I wrote to the Times of Israel, and an editor promptly apologized and took the article down. (Miriam Herschlag, the opinion and blogs editor at the paper, later told me that its blogging platform “does not have an explicit policy on AI-generated content.”)

Getting the post removed solved my immediate problem. But I realized that if this sort of thing was happening to me—a little-known literary figure in the grand scheme of things—it was undoubtedly happening to many more people. And though professional writers such as Scalzi and myself have platforms and connections to correct falsehoods attributed to us, most people are not so lucky. Last May, my colleagues Damon Beres and Charlie Warzel reported on “Heat Index,” a magazine-style summer guide that was distributed by the Chicago Sun-Times and The Philadelphia Inquirer. The insert included a reading list with fake books attributed to real authors, and it quoted one Mark Ellison, a nature guide rather than a professional writer, who never said the words credited to him. When contacted, the author of “Heat Index” admitted to using ChatGPT to generate the material. Had The Atlantic never investigated, there likely would have been no one to speak up for Ellison.

The negative consequences of this content go well beyond the individuals misquoted. Today, chatbots have replaced Google and other search engines as many people’s primary source of online information. Everyday users are employing these tools to inform important life decisions and to make sense of politics, history, and the world around them. And they are being deceived by fabricated content that can leave them worse off than when they started.

This phenomenon is obviously bad for readers, but it’s also bad for writers, Gabriel Yoran told me. A German entrepreneur and author, Yoran recently published a book about the degradation of modern consumer technology called The Junkification of the World. Ironically, he soon became an object lesson in a different technological failure. Yoran’s book made the Der Spiegel best-seller list, and many people began reviewing and quoting it—and also, Yoran soon noticed, misquoting it.

An influencer’s review on XING, the German equivalent of LinkedIn, included a passage that Yoran never wrote. “There’s quotes from the book that are mine, and then there is at least one quote that is not in the book,” he recalled. “It could have been. It’s kind of on brand. The tone of voice is fitting. But it’s not in the book.” After this and other instances in which he received error-ridden AI-generated feedback on his work, Yoran told me that he “felt betrayed in a way.” He worries that in the long run, the use of AI in this manner will degrade the quality of writing by demotivating those who produce it. If material is just going to be fed into a machine that will then regurgitate a sloppy summary, “why weigh every word and think about every comma?”

Like other online innovations such as social media, large language models do not so much create problems as supercharge preexisting ones. The internet has long been awash with fake quotations attributed to prominent personalities. As Abraham Lincoln once said, “You can’t trust every witticism superimposed over the image of a famous person on the internet.” But the advent of AI interfaces churning out millions of replies to hundreds of millions of people—ChatGPT and Google’s Gemini have more than 1 billion active users combined—has turned what was once a manageable chronic condition into an acute infection that is spreading beyond all containment.

The process by which this happens is simple. Many people do not know when LLMs are lying to them, which is unsurprising given that the chatbots are very convincing fabulists, serving up slop with unflappable confidence to their unsuspecting audience. That compromised content is then pumped at scale by real people into their own online interactions. The result: Meretricious material from chatbots is polluting our public discourse with Potemkin pontification, derailing debates with made-up appeals to authority and precedent, and in some cases, defaming living people by attributing things to them that they never said and do not agree with.

More and more people are having the eerie experience of knowing that they have been manipulated or misled, but not being sure by whom. As with many aspects of our digital lives, responsibility is too diffuse for accountability. AI companies can chide users for trusting the outputs they receive; users can blame the companies for providing a service—and charging for it—that regularly lies. And because LLMs are rarely credited for the writing that they help produce, victims of chatbot calumny struggle to pinpoint which model did the deed after the fact.

You don’t have to be a science-fiction writer to game out the ill effects of this progression, but it doesn’t hurt. “It is going to become harder and harder for us to understand what things are genuine and what things are not,” Scalzi told me. “All that AI does is make this machinery of artifice so much more automated,” especially because the temptation for many people is “to find something online that you agree with and immediately share it with your entire Facebook crowd” without checking to see if it’s authentic. In this way, Scalzi said, everyday people uncritically using chatbots risk becoming a “willing route of misinformation.”

The good news is that some AI executives are beginning to take the problems with their products seriously. “I think that if a company is claiming that their model can do something,” OpenAI CEO Sam Altman told Congress in May 2023, “and it can’t, or if they’re claiming it’s safe and it’s not, I think they should be liable for that.” The bad news is that Altman never actually said this. Google’s Gemini just told me that he did.

The post Don’t Believe What AI Told You I Said appeared first on The Atlantic.
