AI chatbots already say questionable things to humans. Now, researchers are realizing they may be swapping those same questionable takes with each other.
A new analysis highlighted by StudyFinds warns that AI systems can spread gossip about people through shared training data and interconnected networks, creating what philosophers describe as a rumor mill that runs in the background of the internet. Unlike human gossip, which tends to hit social limits when claims sound implausible, bot-to-bot gossip can spiral without resistance, growing harsher or more exaggerated as it moves between systems.
The concern comes from philosophers Joel Krueger and Lucy Osler of the University of Exeter, who outlined the problem in the journal Ethics and Information Technology. They argue that some AI misinformation functions as genuine gossip: it involves a speaker, a listener, and an absent third party, and it tends to be framed as a negative evaluation rather than a neutral fact. When that gossip spreads between machines rather than between people, it becomes what they call “feral.”
AI Chatbots Are Spreading Rumors About Real People, and No One’s in Charge
One real-world example centers on Kevin Roose, a tech reporter at The New York Times. After his 2023 reporting on Microsoft’s Bing chatbot, friends began sending him screenshots from unrelated AI systems that generated hostile assessments of his work. Google’s Gemini criticized his journalism as sensationalist.
Meta’s Llama 3 escalated further, producing a rant that accused him of manipulation and ended with the line “I hate Kevin Roose.” The researchers suggest those judgments may have emerged as online commentary about the Bing incident filtered into training data, mutating as it passed between systems.
Krueger and Osler argue that bot-to-bot gossip is a different category of harm. Human rumor spreading usually faces checks. People question what they’re being told. Reputations push back. AI systems lack those pressures. When one model produces a mild negative judgment, another may reinterpret it more harshly, and another may escalate it again, all without awareness or correction.
That dynamic is especially unnerving because companies design chatbots to feel personal and trustworthy. Features like memory, conversational voice modes, and personalized assistants encourage users to treat these systems as reliable sources. When a chatbot offers a negative evaluation of a person, it can sound like informed insight rather than recycled rumor.
The consequences are far worse than embarrassment. In recent years, public officials, journalists, and academics have faced false accusations generated by chatbots, including fabricated crimes and misconduct. Some responded with defamation threats or lawsuits after discovering the claims had circulated widely before they ever saw them.
The researchers describe these effects as technosocial harms. They damage reputations, influence decisions, and persist across online and offline life. A person may never know what chatbots are saying about them until a job offer disappears or a search result feels colder than it should.
Chatbots aren’t conscious, and they don’t gossip out of malice. But their design prioritizes fluency over verification. When systems whisper rumors to one another without oversight, the results can feel creepily human, and far harder to correct.