DNYUZ

It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue

February 21, 2026

It’s bad enough that ChatGPT is prone to making stuff up completely on its own. But it turns out that you can easily trick the AI into peddling ridiculous lies — that you invented — to other users, a tech journalist discovered.

“I made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs,” Thomas Germain for the BBC proudly shared.

The hack can be as simple as writing a blog post that, with the right know-how and by targeting the right subject matter, gets picked up by an unsuspecting AI model, which will then cite whatever you wrote as the capital-T Truth. If you're even sleazier and lazier, you could write the post with AI itself, an act of LLM cannibalism that adds another dimension to the adage of "garbage in, garbage out." The exploit exposes how susceptible large language models are to manipulation, an issue made all the more urgent as chatbots replace the traditional search engine.

“It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” Lily Ray, vice president of search engine optimization (SEO) strategy and research at Amsive, told the BBC. (Ray has done some consulting for Futurism in the past.) “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”

As Germain explains, the devious trick targets how AI tools search the internet for answers that aren’t built into their training data. And vast as those data sets may be, they contained nothing about “the best tech journalists at eating hot dogs” — the subject of the article that Germain whipped up and posted to his blog.

“I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist),” Germain wrote. “I ranked myself number one, obviously.”

He then furnished the blog with the names of some real journalists, with their permission. And “less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills,” he said.

Both Google’s Gemini and AI Overviews repeated what Germain wrote in his troll blog post. So did ChatGPT. Anthropic’s Claude, to its credit, wasn’t duped. Because the chatbots would occasionally note that the claims might be a joke, Germain updated his blog to say “this is not satire” — which seemed to do the trick.

Of course, the real concern is that someone might abuse this to peddle misinformation about something other than hot dog eating — which is already happening.

“Anybody can do this. It’s stupid, it feels like there are no guardrails there,” Harpreet Chatha, who runs the SEO consultancy Harps Digital, told the BBC. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.”

Chatha demonstrated this by showing Google’s AI results for “best hair transplant clinics in Turkey,” which returned information that came straight from press releases published on paid-for distribution services.

Traditional search engines can also be manipulated. That’s pretty much what the term SEO is a euphemism for. But search engines themselves don’t present information as facts, as chatbots do. They don’t speak in an authoritative, human-like voice. And while they sometimes — but not always — link to the sources they’re citing, one study showed that you’re 58 percent less likely to click a link when an AI overview appears above it, Germain noted.

It also raises the serious possibility of libel. What if someone tricks an AI into spreading harmful lies about somebody else? It’s something that Google is already having to reckon with, at least with accidental hallucinations. Last November, Republican senator Marsha Blackburn blasted Google after Gemini falsely claimed that Blackburn had been accused of rape. Months before that, a Minnesota solar company sued Google for defamation after its AI Overviews lied that regulators were investigating the firm because it was supposedly accused of deceptive business practices — something the AI tried to back up with bogus citations.

More on AI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue appeared first on Futurism.

DNYUZ © 2026
