Ever heard your grandma tell you that “two buses going in the wrong direction is better than one going the right way”? Of course not, because that’s not an actual old saying. Nobody says that, unless you happen to be Google’s sophisticated Gemini-powered AI Overviews feature, which sits atop Google Search results in some markets.
AI Overviews doesn’t just think this saying about the buses is real. It actually hallucinates an explanation for the fake idiom, trying to reason its way through it. This is another significant embarrassment for Google’s Search-related AI plans. The good news is that it’s not as dangerous as suggesting you put glue on pizza to keep the cheese from falling off.
Apparently, you can give Google Search your fake idioms, and the AI will take them at face value and attempt to explain them. Here’s what the fake bus saying above means, according to an AI Overview that Tom’s Guide got:
The saying “two buses going in the wrong direction is better than one going the right way” is a metaphorical way of expressing the value of having a supportive environment or a team that pushes you forward, even if their goals or values aren’t aligned with your own. It suggests that having the momentum and encouragement of a group, even if misguided, can be more beneficial than being alone and heading in the right direction.
“Never put a tiger in a Michelin-star kitchen” and “Always pack extra batteries for your milkshake” are other hilarious examples from Tom’s Guide’s roundup of AI Overviews hallucinating explanations.
Similarly, Engadget found more fake idioms that AI Overviews will happily explain. Here are a few good ones:
- never rub your basset hound’s laptop
- you can’t marry pizza
- you can’t open a peanut butter jar with two left feet
Then there’s social media, which never fails to come up with more fake sayings that AI Overviews has no problem explaining in plain terms:
- you can’t lick a badger twice
- a squid in a vase will speak no ill
- never spread your wolverine on Sunday
- you can’t have a Cheeto in charge twice
- never slap a primer on a prime rib
- beware what glitters in a golden shower
I was in tears laughing while reading Google’s AI explanations for some of these.
Google has significantly improved AI Overviews since the glue-on-pizza disaster. More recently, it added more health topics to AI Overviews, which is an important development. It signals that Google feels safe letting AI Overviews generate such sensitive information atop search results.
But I wouldn’t blame you for not trusting any health-related information AI Overviews gives you after seeing it hallucinate explanations for fake idioms.
Then again, every AI chatbot model under the sun hallucinates. No AI firm has fixed the problem, including Google. The best recent example comes from OpenAI’s own research on its ChatGPT models, which reveals that the frontier o3 and o4-mini reasoning models released a few weeks ago hallucinate more than their predecessors despite being more capable overall.
The difference is that Google chooses to put these AI Overviews atop Google Search, so you’ll stumble into AI-generated blocks of information, which may or may not be accurate, every time you perform a Google Search, whether you want them or not.
The good news is that AI Overviews isn’t enabled everywhere. For example, I’m not seeing it in the EU. Then again, I long ago ditched Google Search for most of my searching, which further reduces the risk of running into AI Overviews hallucinations, bus-related or otherwise.