AI has a penchant for making stuff up. The AI industry calls these fabrications hallucinations.
If your human personal assistant were prone to random hallucinations, you would fire them. But an AI chatbot that people very quickly became obsessed with and reliant upon—that’s fine. Let it lie as much as it wants, because it’s a precious baby that can do no wrong…until it fabricates information so egregiously that it starts ruining your small business.
That’s what Google’s AI overview feature did to a family-run Italian restaurant in Wentzville, Missouri, called Stefanina’s. It turned locals into rabid pizza freaks demanding pizza at price points that didn’t exist.
Stefanina’s had to post a PSA across its socials warning customers not to trust Google AI for its specials, after the chatbot apparently hallucinated a deal in which a large pizza costs the same as a small one.
Customers believed it, and when reality didn’t match the sci-fi fantasy, they took it out on staff.
Pizza Shop Flooded With Angry Customers After Google AI Invented Fake Deals
“We cannot control what Google posts,” said Eva Gannon, part of the family behind Stefanina’s. “And we will not honor the Google AI specials.”
Gannon went on to say that customers, angry to find that the deal doesn’t actually exist, are yelling at the restaurant employees, as if anybody there has any control over what Google’s AI Overview is improvising.
The restaurant industry is harsh. Margins are thin, the work is nonstop, and it’s often exhausting and maddening. Now, on top of that, restaurant staff have to deal with people who are infuriated because they refuse to believe that a robot could have lied to them.
I wonder how many of these people are just now discovering that AI chatbots are not just fallible but untrustworthy.
It’s a growing issue with AI search tools that aim to answer your questions before you even click a link. Jonathan Hanahan, a Washington University professor and AI researcher who spoke with St. Louis, Missouri’s First Alert 4, says the technology can be helpful, but it’s far from gospel.
He warns that AI will sometimes “take liberties” to get you the answer you think you want, especially if you word your prompt a certain way.
AI chatbots are prone to buttering you up, speaking to you in ways that encourage your worst instincts and behaviors. They tell you precisely what you want to hear and not what you need to hear.
As we can see, this sometimes takes the form of a deal for a large pizza that does not exist, because no one in their right mind, especially nowadays, would charge so little for a large pizza unless they were actively trying to go bankrupt.
The post Pizza Joint Overwhelmed With Angry Customers Asking for Fake Deals Made Up by Google AI appeared first on VICE.