
Google promised a better search experience — now it’s telling us to put glue on our pizza

You’ve set aside a quiet evening to relax and decided to make a homemade pizza. You assemble your pie, slide it into the oven, and eagerly await your meal. But just as you go to take a bite of your greasy creation, you hit a snag: the cheese keeps sliding right off. Frustrated, you turn to Google for a fix, and Google’s new AI Overviews feature is ready to serve up a wrong answer faster than ever.

“Add some glue,” Google suggests. “Mix about 1/8 cup of Elmer’s glue with the sauce. Non-toxic glue will work.”

Obviously, glue is a terrible idea. But right now, that’s exactly what AI Overviews might advise you to do. The feature doesn’t trigger for every query, but when it does, it scans the web and generates an AI-written answer. The glue-on-pizza recommendation appears to trace back to a joke comment from a user named “fucksmith” in a decade-old Reddit thread.

It’s just one of many errors cropping up in the feature, which Google rolled out broadly this month. Other absurd claims include that former US President James Madison graduated from the University of Wisconsin 21 times, that a dog has played in the NBA, NFL, and NHL, and that Batman is a police officer.

Google spokesperson Meghann Farnsworth said these mistakes stem from “generally very uncommon queries and aren’t reflective of most users’ experiences.” She added that the company has addressed policy violations and is using these “isolated incidents” to keep refining the product.

Google never promised perfection, and it even labels these AI answers with a disclaimer: “Generative AI is experimental.” But these tools clearly aren’t ready to deliver accurate information at scale.

Consider the grand unveiling of this feature at Google I/O. Despite being a meticulously controlled demonstration, it still produced a dubious suggestion on how to fix a jammed film camera. (It recommended opening the back door and gently removing the film; doing so would ruin your photos!)

Google isn’t alone; OpenAI, Meta, and Perplexity have all grappled with AI hallucinations and errors. But Google is the first to deploy this technology at such a scale, and the blunders keep piling up.

Tech companies often shirk responsibility for their AI systems, adopting a laissez-faire attitude akin to a parent shrugging off a mischievous child: boys will be boys! They argue that the AI’s outputs are unpredictable, and therefore beyond their control.

But for users, this is a real problem. Last year, Google declared AI the future of search. What’s the point if that search is less reliable than it was before?

AI enthusiasts argue that we should embrace the hype because the technology has advanced so quickly, trusting that it will keep improving. I genuinely believe it will get better. But fixating on an idealized future in which these tools are flawless overlooks the serious problems they face today, and it lets companies keep shipping subpar products.

For now, our search experiences are haunted by decade-old Reddit posts as tech giants race to wedge AI into every corner of our lives. Many optimists insist we’re on the cusp of something monumental and that these problems are just the growing pains of a nascent technology. I hope they’re right. But one thing seems certain: someone out there is going to put glue on their pizza soon, because that’s just how the internet works.

Update, May 23rd: Included a statement from a Google spokesperson.
