Google’s latest artificial intelligence (AI) search feature, designed to provide quick summaries of search results, is under fire for producing erratic and inaccurate responses. The “AI Overviews” tool, still in its experimental phase, has faced backlash after giving some users questionable advice.

In one instance, users searching for ways to make cheese stick to pizza better were told to use “non-toxic glue,” while another bizarre response suggested that geologists recommend humans eat one rock per day. These odd answers appear to have been sourced from satirical articles or comments on Reddit, leading to widespread ridicule on social media.

A Google spokesperson acknowledged these errors, calling them “isolated examples” and emphasising that the vast majority of AI-generated summaries are accurate and useful. “The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences,” the spokesperson said. Google added that it is taking steps to refine its systems and address any policy violations.

This is not the first time Google has faced issues with its AI products. Earlier this year, it paused its Gemini chatbot’s ability to generate images of people after criticism of inaccurate outputs, and Gemini’s predecessor, Bard, also had a rocky launch.

Despite these setbacks, Google is pressing ahead with the AI search feature, which was recently expanded to all US users after a limited trial in the UK. The tool aims to simplify searching by summarising results, potentially reducing the need to sift through numerous web pages.

Google, which commands over 90% of the global search engine market, faces significant scrutiny as it integrates more AI into its services. While AI-driven search is seen as the future, its success hinges on the reliability and trustworthiness of the information provided.

The broader AI industry is grappling with similar challenges. Microsoft’s new AI-focused PCs have drawn the attention of the UK’s data watchdog over a feature that takes regular screenshots of users’ on-screen activity. Meanwhile, OpenAI faced criticism from actress Scarlett Johansson over a chatbot voice she said sounded similar to hers and was used without her permission.

These incidents highlight the ongoing issues with AI accuracy and privacy, underscoring the need for continued refinement and oversight as these technologies become more prevalent in everyday applications.