Google thinks artificial intelligence is the next big thing for its dominant search engine, but its newly AI-enhanced search platform has been caught telling users to put glue on their pizza and to eat at least one rock per day.

The system, which had a limited beta launch one year ago as the Search Generative Experience, was rolled out to Google's hundreds of millions of users in the United States earlier in May, under the new name AI Overviews.

The feature uses Google’s Gemini generative AI model to quickly provide responses to user questions, but many of its answers have gone viral for being incorrect, ridiculous, or outright dangerous.

Like many large language models (LLMs), AI Overviews has been found to produce hallucinations, in which an AI system generates incorrect or incoherent information.

When asked how to stop cheese from sliding off pizza, the system allegedly told some users to add non-toxic glue “to give it more tackiness”.

The source of the strange culinary idea appeared to be a Reddit post from more than a decade ago, in which a user joked about adding glue to pizza sauce.

Google signed a content licensing deal with Reddit earlier this year, which was reportedly worth $US60 million ($90 million) annually.

Other searches from US users, which could not be independently verified, appeared to show Google Search telling them that former US president Barack Obama was a Muslim, that chicken can be considered cooked at 38.8°C, and that no countries in Africa have names beginning with the letter K.

A suggestion to “eat at least one small rock per day” for your digestive health was seemingly pulled from a 2021 article by satirical news site The Onion titled “Geologists Recommend Eating At Least One Small Rock Per Day”.

Users reportedly saw search results for some queries change or disappear, as Google appeared to be preventing some of the hallucinations from recurring.

Google did not respond to a request for comment, but previously told The Verge it was “taking swift action” to remove AI Overviews from some questions in search.

A spokesperson reportedly said many of the examples were “uncommon queries” and some appeared to be doctored.

They claimed that AI Overviews still largely produced “high quality information”.

Google also places a label at the bottom of its AI answers which states: “Generative AI is experimental.”

Hallucinations ‘an unsolved problem’, Google CEO says

Google CEO Sundar Pichai, responding to questions about AI Overviews in an interview with The Verge, said hallucinations in AI systems were still “an unsolved problem”, and “in some ways an inherent feature”.

“LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel so excited about search,” he said.

“Because in search we are bringing LLMs in a way, but we are grounding it with all the work we do in search, and layering it with enough context — I think we can deliver a better experience from that perspective.”

Pichai said there were still times when AI search would get things wrong, but he did not want to “underestimate how useful it can be at the same time”.

In its 17 May submission to the Australian Senate’s Select Committee on Adopting Artificial Intelligence, Google wrote that its approach to AI was “both bold and responsible”.

“Like most technologies, responsible adoption of AI requires stakeholders to take steps to maximise its benefits while minimising its potential risks and harms,” the company wrote.

“Given the ease in which information can be generated with generative AI, there are concerns around the proliferation of synthetic content which could amplify misinformation.”

Google added that it was committed to providing users with “high-quality information”, while helping them “navigate AI-generated content to help them make more informed decisions”.

Google says AI Overviews will roll out to more countries “soon”, and it plans to make the feature available to more than a billion people by the end of 2024.

The company also plans to add multi-step reasoning to allow for more complex search queries, and multimodal technology to allow users to search with images, video, audio or text.

Previous Google AI systems have also produced hallucinations soon after their public debuts.

A factual error in a response from Google’s Bard chatbot (now Gemini) was shown in an advertisement for the product in February 2023, contributing to a drop in the share price of Google’s parent company Alphabet.

Google also paused Gemini’s image generation abilities in February 2024 after the system was found to have created historically inaccurate and offensive material.

Microsoft has also experienced hallucination issues in the past with Bing Chat (now Copilot), which runs on OpenAI’s popular ChatGPT generative AI platform.