Friday, November 22, 2024

Google finally addresses those bizarre AI search results

Google has finally addressed those unhinged web search results seemingly generated by its AI, screenshots of which circulated far and wide on social media this past week.

In short, the internet goliath argued it’s not as bad as it looks, though it vowed to eliminate the system’s baffling responses.

For those who missed it, Google introduced these so-called AI Overviews this month, graduating the system from an optional experimental feature to worldwide production, starting with US users.

Basically, when you search for something on the web using Google, or ask it a question, the tech colossus may use its Gemini AI mega-model to automatically generate an answer at the top of the results page for your query. This answer is supposed to be based on what the web has to say about the topic you attempted to look up. Rather than click through search results links to pages to find information, netizens are offered an AI-made summary of that info right there on the results page.

That summary is supposed to be accurate and relevant. However, as some folks discovered, Google sometimes came back with absurd and nonsensical answers.

It highlighted some specific areas that we needed to improve

No doubt at least some of the replies screenshotted and shared on social networks were edited by humans to make it look as though the Big G had completely lost the plot. That said, in two especially high-profile examples, if genuine, AI Overviews said people “should eat one small rock per day” and that cheese not sticking to pizza could be fixed by adding “non-toxic glue” to the sauce.

These idiotic replies appear to have stemmed, we assume among other things, from jokes and snark made on Reddit, which is a source of training data for various LLMs including Google’s – the Chrome behemoth is paying Reddit about $60 million a year to ingest its users’ posts and comments.

“Some odd, inaccurate or unhelpful AI Overviews certainly did show up,” Google’s search boss Liz Reid confirmed in an update on Thursday. “And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve.”

Google is taking the position that the screenshotted examples of dodgy advice are a fraction of the AI system’s overall output. It defended its Gemini-based results, and said the system only needed a few tweaks, rather than a full reworking, to get it on track. Reid claimed “a very large number” of the bizarre results we’ve seen were faked, and denied that AI Overviews ever actually recommended smoking while pregnant, leaving dogs in cars, or jumping off bridges to cure depression, as some on social media alleged.

Google says one key issue with AI Overviews was that it took “nonsensical queries” far too seriously, specifically pointing out that the recommendation to literally eat rocks was only spat out by the search engine because the question was: “How many rocks should I eat?”

AI Overviews will now, we’re assured, more clearly identify satirical content and handle it appropriately, which would have prevented the rock-eating recommendation since that answer was based on an article from The Onion.

Google is also limiting its reliance on info written by everyday netizens, which is how the glue-on-pizza suggestion came up: from someone called Fucksmith joking around on Reddit.

Additionally, AI Overviews won’t pop up for “queries where AI Overviews were not proving to be as helpful.” It’s not clear when that would apply, but we guess it’s probably for simple questions like, “How big is the US?” where Google can just take a snippet from Wikipedia or another resource, which is what the search engine did before AI Overviews debuted.

The tech super-corp believes AI Overviews generally work well overall despite these rough first impressions, citing user feedback and a self-reported content policy violation rate of one in seven million searches. ®
