Google is playing whack-a-mole with disastrous AI Overviews gone viral

Google decided to broadly roll out its AI Overviews feature in the US earlier this month, offering AI-generated summaries for various queries. Unfortunately, many responses were inaccurate, weird, or outright dangerous.

Now, Google has confirmed in a response to Android Authority that it’s taking “swift action” on offending queries:

The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.

Some of the most notable AI Overviews that went viral include a recommendation to eat one small rock each day, a tip to use non-toxic glue to make cheese stick to pizza, and a suggestion to drink at least two liters of urine a day to pass a kidney stone. One of the most concerning apparent gaffes was a suggestion to jump off a bridge in response to a query about feeling depressed. At least some of these answers appear to have come from AI Overviews citing satirical articles or troll posts on forums such as Reddit.

This is just the latest in a series of AI gaffes we’ve seen from Google as it seemingly rushes to bring AI to everything. Earlier this year, Gemini’s image generation feature came under fire for producing images of racially diverse WWII-era Nazis instead of historically accurate pictures, and earlier versions of Bard (now Gemini) also made headlines for hallucinations and incorrect answers.
