How Google got its AI answers so wrong

Analysis by The Big Story Podcast

Using glue to stick cheese on a pizza. Drinking urine to pass kidney stones. The past few weeks have been filled with weird, hilarious and definitively wrong answers supplied by Google’s new AI Overview. The criticism became so intense that Google has fixed many of the answers manually, but it’s still determined to push forward incorporating AI into its responses. 

Max Read is the author of Read Max on Substack. “[Large language models] can’t tell what’s a joke, what’s serious. They can’t tell what’s instructions, what’s description. They’re quite good at putting all that together into a single paragraph that syntactically makes sense, but often conceptually it’s totally deranged,” said Read. 

How did AI mess these simple questions up? What has Google lost as it moves forward with its plans? And … does the company understand what its chief product is actually for, or how people use it?

You can subscribe to The Big Story podcast on Apple Podcasts, Google and Spotify.
