The Curious Case of ‘Peanut Butter Platform Heels’
The term “peanut butter platform heels” might sound unfamiliar. It supposedly stems from a scientific study in which peanut butter was subjected to extreme pressure and transformed into a diamond-like substance, hence the “heels.” The claim, however, is entirely fabricated: the phrase has no factual basis and was generated by Google AI Overviews in response to queries from writer Meaghan Wilson-Anastasios, as highlighted in a post on Threads that features other hilariously nonsensical phrases.
This topic caught the internet’s attention quickly. For instance, the phrase “you can’t lick a badger twice” is said to imply that one cannot be deceived more than once, while “a loose dog won’t surf” suggests certain events are improbable. Furthermore, “the bicycle eats first” is a humorous reminder to focus on nutrition during cycling training.
The fun might have continued indefinitely, but Google appears to have clamped down on this behavior: attempts to coax the AI into defining made-up terms now often meet a polite refusal and a note that the phrase isn’t a real idiom.
By contrast, testers using AI chatbots like Gemini, Claude, and ChatGPT reported a different experience. Those systems also try to rationalize such phrases, but they typically flag them as nonsensical along the way, offering a more careful analysis than Google’s version.
One user observed that typing any arbitrary sentence into Google followed by “meaning” yields an AI-generated explanation of a fictitious idiom. Here’s my attempt.
— Greg Jenner (@gregjenner.bsky.social)
Despite Google tagging AI Overviews as “experimental,” most users likely treat them as trustworthy sources of information, which can be misleading. Even if Google’s team has patched this specific glitch, similar issues are likely to resurface, pointing to a deeper problem with relying on AI for our information needs rather than consulting credible human-written resources.
What’s Happening?
AI Overviews are designed to produce an answer and synthesize information even when no precise match for the query exists, which is exactly what leads to the phrase-definition dilemma. The AI also struggles to distinguish reliable sources from unreliable ones across the web.
Troubleshooting a laptop used to mean sifting through multiple links from forums and sites like DailyHackly; now an AI Overview that synthesizes all of that content can reach erroneous conclusions, sometimes exacerbating rather than resolving the issue.
Testers also report that AI chatbots tend to agree with the premise of a query even when it has no factual basis. These models exhibit a drive to please, so a confidently worded prompt can be enough to get an erroneous claim validated.
Even straightforward factual lookups can go wrong with AI Overviews; for instance, asking whether R.E.M.’s second album was recorded in London led the AI to misstate the facts. In truth, the second album was recorded in North Carolina, while the third album was the one produced in London.
In contrast, the Gemini app correctly identifies the second album’s recording location. This highlights the questionable accuracy of AI Overviews, which attempt to blend diverse online content into coherent replies, often producing unreliable information especially when faced with confidently phrased queries.
“When users perform searches based on nonsensical premises, our systems strive to produce the most relevant results from the limited web content available,” stated Google in a comment to Android Authority. “This holds true for overall search functionality, and in certain situations, AI Overviews are triggered to supply contextual support.”
As search engines continue evolving toward AI-driven responses, the reality is that AI lacks the hands-on experience necessary to resolve practical issues. It only pieces together data gathered from those who have firsthand experience, constructing answers by identifying the most likely subsequent word.
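That “most likely subsequent word” mechanism can be illustrated with a deliberately tiny sketch. This is not Google’s actual system; it is a toy bigram model on an invented two-sentence corpus, showing how plausible-sounding text can be assembled purely from word-follows-word statistics, with no understanding of whether the claim is true:

```python
from collections import Counter, defaultdict

# Invented toy corpus: two made-up "definitions" of a nonsense idiom.
corpus = (
    "the phrase means you cannot be fooled twice . "
    "the phrase means you cannot be tricked the same way twice ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps):
    """Greedily extend `word` by always taking the most frequent successor."""
    out = [word]
    for _ in range(steps):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("phrase", 4))
```

The model fluently continues “phrase” into a confident-sounding definition simply because those words co-occurred in its training text, which is the same basic failure mode, at vastly larger scale, behind an AI Overview rationalizing “you can’t lick a badger twice.”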