Google’s Super Bowl Ad Campaign Embroiled in Drama Over AI Model

Google’s Super Bowl Sunday ad campaign, intended to promote its Gemini AI model, has been marred by controversy. Initially, the campaign was thought to be based on hallucinated information from Gemini, which would have been a significant issue on its own. However, it has been revealed that the information did not originate from Gemini at all.

The Original Misconception

The planned ad buy for this year’s Big Game included 50 separate stories highlighting small businesses in all 50 states that have used Gemini tools to help their operations. One of these ads centered on a cheese store called the Wisconsin Cheese Mart, suggesting that the company used Gemini to generate copywriting for its website. The website copy included a claim that gouda makes up "50 to 60 percent of the world’s cheese consumption," which is not true.

The Controversy Escalates

Google faced backlash over the claim and eventually edited the advertisement to remove the false statistic about gouda. However, it has since been discovered that the incorrect copy was not generated by Gemini at all, despite the advertisement suggesting otherwise.

The Internet Archive Reveals the Truth

Thanks to the Internet Archive, it has been confirmed that the text purported to be generated by Gemini has appeared on the Wisconsin Cheese Mart website since at least 2020. In other words, Gemini is not responsible for the factual error in the website copy; the error predates the tool entirely.

A Public Embarrassment for Google

The situation has become even more embarrassing for Google because a company executive publicly defended the original, non-AI-generated text. Jerry Dischler, President of Cloud Applications at Google Cloud, insisted on Twitter that the text was "not a hallucination" and that "Gemini is grounded in the Web." The evidence suggests that this particular example was not grounded in anything, however, because it did not come from Gemini in the first place.

Google’s Awkward Position

Google is now in a difficult position, having defended its AI model for sharing false information, only for it to emerge that the model never generated the text at all. The company was prepared to spend millions of dollars advertising the capabilities of its AI suite with examples that did not actually come from the tool itself. The situation is reminiscent of old video game trailers that showed polished promotional clips with "not actual gameplay footage" disclaimers below them: "Sure, this isn't AI-generated, but imagine if it was!"
