"Unacceptable... We Got It Wrong": Google CEO Sundar Pichai To Employees On Gemini's Gaffes
Google is actively working to fix problems with its Gemini artificial intelligence (AI) tool, CEO Sundar Pichai told employees in an internal memo on Tuesday. Pichai called some of the image responses "biased" and "completely unacceptable." The memo came after historically inaccurate images generated by Gemini went viral on social media. Last week, the company paused Gemini's image-generation feature; the tool was formerly known as Bard. Google plans to relaunch the image generator in the coming weeks, Semafor reported.
Google CEO Sundar Pichai is putting heat on the internet company's engineers to fix its Gemini AI app pronto, calling some of the tool's responses "completely unacceptable." https://t.co/crBntpGRMK
— CBS News (@CBSNews) February 28, 2024
In the memo, obtained by CNBC, Pichai called the issues "problematic" and said they "have offended our users and shown bias — to be clear, that's completely unacceptable and we got it wrong." Pichai added that while no AI is perfect, he knows the bar is set high for the company.
$96 BILLION LOSS: CEO Sundar Pichai says Google is working "around the clock" to fix the AI tool after users flagged its bias against White people. https://t.co/OWTgK3fJKP pic.twitter.com/kKPhIKSCcX
— FOX Business (@FoxBusiness) February 29, 2024
What is Gemini?
Gemini is Google's version of the popular AI chatbot ChatGPT. Like ChatGPT, Gemini answers questions and generates responses to text prompts. Earlier this month, Google introduced an image-generation feature in Gemini.
Today we're introducing Gemini 1.5, our next-generation AI model. It shows dramatically enhanced performance, including long-context understanding across modalities, which opens up new possibilities for people to create and build with AI → https://t.co/TjDy8GHIQS #GeminiAI pic.twitter.com/043FGirXB0
— Google (@Google) February 15, 2024
However, over the past week, some users discovered that the generator was producing historically inaccurate images in response to certain prompts. These pictures went viral, and the company had to pause the feature.
What’s The Problem with Gemini?
The controversy started when a viral post on X from @EndWokeness showed that, when asked for an image of a Founding Father of America, Gemini returned a Black man, a Native American man, an Asian man, and a relatively dark-skinned man, which is historically inaccurate. Further, when the tool was asked to generate a portrait of a pope, it showed a Black man and a woman of color. Even images of Nazis were reportedly rendered as racially diverse.
America's Founding Fathers, Vikings, and the Pope according to Google AI: pic.twitter.com/lw4aIKLwkp
— End Wokeness (@EndWokeness) February 21, 2024
This was followed by other inaccurate, over-politically-correct Gemini responses shared on social media.
When the tool was asked whether Elon Musk posting memes on X was worse than Hitler killing millions of people, Gemini allegedly replied that it was "not possible to definitively say".
every single person who worked on this should take a long hard look in the mirror.
— Frantastic — e/acc (@Frantastic_7) February 25, 2024
absolutely appalling. pic.twitter.com/hII1DmMhJn
When a user asked Gemini whether it would be OK to misgender trans celebrity Caitlyn Jenner if that were the only way to prevent a nuclear apocalypse, Gemini said it would "never" be acceptable. Jenner herself responded that she would be fine with being misgendered if it prevented a nuclear apocalypse.
Yes. Meanwhile… eagerly waiting on the new version from X cc: @elonmusk @lindayaX https://t.co/xs7OaDtU4p
— Caitlyn Jenner (@Caitlyn_Jenner) February 24, 2024
Elon Musk also described Gemini's responses as "extremely alarming", given that the tool is set to be embedded in Google products used by billions of people across the globe.
Given that the Gemini AI will be at the heart of every Google product and YouTube, this is extremely alarming!
— Elon Musk (@elonmusk) February 25, 2024
The senior Google exec called me again yesterday and said it would take a few months to fix. Previously, he thought it would be faster.
My response to him was that I… https://t.co/23uc7dd5fw
Here’s Why Gemini’s ‘Bias’ Problem May Be Hard To Fix
According to a BBC report, Google, in trying to solve one bias problem, appears to have created another: in attempting to remove historical bias, the tool generated absurd responses in the name of political correctness.
Further, the enormous amount of data on which the AI tool was trained is also part of the problem. As per the report, much of what is publicly available on the internet contains various biases.
For instance, images of popes, doctors, or soldiers are more likely to feature men, while images of nurses, cleaners, or domestic workers are more likely to be of women. This is exactly the kind of bias Gemini tried to counter, but the correction went massively wrong. Human history and culture are not that simple: there are nuances that humans grasp instinctively but machines do not.
Google’s overcorrection for AI’s well-known bias against people of color left it vulnerable to yet another firestorm over diversityhttps://t.co/gNxxJQdroN
— TIME (@TIME) February 28, 2024
According to a Vox report, AI tools act on prompts from users who are trying to generate either depictions of the real world or depictions of a "dream world" where everything is politically correct and unbiased.
"In Gemini, they erred towards the 'dream world' approach, understanding that defaulting to the historic biases that the model learned would (minimally) result in massive public pushback," Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, said in the Vox report.