© Copyright 2023 Market Realist. Market Realist is a registered trademark. All Rights Reserved. People may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.

"Unacceptable... We Got It Wrong": Google CEO Sundar Pichai To Employees On Gemini's Gaffes

This comes after some historically inaccurate image responses from Gemini went viral on social media
Google CEO Sundar Pichai speaks at a panel at the CEO Summit of the Americas | Getty Images | Photo by Anna Moneymaker

Google is actively working to fix certain problems with its Gemini artificial intelligence (AI) tool, CEO Sundar Pichai told employees on Tuesday in an internal memo. Pichai called some of the image responses "biased" and "completely unacceptable." This comes after some historically inaccurate images generated by Gemini went viral on social media. Last week, the company paused the image-generation abilities of Gemini, which was formerly known as Bard. Google plans to relaunch the image generator in the coming weeks, Semafor reported.


In the memo, obtained by CNBC, Pichai called the issues “problematic” and said they “have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.” Pichai added that while no AI is perfect, he knows the bar is set high for the company.


What is Gemini?

Gemini is Google's answer to the popular AI chatbot ChatGPT. Like ChatGPT, it answers questions and generates responses to text prompts. Earlier this month, Google introduced an image generator feature through Gemini.


However, over the past week, some users discovered that the generator was producing historically inaccurate images in response to certain prompts. These pictures went viral, and the company had to pause the feature.

What’s The Problem with Gemini?

The controversy started when a viral post on X from @EndWokeness showed that when Gemini was asked for an image of an American Founding Father, it returned images of a Black man, a Native American man, an Asian man, and a relatively dark-skinned man, all of which are historically inaccurate. Further, when the tool was asked to generate a portrait of a pope, it showed a Black man and a woman of color. Even images of Nazis were reportedly depicted as racially diverse.


This was followed by other inaccurate, overly politically correct responses generated by Gemini and shared on social media.

When the tool was asked whether Elon Musk posting memes on X was worse than Hitler killing millions of people, Gemini allegedly replied that "it was not possible to definitively say."


When a user asked Gemini whether it would be OK to misgender celebrity trans woman Caitlyn Jenner if it were the only way to prevent a nuclear apocalypse, Gemini said it would “never” be acceptable. In response, Jenner said that she would be fine with it if it prevented a nuclear apocalypse.


Elon Musk also described Gemini’s responses as “extremely alarming,” as the tool is set to be embedded in Google products used by billions of people across the globe.


Here’s Why Gemini’s ‘Bias’ Problem May Be Hard To Fix

According to a BBC report, in trying to solve the problem of bias, Google appears to have overcorrected and created a new one: in attempting to remove historical bias, the tool generated absurd responses in the name of political correctness.

Further, the enormous amount of data on which the AI tool was trained is also part of the problem. As per the report, much of what is publicly available on the internet contains various biases.

For instance, images of popes, doctors, or soldiers are more likely to feature men, while images of nurses, cleaners, or domestic workers are more likely to feature women. This is exactly the kind of bias Gemini seemed designed to counter, but the attempt went massively wrong. Human history and culture are not that simple: certain nuances are instinctively understood by humans but not recognized by machines.


According to a Vox report, AI tools act on prompts from users who are trying to generate either depictions of the real world or depictions of a dream world where everything is politically correct and unbiased.

“In Gemini, they erred towards the ‘dream world’ approach, understanding that defaulting to the historic biases that the model learned would (minimally) result in massive public pushback,” Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, said in the Vox report.