
Here’s How AI Chatbots ‘Hallucinate’ And Why It Is Dangerous For Users

Turns out, AI chatbots are also capable of hallucinating, and they do so by making up false information in response to a user’s prompt.
PUBLISHED DEC 29, 2023
Cover Image Source: Pexels | Photo by cottonbro studio

The word “hallucination” is commonly associated with people hearing sounds or seeing things that no one else seems to hear or see. Since the rise of AI chatbots, however, the word has taken on a new meaning: AI chatbots are also capable of hallucinating, and they do so by making up false information in response to a user’s prompt. In fact, “hallucinate” in the AI sense is Dictionary.com’s 2023 word of the year, chosen because it best represents the potential impact that AI may have on “the future of language and life.”

An AI hallucination occurs when a large language model (LLM) generates false information. LLMs are the AI models that power well-known chatbots such as ChatGPT and Google Bard.

Photo illustration of the welcome screen for the OpenAI "ChatGPT" app | Getty Images | Photo by Leon Neal

An AI chatbot hallucinates when a user provides a prompt that is either made up or about something the LLM has no knowledge of. In such cases, the LLM relies on statistical patterns to generate false information in language that is grammatically correct and fits the context of the prompt. The result appears plausible, but it isn’t necessarily true. Some AI hallucinations are plainly nonsensical, but it is often hard to tell a hallucination apart from true information.
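
To see why such confident nonsense can emerge, consider a minimal sketch of next-word sampling, the basic mechanism behind LLM text generation. Everything here is illustrative: the tiny probability table is hand-invented, whereas a real model learns billions of such statistics from its training data.

```python
import random

# Minimal sketch of next-word sampling (illustrative only; the
# probabilities below are invented, not learned from data).
next_word_probs = {
    ("the", "first"): {"images": 0.6, "pictures": 0.4},
    ("first", "images"): {"of": 1.0},
    ("images", "of"): {"an": 1.0},
    ("of", "an"): {"exoplanet": 1.0},
    ("an", "exoplanet"): {"were": 1.0},
    ("exoplanet", "were"): {"taken": 1.0},
    ("were", "taken"): {"by": 1.0},
    # A fluent continuation can be statistically likely yet factually wrong.
    ("taken", "by"): {"james": 0.7, "the": 0.3},
}

def generate(prompt: str, max_steps: int = 10) -> str:
    words = prompt.lower().split()
    for _ in range(max_steps):
        key = tuple(words[-2:])          # condition on the last two words
        if key not in next_word_probs:   # no statistics for this context
            break
        options = next_word_probs[key]
        # Pick the next word in proportion to its probability. Nothing in
        # this loop ever checks whether the emerging sentence is true.
        words.append(random.choices(list(options),
                                    weights=list(options.values()))[0])
    return " ".join(words)

print(generate("The first"))
# e.g. "the first images of an exoplanet were taken by james"
```

The point is that fluency and factuality are decoupled: the model optimizes for which words are likely to follow, not for what is true.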

One infamous example of an AI hallucination, mentioned in a TechTarget report, involved Google's chatbot, Bard. When prompted with “What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, Bard responded with the untrue claim that the James Webb Space Telescope was the first to take pictures of a planet outside our solar system. In reality, according to NASA, the first images of an exoplanet were taken in 2004, and they could not have come from the James Webb Space Telescope, which was not launched until 2021.

Despite being false, Bard's answer sounded plausible and was consistent with the prompt. This poses a risk: a user who does not look at the information with a skeptical eye may believe that the chatbot’s reply is true.

Photo illustration of the OpenAI ChatGPT's answer to the question "What can AI offer to humanity?" | Getty Images | Photo by Leon Neal

There is no clear way to determine the exact cause of an AI hallucination on a case-by-case basis. The most immediate risk hallucinations pose is that they significantly hurt user trust. As chatbots grow more advanced, users begin to experience AI as more real and place more inherent trust in it. False information delivered in a hallucination can corrode that trust, leaving users feeling betrayed.

Another problem with AI hallucinations is the lack of awareness of the issue: users can be fooled by false information, which can be exploited to spread misinformation, fabricate citations and references, and even be weaponized in cyberattacks. Mitigation is a further challenge, as it is difficult, and in many cases impossible, to determine why an LLM generated a specific hallucination. There are also limited ways to fix hallucinations after the fact, since going back into the model to change its training data consumes a great deal of energy.

Both OpenAI and Google have warned users about the mistakes their AI chatbots can make and have advised them to double-check the chatbots’ responses.

Image of person's hand holding an iPhone using the Google Bard generative AI language model (chatbot) | Getty Images | Photo by Smith Collection

Tech organizations are also working on ways to reduce hallucinations, as per a CNBC report. Google handles them through user feedback: when Bard generates an inaccurate response, users are encouraged to click the thumbs-down button and describe why the answer was wrong, helping the chatbot improve. OpenAI, meanwhile, has implemented a strategy called “process supervision,” in which the model is rewarded for each correct step of reasoning on the way to an answer, instead of being rewarded only for a correct final response to a prompt.
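
A hedged sketch of the intuition behind process supervision follows, assuming a toy scorer and hand-labeled reasoning steps; nothing here reflects OpenAI's actual implementation, where step validity would come from a trained reward model rather than hand labels.

```python
# Toy comparison of outcome vs. process supervision (illustrative only).
ReasoningStep = tuple[str, bool]  # (step text, does the step check out?)

def outcome_reward(final_answer_correct: bool) -> float:
    # Outcome supervision: reward depends only on the end result, so a
    # lucky guess reached through broken reasoning earns full reward.
    return 1.0 if final_answer_correct else 0.0

def process_reward(steps: list[ReasoningStep]) -> float:
    # Process supervision: reward each verified reasoning step, which
    # favors sound chains of reasoning over plausible-sounding endings.
    return sum(valid for _, valid in steps) / len(steps)

steps = [
    ("Two trains start 300 km apart at 50 km/h and 100 km/h.", True),
    ("Their closing speed is 50 + 100 = 150 km/h.", True),
    ("They meet after 300 / 150 = 2 hours.", True),
]

print(outcome_reward(True))   # 1.0 whenever the final answer happens to be right
print(process_reward(steps))  # 1.0 only because every step checks out
```

Because every step is scored, a model trained this way has less incentive to fabricate a confident-sounding final answer that its own reasoning does not support.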
