
Here’s How AI Chatbots ‘Hallucinate’ And Why It Is Dangerous For Users

Turns out, AI chatbots are also capable of hallucinating, and they do so by making up false information in response to a user's prompt.
PUBLISHED DEC 29, 2023
Cover Image Source: Pexels | Photo by cottonbro studio

The word “hallucination” is commonly associated with hearing sounds or seeing things that no one else seems to hear or see. Since the rise of AI chatbots, however, the word has taken on a new meaning: chatbots also "hallucinate," producing false information in response to a user's prompt. In fact, "hallucinate" in this AI sense is Dictionary.com's word of the year, chosen because it best represents the potential impact that AI may have on "the future of language and life."

An AI hallucination occurs when a large language model (LLM) generates false information. LLMs are the AI models that power well-known chatbots such as ChatGPT and Google Bard.

Photo illustration of the welcome screen for the OpenAI "ChatGPT" app | Getty Images | Photo by Leon Neal

An AI chatbot hallucinates when a user provides a prompt that is either fabricated or about something the LLM has no knowledge of. In such cases, the LLM uses statistical patterns to generate false information in language that is grammatically correct and consistent with the context of the prompt. The result appears plausible, but it isn't necessarily true. Some hallucinations are plainly nonsensical, but it is often hard to tell a hallucination apart from accurate information.
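
To see why statistically fluent text can still be false, consider the deliberately tiny sketch of next-token generation below. The word table and its probabilities are invented for illustration; real LLMs operate over billions of parameters, but the underlying principle is the same: each word is chosen because it is likely to follow the previous one, not because the resulting claim is true.

```python
import random

# A toy "language model": each word maps to plausible next words with
# probabilities learned purely from word co-occurrence, not from facts.
# (This table is made up for illustration.)
BIGRAMS = {
    "the":       [("telescope", 0.5), ("first", 0.5)],
    "telescope": [("took", 1.0)],
    "took":      [("the", 1.0)],
    "first":     [("picture", 1.0)],
    "picture":   [("of", 1.0)],
    "of":        [("an", 1.0)],
    "an":        [("exoplanet", 1.0)],
}

def generate(start: str, max_tokens: int = 12) -> str:
    """Sample a continuation one token at a time from the bigram table."""
    tokens = [start]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break
        words, weights = zip(*choices)
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
# Possible output: "the telescope took the first picture of an exoplanet"
# The sentence is fluent and on-topic, but nothing in the model checks
# whether the claim is true -- the same failure mode, in miniature, that
# produces LLM hallucinations.
```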

One infamous example of an AI hallucination, mentioned in a TechTarget report, involved Google's chatbot, Bard. When prompted with "What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?", Bard falsely claimed that the James Webb Space Telescope was the first to take pictures of a planet outside our solar system. In reality, the first images of an exoplanet were taken in 2004, according to NASA, well before the James Webb Space Telescope launched in 2021.

Despite being false, Bard's answer sounded plausible and was consistent with the prompt. This poses a risk to users: someone who is not reading the information with a skeptical eye may simply believe that the chatbot's reply is true.

Photo illustration of the OpenAI ChatGPT's answer to the question "What can AI offer to humanity?" | Getty Images | Photo by Leon Neal

There is no clear way to determine the exact cause of an AI hallucination on a case-by-case basis. An immediate risk that hallucinations pose is that they significantly hurt user trust. As chatbots grow more advanced, users begin to experience AI as more human-like and place more inherent trust in it. False information delivered in a hallucination can corrode that trust and leave users feeling betrayed.

Another problem with AI hallucinations is the general lack of awareness that they happen at all. Users can be fooled by false information, which can then be used to spread misinformation, fabricate citations and references, or even be weaponized in cyberattacks. Mitigation is a further challenge, as it is difficult, and in many cases impossible, to determine why an LLM generated a specific hallucination. There are also limited ways to fix a hallucination once it occurs, since retraining a model on corrected data consumes a great deal of compute and energy.

Both OpenAI and Google have warned users about the mistakes that their AI chatbots can make, and both firms have advised users to double-check the chatbots' responses.

Image of a person's hand holding an iPhone using the Google Bard generative AI language model (chatbot) | Getty Images | Photo by Smith Collection

Tech organizations are also working on ways to reduce hallucinations, as per a CNBC report. Google handles hallucinations through user feedback: when Bard generates an inaccurate response, users are encouraged to click the thumbs-down button and describe why the answer was wrong, helping the chatbot improve. OpenAI, meanwhile, has adopted a strategy called "process supervision," in which the model is rewarded for each correct step of reasoning rather than only for producing a correct final response to a prompt.
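
As a rough sketch of the difference between rewarding only the final outcome and rewarding each reasoning step, consider the hypothetical comparison below. The reward functions, the step data, and the scoring scheme are invented for illustration and do not reflect OpenAI's actual implementation of process supervision.

```python
from typing import List, Tuple

# Each step is (is_correct, text) -- toy data invented for illustration.
Step = Tuple[bool, str]

def outcome_reward(steps: List[Step], final_answer_correct: bool) -> float:
    """Outcome supervision: one reward based only on the final answer,
    so flawed reasoning that luckily lands on the right answer is rewarded."""
    return 1.0 if final_answer_correct else 0.0

def process_reward(steps: List[Step]) -> float:
    """Process supervision: each correct intermediate step earns credit,
    steering the model toward sound reasoning rather than lucky guesses."""
    if not steps:
        return 0.0
    return sum(1.0 for ok, _ in steps if ok) / len(steps)

# A chain of "reasoning" whose steps are wrong but whose final answer
# (12 x 9 + 30 = 138) happens to be correct anyway.
lucky_chain: List[Step] = [
    (False, "12 x 9 = 98"),     # arithmetic slip (should be 108)
    (False, "98 + 40 = 138"),   # wrong operand, lucky result
]

print(outcome_reward(lucky_chain, final_answer_correct=True))  # 1.0
print(process_reward(lucky_chain))                             # 0.0
```

Under outcome supervision, the flawed chain is rewarded as if its reasoning were sound; under process supervision, it earns nothing. Rewarding the steps rather than just the answer is the intuition behind why process supervision is thought to discourage confident-sounding fabrication.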
