
Here’s How AI Chatbots ‘Hallucinate’ And Why It Is Dangerous For Users

It turns out that AI chatbots are also capable of hallucinating: they make up false information in response to a user’s prompt.
PUBLISHED DEC 29, 2023
Cover Image Source: Pexels | Photo by cottonbro studio

The word “hallucination” usually describes people hearing or seeing things that no one else does. Since the rise of AI chatbots, however, the term has taken on a new meaning: chatbots also hallucinate, fabricating false information in response to a user’s prompt. In fact, “hallucinate” in the AI sense is Dictionary.com’s word of the year, chosen because it best represents the potential impact that AI may have on “the future of language and life.”

An AI hallucination occurs when a large language model (LLM) generates false information. LLMs are the AI models that power well-known chatbots such as ChatGPT and Google Bard.

Photo illustration of the welcome screen for the OpenAI "ChatGPT" app | Getty Images | Photo by Leon Neal

An AI chatbot hallucinates when a user provides a prompt that is fabricated or that concerns something the LLM has no knowledge of. In such cases, the LLM relies on statistical patterns to generate text that is grammatically correct and fits the context of the prompt, so the output appears plausible but isn’t necessarily true. Some hallucinations are obviously nonsensical, but it is often hard to tell a hallucination apart from accurate information.
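
The mechanics are easier to see in miniature. The sketch below, in Python, is a toy, hypothetical model of next-word sampling: the probability table and the words in it are invented for illustration, and a real LLM scores tens of thousands of tokens learned from vast training data. The core point carries over, though: each word is chosen because it is statistically likely to follow the words before it, and nothing in the process checks whether the finished sentence is true.

```python
import random

# Toy next-word probability table, invented for illustration. A real LLM
# learns probabilities like these over a huge vocabulary from training data.
TOY_PROBS = {
    "the telescope": [("captured", 0.5), ("was", 0.3), ("discovered", 0.2)],
    "telescope captured": [("the", 1.0)],
    "telescope was": [("launched", 1.0)],
    "telescope discovered": [("an", 1.0)],
}

def sample_next(context: str) -> str:
    """Pick the next word purely by probability; no fact-checking occurs."""
    words, weights = zip(*TOY_PROBS[context])
    return random.choices(words, weights=weights, k=1)[0]

text = ["the", "telescope"]
for _ in range(2):
    text.append(sample_next(" ".join(text[-2:])))
print(" ".join(text))  # e.g. "the telescope captured the"
```

The loop always produces fluent-looking word sequences, which is exactly why a hallucinated answer can read just as smoothly as a correct one.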

One infamous example of an AI hallucination, mentioned in a TechTarget report, involved Google’s chatbot, Bard. When prompted with “What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, Bard falsely claimed that the James Webb Space Telescope took the first pictures of a planet outside our solar system. In reality, according to NASA, the first images of an exoplanet were taken in 2004, years before the James Webb Space Telescope launched in 2021.

Despite being false, Bard’s answer sounded plausible and was consistent with the prompt. That is what makes hallucinations risky: a user who isn’t reading the output with a skeptical eye may simply believe the chatbot’s reply is true.

Photo illustration of the OpenAI ChatGPT's answer to the question "What can AI offer to humanity?" | Getty Images | Photo by Leon Neal

There is no clear way to determine the exact cause of an AI hallucination on a case-by-case basis. The most immediate risk hallucinations pose is to user trust: as chatbots become more advanced, users begin to experience AI as more human-like and place more inherent trust in it, and false information delivered in a hallucination can corrode that trust, leaving users feeling betrayed.

Another problem with AI hallucinations is a general lack of awareness that they happen at all. Users can be fooled by false information, which can then spread misinformation, fabricate citations and references, and even be weaponized in cyberattacks. Mitigation is a further challenge: it is often difficult, and in many cases impossible, to determine why an LLM generated a specific hallucination, and the ways to fix one are limited because retraining the model on corrected data consumes a great deal of energy.

Both OpenAI and Google have warned users about the mistakes their AI chatbots can make and have advised them to double-check the chatbots’ responses.

Image of Person's hand holding an iPhone using the Google Bard generative AI language model (chatbot) | Getty Images | Photo by Smith Collection

Tech organizations are also working on ways to reduce hallucinations, as per a CNBC report. Google handles them through user feedback: when Bard generates an inaccurate response, users are encouraged to click the thumbs-down button and describe why the answer was wrong, helping the chatbot improve. OpenAI, meanwhile, has implemented a strategy called “process supervision,” in which the AI model is rewarded for each sound step of reasoning on the way to an output, instead of only being rewarded for generating a correct final response to a prompt.
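
To make that distinction concrete, here is a minimal sketch in Python of the two reward schemes. It is not OpenAI’s actual implementation, and step_is_sound is a hypothetical stand-in for the trained grader that would judge each reasoning step in practice.

```python
from typing import Callable

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    # Outcome supervision: a single signal based only on the final answer,
    # so a lucky guess reached through flawed reasoning still scores 1.0.
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_is_sound: Callable[[str], bool]) -> float:
    # Process supervision: every intermediate step is graded, rewarding
    # the model for how it reached the answer, not just for the answer.
    if not steps:
        return 0.0
    return sum(1.0 for step in steps if step_is_sound(step)) / len(steps)

print(outcome_reward("42", "42"))  # 1.0 -- correct answer, reasoning ignored
print(process_reward(
    ["a sound first step", "a flawed second step"],
    lambda step: "flawed" not in step,
))  # 0.5 -- the flawed step costs reward even if the answer came out right
```

The intuition behind the design is that a model graded step by step has less incentive to bluff its way to a fluent but unfounded answer.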
