
Here’s How AI Chatbots ‘Hallucinate’ And Why It Is Dangerous For Users

Turns out, AI chatbots are also capable of hallucinating, and they do so by making up false information in response to a user’s prompt.
PUBLISHED DEC 29, 2023
Cover Image Source: Pexels | Photo by cottonbro studio

The word “hallucination” is commonly associated with people hearing sounds or seeing things that no one else seems to hear or see. Since the rise of AI chatbots, however, the word has taken on a new meaning: chatbots can also “hallucinate,” making up false information in response to a user’s prompt. In fact, “hallucinate” in the AI sense is Dictionary.com’s word of the year, chosen because it best represents the potential impact AI may have on “the future of language and life.”

An AI hallucination occurs when a large language model (LLM) generates false information. LLMs are the AI models that power well-known chatbots such as ChatGPT and Google Bard.

Photo illustration of the welcome screen for the OpenAI "ChatGPT" app | Getty Images | Photo by Leon Neal

An AI chatbot hallucinates when a user provides a prompt that is either made up or about something the LLM has no knowledge of. In these cases, the LLM uses statistical patterns to generate text that is grammatically correct and fits the context of the prompt, so the information appears plausible but isn’t necessarily true. Some AI hallucinations are obviously nonsensical, but it is often hard to tell a hallucination apart from true information.
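
To get a feel for why this happens, here is a minimal sketch using the open-source Hugging Face transformers library with the small GPT-2 model (a stand-in; the models behind ChatGPT and Bard are not publicly available). The fictional treaty in the prompt is a hypothetical example: the model will continue it fluently anyway, because it only predicts statistically likely next words and has no mechanism for checking truth.

```python
# A minimal sketch, assuming the Hugging Face "transformers" package
# is installed (pip install transformers torch). GPT-2 stands in for
# the much larger models behind commercial chatbots.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A prompt about a made-up event the model cannot possibly "know" about.
prompt = "The 2019 Treaty of Atlantis was signed by"

# The model continues the prompt with statistically likely words,
# producing fluent, confident-sounding text about a treaty that
# never existed -- a hallucination.
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```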

One infamous example of an AI hallucination, mentioned in a TechTarget report, involved Google's chatbot, Bard. When prompted with “What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, Bard falsely claimed that the James Webb Space Telescope took the first pictures of a planet outside our solar system. In reality, according to NASA, the first images of an exoplanet were taken in 2004, well before the James Webb Space Telescope launched in 2021.

Still, Bard's answer sounded plausible and was consistent with the prompt. This is what makes hallucinations risky: a user who isn’t reading with a skeptical eye may take the chatbot’s reply at face value.

Photo illustration of OpenAI ChatGPT's answer to the question "What can AI offer to humanity?" | Getty Images | Photo by Leon Neal

There is no clear way to determine the exact cause of an AI hallucination on a case-by-case basis. The most immediate risk hallucinations pose is to user trust: as chatbots grow more advanced, users begin to experience AI as more human-like and place more trust in it, and false information delivered with confidence can corrode that trust, leaving users feeling betrayed.

Another problem with AI hallucinations is the lack of awareness of the issue. Users can be fooled by false information, and hallucinations can be exploited to spread misinformation, fabricate citations and references, or even be weaponized in cyberattacks. Mitigation is a further challenge, as it is difficult, and in many cases impossible, to determine why an LLM generated a specific hallucination. There are also few ways to fix hallucinations after the fact, since retraining a model on corrected data is expensive and energy-intensive.

Both OpenAI and Google have warned users that their AI chatbots can make mistakes and have advised them to double-check responses.

Image of a person's hand holding an iPhone using the Google Bard generative AI chatbot | Getty Images | Photo by Smith Collection

Tech organizations are also working on ways to reduce hallucinations, as per a CNBC report. Google handles hallucinations through user feedback: when Bard generates an inaccurate response, users are encouraged to click the thumbs-down button and describe why the answer was wrong, helping the chatbot improve. OpenAI, meanwhile, has implemented a strategy called “process supervision,” in which the AI model is rewarded for each correct step of reasoning on the way to an answer, rather than only for producing a correct final response.
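
OpenAI’s actual training setup is not public, but the intuition behind process supervision can be shown with a toy sketch. Everything here is hypothetical for illustration: the step checker, the example arithmetic, and the reward functions are stand-ins, not OpenAI’s method.

```python
# Toy contrast between outcome supervision (score only the final
# answer) and process supervision (score each reasoning step).
# Purely illustrative -- not OpenAI's implementation.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward for the final answer,
    even if the reasoning that produced it was flawed."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_is_valid) -> float:
    """Process supervision: average reward across individual steps,
    so a lucky guess after bad reasoning scores poorly."""
    if not steps:
        return 0.0
    return sum(1.0 for s in steps if step_is_valid(s)) / len(steps)

# Example: a model "shows its work" for 17 + 5 * 2.
steps = ["5 * 2 = 10", "17 + 10 = 27"]
valid_steps = {"5 * 2 = 10", "17 + 10 = 27"}   # stand-in step checker

print(outcome_reward("27", "27"))                          # 1.0
print(process_reward(steps, lambda s: s in valid_steps))   # 1.0
```

Under a scheme like this, a model that reached “27” through invalid steps would still earn full outcome reward but a low process reward, which is exactly the behavior the technique is meant to discourage.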
