
Here’s How AI Chatbots ‘Hallucinate’ And Why It Is Dangerous For Users

Turns out, AI chatbots are also capable of hallucinating and they do so by making up false information in response to a user’s prompt.
PUBLISHED DEC 29, 2023
Cover Image Source: Pexels | Photo by cottonbro studio

The word “hallucination” is commonly associated with people hearing sounds or seeing things that no one else seems to hear or see. Since the rise of AI chatbots, however, the word has taken on a new meaning: chatbots “hallucinate” when they make up false information in response to a user’s prompt. In fact, “hallucinate” in the AI sense is Dictionary.com’s word of the year, chosen because it best represents the potential impact that AI may have on “the future of language and life.”

An AI hallucination occurs when a large language model (LLM) generates false information. LLMs are the AI models that power well-known chatbots such as ChatGPT and Google Bard.

Photo illustration of the welcome screen for the OpenAI "ChatGPT" app | Getty Images | Photo by Leon Neal

An AI chatbot hallucinates when a user provides a prompt that is either made up or about something the LLM has no knowledge of. In such cases, the LLM relies on statistical patterns to generate false information in language that is grammatically correct and consistent with the context of the prompt. The result appears plausible but is not necessarily true. Some AI hallucinations are obviously nonsensical, but it is often hard to tell a hallucination apart from accurate information.
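
To see why, consider a minimal, purely illustrative sketch in Python (the tokens and probabilities below are invented, not drawn from any real model). An LLM repeatedly picks a statistically likely next word, and nothing in that process checks whether the finished sentence is factually true.

```python
# Illustrative sketch of next-token sampling, the core loop behind LLM text
# generation. The vocabulary and probabilities here are made up.
import random

# Hypothetical model output: probabilities for the next word after the prompt
# "The first telescope to photograph an exoplanet was the ..."
next_token_probs = {
    "James": 0.46,   # co-occurs often with "telescope" in training text
    "Hubble": 0.31,
    "Very": 0.18,    # the factually correct start ("Very Large Telescope")
    "Spitzer": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The model fluently continues with whichever word is statistically likely,
# true or not; this is all "plausible but wrong" means here.
print(sample_next_token(next_token_probs))
```

Fluency and accuracy come out of the same statistical machinery, which is why a hallucinated answer reads just as smoothly as a correct one.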

One infamous example of an AI hallucination, mentioned in a TechTarget report, involved Google's chatbot, Bard. When prompted with "What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?", Bard responded with the untrue claim that the James Webb Space Telescope was the first to take pictures of an exoplanet outside our solar system. In reality, according to NASA, the first images of an exoplanet were taken in 2004, years before the James Webb Space Telescope launched in 2021.

Despite that, Bard's answer sounded plausible and was consistent with the prompt. This poses a risk: a user who does not read the response with a skeptical eye may simply believe the chatbot's reply is true.

Photo illustration of the OpenAI ChatGPT's answer to the question "What can AI offer to humanity?" | Getty Images | Photo by Leon Neal

There is no clear way to determine the exact cause of an AI hallucination on a case-by-case basis. An immediate risk is that hallucinations significantly hurt user trust: as chatbots grow more advanced, users begin to experience AI as more real and place more inherent trust in it. False information given out in hallucinations can corrode that trust, leaving users feeling betrayed.

Another problem is a general lack of awareness that hallucinations happen at all. Users can be fooled by false information, which can then be used to spread misinformation, fabricate citations and references, or even be weaponized in cyberattacks. Mitigation is a further challenge: it is often difficult, and in many cases impossible, to determine why an LLM generated a specific hallucination, and there are limited ways to fix one after the fact, since retraining the model on corrected data consumes a great deal of compute and energy.

Both OpenAI and Google have warned users about the mistakes that their AI chatbots can make. The firms have advised users to double-check their responses.

Image of Person's hand holding an iPhone using the Google Bard generative AI language model (chatbot) | Getty Images | Photo by Smith Collection

Tech organizations are also working on ways to reduce hallucinations, as per a CNBC report. Google handles hallucinations through user feedback: when Bard generates an inaccurate response, users are encouraged to click the thumbs-down button and describe why the answer was wrong, helping the chatbot improve. OpenAI, meanwhile, has implemented a strategy called "process supervision," in which the AI model is rewarded for each correct step of reasoning on the way to an answer, instead of just being rewarded for generating a correct final response to a prompt.
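
To make the distinction concrete, here is a small, hypothetical Python sketch (not OpenAI's actual code) contrasting outcome supervision, which scores only the final answer, with process supervision, which scores every reasoning step. The step labels and scoring scheme are invented for illustration.

```python
# Toy contrast between outcome supervision and process supervision,
# assuming a grader has already judged each reasoning step.
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    is_valid: bool  # did the grader judge this reasoning step sound?

def outcome_reward(final_answer_correct: bool) -> float:
    """Reward only the end result; flawed reasoning can still score 1.0."""
    return 1.0 if final_answer_correct else 0.0

def process_reward(steps: list[Step]) -> float:
    """Reward each verified step, so sound reasoning is what gets reinforced."""
    if not steps:
        return 0.0
    return sum(step.is_valid for step in steps) / len(steps)

chain = [
    Step("The James Webb Space Telescope launched in 2021", True),
    Step("Therefore it took the first exoplanet image", False),  # non sequitur
]
print(outcome_reward(final_answer_correct=False))  # 0.0
print(process_reward(chain))                       # 0.5, credit for the valid step
```

Rewarding verified intermediate steps nudges the model toward sound reasoning rather than confident guessing.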
