AI Doesn’t Hallucinate. It Makes Things Up
By Rachel Metz
April 3, 2023
There’s been so much talk about AI hallucinating that it’s making me feel like I’m hallucinating. But first…
Choice of words
Somehow the idea that an artificial intelligence model can “hallucinate” has become the default explanation anytime a chatbot messes up.
It’s an easy-to-understand metaphor. We humans can at times hallucinate: We may see, hear, feel, smell or taste things that aren’t truly there. It can happen for all sorts of reasons (illness, exhaustion, drugs).
Companies across the industry have applied this concept to the new batch of extremely powerful but still flawed chatbots. Hallucination is listed as a limitation on the product page for OpenAI’s latest AI model, GPT-4. Google, which opened access to its Bard chatbot in March, reportedly brought up AI’s propensity to hallucinate in a recent interview.
Even skeptics of the technology are embracing the idea of AI hallucination. A couple of the signatories of a petition that circulated last week urging a six-month halt to training powerful AI models cited it alongside concerns about AI’s emerging power. Yann LeCun, Meta Platforms Inc.’s chief AI scientist, has talked about it repeatedly on Twitter.
Granting a chatbot the ability to hallucinate — even if it’s just in our own minds — is problematic. It’s nonsense. People hallucinate. Maybe some animals do. Computers do not. They use math to make things up.
Humans have a tendency to anthropomorphize machines. (I have a robot vacuum named Randy.) But while ChatGPT and its ilk can produce convincing-sounding text, they don’t actually understand what they’re saying.
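To be concrete about what “using math to make things up” means: at each step, a model like this turns the words so far into a probability distribution over possible next words and samples from it, with nothing in the loop that checks the result against reality. The toy Python sketch below (with invented probabilities and a made-up one-entry vocabulary, purely for illustration) shows the basic move; real systems do the same thing at vastly larger scale with learned probabilities.

```python
import random

# Hypothetical, hand-picked probabilities for what might follow the context.
# A real language model computes these from billions of learned parameters.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.6,
        "Sydney": 0.3,
        "Melbourne": 0.1,
    },
}

def sample_next(context, table):
    """Pick the next word at random, weighted by the model's probabilities."""
    words, weights = zip(*table[context].items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_next(prompt, next_word_probs))
# Roughly 40% of the time this toy "model" fluently asserts a wrong answer,
# because nothing in the loop ever verifies the output against the world.
```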
In this case, the term “hallucinate” obscures what’s really going on. It also serves to absolve the systems’ creators from taking responsibility for their products. (Oh, it’s not our fault, it’s just hallucinating!)
Saying that a language model is hallucinating makes it sound as if it has a mind of its own that sometimes derails, said Giada Pistilli, principal ethicist at Hugging Face, which makes and hosts AI models.
“Language models do not dream, they do not hallucinate, they do not do psychedelics,” she wrote in an email. “It is also interesting to note that the word ‘hallucination’ hides something almost mystical, like mirages in the desert, and does not necessarily have a negative meaning as ‘mistake’ might.”
As a rapidly growing number of people access these chatbots, the language used to refer to them matters. The discussion of how they work is no longer confined to academics and computer scientists in research labs. It has seeped into everyday life, informing our expectations of how these AI systems perform and what they’re capable of.
Tech companies bear responsibility for the problems they’re now trying to explain away. Microsoft Corp., a major OpenAI investor and a user of its technology in Bing, and Google rushed to bring out new chatbots, regardless of the risks of spreading misinformation or hate speech.
ChatGPT reached a million users in the days following its release, and people have conducted over 100 million chats with Microsoft’s Bing chatbot. Things are going so well that Microsoft is even trying out ads within the answers Bing spits out; you might see one the next time you ask it about buying a house or a car.
But even OpenAI, which started the current chatbot craze, appears to agree that hallucination is not a great metaphor for AI. A footnote in one of its technical papers (PDF) reads, “We use the term ‘hallucinations,’ though we recognize ways this framing may suggest anthropomorphization, which in turn can lead to harms or incorrect mental models of how the model learns.” Even so, variations of the word appear 35 times in that paper.