This week, I saw a term repeated across dozens of articles about ChatGPT and Bard: "hallucinating." As it turns out, large language models are notorious for this. Here's how the head of Google Search defines the concept of artificial intelligence hallucination, as reported by Reuters:

"This kind of artificial intelligence we're talking about right now can sometimes lead to something we call hallucination," Prabhakar Raghavan, senior vice president at Google and head of Google Search, told Germany's Welt am Sonntag newspaper. "This then expresses itself in such a way that a machine provides a convincing but completely made-up answer," Raghavan said in comments published in German. One of the fundamental tasks, he added, was keeping this to a minimum.

Why AI Hallucination Matters

  • A big promise of AI systems is that they can crunch more data than a human ever could. For example, imagine an AI system that has ingested every academic paper, research study, and medical breakthrough ever published. No human doctor can keep up with that pace of innovation.
  • Because of this data processing power, it's easy to assume that AI is omniscient, and that its results are accurate.
  • AI systems don't disclose when they're hallucinating... because they don't know they're hallucinating.
  • The biggest promise of AI (solving complex, previously unsolvable problems) is hamstrung by this tendency to produce convincing but made-up answers.

Here's what ChatGPT has to say about hallucination and its effects.
