OpenAI explains why language models ‘hallucinate’; evaluation incentives reward guessing over uncertainty
By Binu Mathew
OpenAI has identified a systemic reason that large language models (LLMs) generate confident yet incorrect information, known as “hallucinations”: standard training and evaluation procedures reward confident guessing over acknowledging uncertainty. The finding, detailed in a recent research paper, challenges existing assumptions about AI reliability, and the authors argue for a corresponding shift in how models are evaluated.
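The incentive the paper describes can be illustrated with simple expected-value arithmetic. The sketch below is illustrative, not code from the paper: it assumes a benchmark that awards one point per correct answer and zero for an “I don't know,” and compares a model that guesses against one that abstains, with and without a hypothetical penalty for wrong answers.

```python
# Illustrative sketch (not from the paper): why accuracy-only grading
# rewards guessing. Under accuracy-only scoring, any nonzero chance of
# being right makes guessing beat abstaining, which always scores 0.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of guessing: +1 for a correct answer,
    -wrong_penalty for an incorrect one."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0  # answering "I don't know" earns nothing either way

for p in (0.1, 0.3, 0.5):
    acc_only = expected_score(p, wrong_penalty=0.0)   # accuracy-only grading
    penalized = expected_score(p, wrong_penalty=1.0)  # wrong answers cost a point
    print(f"p(correct)={p:.1f}: accuracy-only guess={acc_only:+.2f} "
          f"vs. abstain={ABSTAIN_SCORE:+.2f}; penalized guess={penalized:+.2f}")
```

Under accuracy-only grading, guessing yields a positive expected score for any nonzero chance of being correct, so it always beats abstaining; only when wrong answers carry a cost can declining to answer become the rational choice. This is the incentive structure the paper points to as a driver of hallucination.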
