OpenAI admits GPT-5 hallucinates: ‘Even advanced AI models can produce confidently wrong answers’. Here’s why
By Neha Kumari
OpenAI has addressed the persistent issue of “hallucinations” in language models, acknowledging that even its most advanced systems occasionally produce confidently incorrect information. In a blog post published on 5 September, OpenAI defined hallucinations as plausible but false statements generated by AI, which can appear even in response to straightforward questions.
Persistent hallucinations in AI
