OpenAI’s latest AI models are smarter, but they make things up more often. Here’s what we know
OpenAI launched its new o3 and o4-mini reasoning models on Wednesday with many new features. Some enthusiastic OpenAI employees even went so far as to claim that o3 is nearing Artificial General Intelligence (AGI) – a technical term with no fixed definition, but usually taken to mean a stage at which AI achieves intelligence near or equivalent to that of humans. However, as it turns out, a document from OpenAI itself shows that its new AI models are not only prone to hallucination (making things up), but hallucinate even more than its previous reasoning and non-reasoning models.
OpenAI first rolled out its reasoning models last year, claiming they mimic human-level thinking in order to solve more complex queries. However, even with its latest and most powerful reasoning model yet, OpenAI concedes that it can make both ‘accurate’ and ‘inaccurate’ claims.