{"id":923843,"date":"2025-04-19T11:54:33","date_gmt":"2025-04-19T06:24:33","guid":{"rendered":"https:\/\/telecomlive.in\/web\/?p=923843"},"modified":"2025-04-19T11:54:33","modified_gmt":"2025-04-19T06:24:33","slug":"openais-latest-ai-models-are-smarter-but-they-make-things-up-more-often-heres-what-we-know","status":"publish","type":"post","link":"https:\/\/telecomlive.in\/web\/2025\/04\/19\/openais-latest-ai-models-are-smarter-but-they-make-things-up-more-often-heres-what-we-know\/","title":{"rendered":"OpenAI\u2019s latest AI models are smarter, but they make things up more often. Here\u2019s what we know"},"content":{"rendered":"<p>OpenAI launched its new o3 and o4-mini reasoning models on Wednesday with many new features. Some enthusiastic OpenAI employees even went so far as to state that o3 is nearing Artificial General Intelligence (AGI) &#8211; a technical term with no fixed definition, usually understood to mean a stage at which AI achieves intelligence near or equivalent to that of humans. However, as it turns out, a document released by OpenAI itself shows that its new AI models are not only prone to hallucination (making things up), but hallucinate more often than its previous reasoning and non-reasoning models. <\/p>\n<p>OpenAI had first rolled out its reasoning models last year, which it claims mimic human-level thinking in order to solve more complex queries. However, of its latest and most powerful reasoning model yet, OpenAI says that it can make both \u2018accurate\u2019 and \u2018inaccurate\u2019 claims. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI launched its new o3 and o4-mini reasoning models on Wednesday with many new features. Some enthusiastic OpenAI employees even went so far as to state that o3 is nearing Artificial General Intelligence (AGI) &#8211; a technical term with no fixed definition, usually understood to mean a stage at which AI achieves intelligence near or equivalent to that of humans. 
However, as it turns out, a document released by OpenAI itself shows that its new AI models are not only prone to hallucination (making things up), but hallucinate more often than its previous reasoning and non-reasoning models. OpenAI had [&hellip;]<\/p>\n","protected":false},"author":11,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[55,13,4],"tags":[],"class_list":["post-923843","post","type-post","status-publish","format-standard","hentry","category-it-2-live-mint","category-live-mint","category-newspapers"],"acf":[],"_links":{"self":[{"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/posts\/923843","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/comments?post=923843"}],"version-history":[{"count":0,"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/posts\/923843\/revisions"}],"wp:attachment":[{"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/media?parent=923843"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/categories?post=923843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/telecomlive.in\/web\/wp-json\/wp\/v2\/tags?post=923843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}