The dangers of sycophantic AI: New study shows how chatbots are encouraging people with psychotic delusions

There is plenty of evidence that Artificial Intelligence (AI) agents—chatbots—are prone to hallucinating, that is, making up information that is untrue or doesn't exist. AI companies are working to rein in this problem, but it is serious enough to have real-world consequences, especially when it exacerbates mental health problems in users.

A new study published in the medical journal Lancet Psychiatry gets to the heart of this problem. The study, titled "Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies", analyses 20 recent media reports of AI-associated delusions or psychosis to understand the reactions chatbots evoke in users.
