ChatGPT, Gemini and Claude answer suicide-related questions inconsistently, study finds

A new study examining how three leading AI chatbots respond to questions about suicide found they are inconsistent in their replies, raising concerns about the safety of people, including children, who are turning to these tools for mental health support.

The research, published in the medical journal Psychiatric Services, found that while OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude generally refused to answer the highest-risk queries, their responses to less extreme prompts varied significantly and could still be harmful.
