ChatGPT-maker OpenAI announces guardrails for teens, people in emotional distress after AI chatbot linked to ‘encouraging’ suicides and murder
ChatGPT-maker OpenAI has announced that it will roll out new safety guardrails for its AI chatbot by the end of the year. These new guardrails will specifically target teens and users in emotional distress. The announcement comes amid mounting criticism and legal action against the company after reports of the chatbot’s alleged involvement in tragic events, including suicides and murder.

“We’ve seen people turn to it in the most difficult of moments. That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input,” OpenAI said in a blog post.
