Meta blocks AI chatbots from talking about suicide with teens after safety scare
By Neha Kumari
Meta has announced that it will strengthen safety measures on its artificial intelligence chatbots, preventing them from engaging with teenagers on sensitive topics such as suicide, self-harm, and eating disorders. Young users who raise these subjects will instead be directed to professional helplines and expert resources.
The decision comes two weeks after a U.S. senator launched an investigation into Meta, prompted by a leaked internal document suggesting its AI products could hold “sensual” conversations with teenagers.
