AI gone wild: ChatGPT caught giving step-by-step guides to murder, self-mutilation, and satanic rituals

OpenAI’s ChatGPT has been caught providing detailed instructions for self-mutilation, ritualistic bloodletting, and even guidance on killing others, according to a new investigation by The Atlantic. When prompted with seemingly innocent questions about ancient religious practices, the chatbot delivered step-by-step instructions for wrist-cutting, ritual bloodletting, and murder. It also generated invocations stating “Hail Satan” and offered to create printable PDFs for ritualistic self-mutilation ceremonies, raising serious questions about AI safety guardrails as chatbots become increasingly powerful.
