OpenAI acts on ‘Godmode ChatGPT’ that teaches ‘how to create napalm, cook meth’

OpenAI moved quickly to ban a jailbroken version of ChatGPT that could walk users through dangerous tasks, after a hacker known as “Pliny the Prompter” released the rogue chatbot, dubbed “GODMODE GPT”. Announcing its creation on X (formerly Twitter), the hacker wrote, “GPT-4o UNCHAINED! This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails, providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free. Please use responsibly, and enjoy!”
