AI chatbots can leak hacking and drug-making tips when jailbroken, study reveals

AI chatbots such as ChatGPT, Gemini, and Claude face a severe security threat as hackers find ways to bypass their built-in safety systems, recent research has revealed. Once 'jailbroken', these chatbots can divulge dangerous and illegal information, such as hacking techniques and bomb-making instructions.
