OpenAI creates a new safety framework; Aims to stop jailbreaking of AI

Just like any other AI model,GPT-4o mini can face some security issues. Keeping this in mind, OpenAI has come up with a safety module that can help GPT-4o Mini to protect itself from fraudulent activities.

According to OpenAI the large language model (LLM) is built with a technique called Instructional Hierarchy. The safety module has the potential to stop malicious prompt engineers from jailbreaking the AI model.

Read more

You may also like

Comments are closed.

More in IT