OpenAI offers $25,000 to anyone who can jailbreak its latest model, GPT-5.5
By Binu Mathew
OpenAI has invited security researchers to try to break its newest AI model and will pay them to do so. The company has announced a Bio Bug Bounty programme for GPT-5.5, offering rewards of up to $25,000 to researchers who can bypass the model's biological safety guardrails.
Amid growing concern over AI safety, the programme is one of the first instances of a major AI company publicly inviting outside researchers to stress-test a frontier model's safeguards against biological misuse.
