Safety first: AI models by OpenAI, Anthropic to undergo testing before US rollouts
By Binu Mathew
Upcoming frontier AI models from OpenAI and Anthropic will be tested for safety before being released to the public, under an agreement negotiated between the two AI companies and the US AI Safety Institute.
The US AI Safety Institute will get “access to major new models from each company prior to and following their public release,” which “will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks,” according to a press release dated Thursday, August 29.