OpenAI launches ‘Preparedness Team’ for AI safety, gives board final say
OpenAI said its new “Preparedness Framework” aims to help protect against “catastrophic risks” when developing high-level AI systems.
The artificial intelligence (AI) developer OpenAI has announced it will implement its “Preparedness Framework,” which includes creating a dedicated team to evaluate and forecast risks.
On Dec. 18, the company released a blog post saying that its new “Preparedness Team” will be the bridge that connects safety and policy teams working across OpenAI.
It said these teams, functioning as something close to a checks-and-balances system, will help protect against “catastrophic risks” that could be posed by increasingly powerful models. OpenAI said it will only deploy its technology if it is deemed safe.