OpenAI Establishes Independent Safety Board with Authority to Halt AI Model Releases

OpenAI announced the establishment of an independent Safety and Security Committee (SSC) with expanded authority to oversee and intervene in the development and deployment of its AI models. The move is significant because the committee now has the power to delay new AI model releases until safety concerns are adequately addressed. The SSC is chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, and includes other notable members such as Quora CEO Adam D'Angelo and retired US Army General Paul Nakasone (AOL.com, Engadget).

One of the major changes in the new committee is the absence of OpenAI CEO Sam Altman, who previously sat on the safety committee. His removal addresses concerns about potential conflicts of interest in the company's safety oversight. The SSC's responsibilities now extend beyond making recommendations: they include conducting safety evaluations for major AI model releases and ensuring that all security and safety processes are thoroughly reviewed.

Additionally, OpenAI is exploring the development of an "Information Sharing and Analysis Center (ISAC)" for the AI industry to facilitate the sharing of threat intelligence and cybersecurity information among AI entities. The company also aims to enhance security measures, increase transparency about its work, and collaborate with external organizations, including agreements with the US and UK AI Safety Institutes (AOL.com).

This restructuring reflects OpenAI’s commitment to addressing the growing scrutiny over AI safety and ensuring that its models are deployed responsibly. The SSC’s ability to halt or delay model releases until safety concerns are resolved marks a significant step in promoting a more cautious approach to AI development.
