
OpenAI announces ‘Preparedness Framework’ to track and mitigate AI risks




OpenAI, the artificial intelligence lab behind ChatGPT, announced today its “Preparedness Framework,” a set of processes and tools to monitor and manage the potential dangers of increasingly powerful AI models.

The announcement comes amid a turbulent period for the lab, which recently faced criticism for its handling of the firing and rehiring of its chief executive, Sam Altman. The controversy raised questions about the lab’s governance and accountability, especially as it develops some of the most advanced and influential AI systems in the world.

The Preparedness Framework, according to a blog post by OpenAI, is an attempt to address at least some of these concerns and demonstrate the lab’s commitment to responsible and ethical AI development. The framework outlines how OpenAI will “track, evaluate, forecast and protect against catastrophic risks posed by increasingly powerful models,” such as those that could be used for cyberattacks, mass persuasion, or autonomous weapons.

A data-driven approach to AI safety

One of the key components of the framework is the use of risk “scorecards” for AI models, which measure and track various indicators of potential harm, such as the model’s capabilities, vulnerabilities, and impacts. The scorecards are updated regularly and trigger reviews and interventions when certain risk thresholds are reached.
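
OpenAI has not published the scorecard mechanics as code, but the pattern the blog post describes — per-category risk scores that escalate to a review once a threshold is crossed — can be sketched in a few lines of Python. The category names, level ordering, and review threshold below are illustrative assumptions, not values taken from the framework itself:

```python
from dataclasses import dataclass, field

# Illustrative risk levels and threshold; the actual categories and
# cutoffs are defined in OpenAI's Preparedness Framework document.
LEVELS = ["low", "medium", "high", "critical"]
REVIEW_THRESHOLD = "high"  # assumption: scores at or above this escalate

@dataclass
class Scorecard:
    model_name: str
    # Maps a risk category (e.g. "cybersecurity") to its assessed level.
    scores: dict = field(default_factory=dict)

    def update(self, category: str, level: str) -> None:
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
        self.scores[category] = level

    def needs_review(self) -> list:
        """Return the categories whose score meets or exceeds the threshold."""
        cutoff = LEVELS.index(REVIEW_THRESHOLD)
        return [cat for cat, lvl in self.scores.items()
                if LEVELS.index(lvl) >= cutoff]

card = Scorecard("hypothetical-frontier-model")
card.update("cybersecurity", "medium")
card.update("persuasion", "high")
print(card.needs_review())  # ['persuasion'] -> trigger a safety review
```

The notable feature of this pattern is that the trigger is mechanical — scores are updated regularly and the escalation fires automatically — while the resulting review remains a human decision.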


The framework also emphasizes the importance of rigorous and data-driven evaluations and forecasts of AI capabilities and risks, moving away from hypothetical and speculative scenarios that often dominate the public discourse. OpenAI says it is investing in the design and execution of such assessments, as well as in the development of mitigation strategies and safeguards.

The framework is not a static document but an evolving one, according to OpenAI. The lab says it will continually refine and update it based on new data, feedback, and research, and will share its findings and best practices with the broader AI community.

A contrast with Anthropic’s policy

The announcement from OpenAI comes in the wake of several major AI safety releases from its chief rival, Anthropic, a leading AI lab founded by former OpenAI researchers. Anthropic, which is known for its secretive and selective approach, recently published its Responsible Scaling Policy, a framework that defines specific AI Safety Levels (ASLs) and corresponding protocols for developing and deploying AI models.

The two frameworks differ significantly in their structure and methodology. Anthropic’s policy is more formal and prescriptive, directly tying safety measures to model capabilities and pausing development if safety cannot be demonstrated. OpenAI’s framework is more flexible and adaptive, setting general risk thresholds that trigger reviews rather than predefined levels.
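
Neither lab publishes its gating logic in executable form, but the structural difference the two documents describe can be caricatured as two hypothetical functions: a hard capability gate versus a monitored risk threshold. Both are assumptions for illustration, not either lab's actual procedure:

```python
# Illustrative contrast only; neither function reflects real internal process.

def anthropic_style_gate(capability_level: int, safety_demonstrated: bool) -> str:
    """Prescriptive: each capability level requires demonstrated safety to proceed."""
    if capability_level >= 3 and not safety_demonstrated:
        return "pause development"
    return "proceed to next level"

def openai_style_gate(risk_score: float, review_threshold: float = 0.7) -> str:
    """Adaptive: crossing a risk threshold triggers human review, not an automatic stop."""
    if risk_score >= review_threshold:
        return "trigger safety review"
    return "continue monitoring"
```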

Experts say both frameworks have merits and drawbacks, but Anthropic’s approach may hold an edge in incentivizing and enforcing safety standards. Our analysis suggests that Anthropic’s policy bakes safety directly into the development process, whereas OpenAI’s framework remains looser and more discretionary, leaving more room for human judgment — and for human error.


Some observers also see OpenAI playing catch-up on safety protocols after facing backlash for its rapid and aggressive deployment of models like GPT-4, its most advanced large language model, which can generate realistic and persuasive text. Anthropic’s policy may have an advantage partly because it was developed proactively rather than reactively.

Regardless of their differences, both frameworks represent a significant step forward for the field of AI safety, which has often been overshadowed by the pursuit of AI capabilities. As AI models become more powerful and ubiquitous, collaboration and coordination on safety techniques among leading labs and stakeholders are now essential to ensure the beneficial and ethical use of AI for humanity.




