OpenAI Forms Team To Study “Catastrophic” AI Risks, Including Nuclear Threats

OpenAI, the prominent artificial intelligence (AI) research organization, has established a new team called Preparedness to assess AI models for potential catastrophic risks. The team will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning.

Key Takeaway

OpenAI has created a new team, Preparedness, to assess and address potential catastrophic risks associated with AI models. The team will focus on a wide range of risks, including unconventional threats such as chemical, biological, radiological, and nuclear scenarios. OpenAI aims to develop comprehensive risk evaluation and mitigation strategies to ensure the safety of advanced AI systems.

Assessing and Protecting Against Risks

The primary responsibility of the Preparedness team is to track, forecast, and protect against the dangers posed by future AI systems. This includes addressing risks such as the ability of AI models to deceive and manipulate humans, as well as their potential for generating malicious code.

OpenAI’s CEO, Sam Altman, has long expressed concern about the potentially disastrous implications of AI. The company’s open acknowledgement that it is studying threats such as “chemical, biological, radiological, and nuclear” scenarios is a surprising step toward proactively addressing these risks.

Beyond these seemingly far-fetched scenarios, OpenAI is also committed to examining less obvious but more grounded risks associated with AI. To encourage community involvement, the company has announced a risk-study contest, offering a $25,000 prize and potential job opportunities to the authors of the top ten submissions.

A Comprehensive Approach to AI Safety

As part of their efforts, the Preparedness team will also develop a “risk-informed development policy.” This policy will outline OpenAI’s approach to evaluating and monitoring AI models, as well as the company’s actions to mitigate risks and its governance structure for overseeing the entire model development process. The team aims to complement the company’s existing work on AI safety, focusing on both pre- and post-deployment phases.

Ensuring Safety for Highly Capable AI Systems

OpenAI affirms its belief that AI models surpassing the capabilities of today’s state-of-the-art systems have the potential to benefit humanity tremendously. However, the organization also recognizes the increasingly severe risks these advanced systems pose. By forming the Preparedness team, OpenAI seeks to develop the understanding and infrastructure necessary to ensure the safety of highly capable AI systems.
