
OpenAI establishes Preparedness team to safeguard against future AI risks

27 Oct 2023 Hi-network.com

OpenAI has established a 'Preparedness' team, headed by Aleksander Madry, to mitigate the evolving threats associated with advanced AI technology. The team's central focus is assessing and addressing potential 'catastrophic risks' from AI, spanning areas such as cybersecurity and chemical, biological, radiological, and nuclear threats, among others. Its primary mission is to manage the hazards posed by upcoming 'frontier models': the next generation of AI systems, whose capabilities will surpass those of today's models.

The Preparedness team will tightly connect capability assessment, evaluation, and red-team testing for frontier models, ranging from those on the near horizon to AGI-level systems. It will also develop a Risk-Informed Development Policy (RDP) to guide rigorous capability evaluations, protective measures, and governance structures. The initiative underscores OpenAI's commitment to addressing safety concerns in tandem with the advancement of AI technology.

Why does this matter?

OpenAI's core mission revolves around the development of artificial general intelligence (AGI), and the company acknowledges that highly capable AI systems must be made safe. Its stated objectives include answering critical questions about the dangers of frontier AI systems, building robust monitoring and protection systems, and addressing the risk that malicious actors could steal and misuse AI model weights. The safety of AI systems has global implications as AI is integrated into more aspects of society, and OpenAI's efforts here contribute to the responsible and safe deployment of AI technology worldwide.

Hot tags: Artificial Intelligence Development
