SIPRI and UNODA hold dialogues on ‘Responsible AI for Peace’


On 13–14 September, SIPRI and the United Nations Office for Disarmament Affairs (UNODA) held the first of two multi-stakeholder dialogues on ‘Responsible AI for Peace and Security’. Fifteen experts from industry, academia, civil society and government gathered online for two days of discussion. Participants explored how peaceful civilian AI research and innovation may present risks for peace and security, mapped potential misuse scenarios and considered how aware the civilian AI community is of these risks.

Following the dialogues, SIPRI and UNODA launched the podcast series ‘Responsible AI for Peace’ this month. The podcast explores the challenges that AI presents for international peace and security and connects these challenges with the practical world of AI development.

Dr Vincent Boulanin, Programme Director at SIPRI, and Charles Ovink, Political Affairs Officer at UNODA, co-host the first episode of the podcast. The episode unpacks the relationship between civilian advances in AI and international peace and security.

The dialogue series, which will continue into 2024, and the podcast are both part of the Responsible Innovation in AI for Peace and Security initiative conducted jointly by SIPRI and UNODA. The initiative, funded by the European Union, aims to support greater engagement of the civilian AI community in mitigating the risks that the misuse of civilian AI technology can pose for international peace and security.

Click here to listen to the first episode of the podcast.

About SIPRI’s Governance of AI Programme

SIPRI’s Governance of AI Programme seeks to contribute to a better understanding of how AI affects international peace and security. The programme’s research on AI explores themes such as: (a) how AI may find uses in conventional, cyber and nuclear force-related systems; (b) how the military use of AI might create humanitarian and strategic risks, as well as opportunities, for arms control and export verification; and (c) how the risks posed by AI may be governed through international law, arms control processes and responsible research and innovation.


Click here to read more about SIPRI’s Governance of AI Programme.

