On 16–17 November 2023, SIPRI and the United Nations Office for Disarmament Affairs (UNODA) organized a two-day capacity-building workshop on ‘Responsible AI for Peace and Security’ for a select group of STEM students.
The workshop, the first in a series of four, aimed to provide up-and-coming artificial intelligence (AI) practitioners with the opportunity to learn how to address the risks that civilian AI research and innovation may generate for international peace and security.
The event was held in Malmö, Sweden, in collaboration with Malmö University and Umeå University. It brought together 24 participants from 17 countries, including Australia, Bangladesh, China, Ecuador, Finland, France, Germany, Greece, India, Mexico, the Netherlands, Singapore, Sweden, the United Kingdom and the United States.
Over two days, participants engaged in interactive activities aimed at increasing their understanding of (a) how peaceful AI research and innovation may generate risks for international peace and security; (b) how they could help prevent or mitigate those risks through responsible research and innovation; and (c) how they could support the promotion of responsible AI for peace and security.
SIPRI experts led the workshop, with contributions from professors at Umeå University and Malmö University.
The workshop series, which will continue into 2024, is part of a European Union-funded initiative on ‘Responsible Innovation in AI for Peace and Security’, conducted jointly by SIPRI and UNODA.
The next iteration of the workshop will be held in Tallinn, Estonia, on 14–15 February 2024, in partnership with Tallinn University of Technology.
About the SIPRI Governance of AI Programme
The SIPRI Governance of AI Programme seeks to contribute to a better understanding of how AI affects international peace and security. The programme’s research on AI explores themes such as (a) how AI may find uses in systems related to conventional, cyber and nuclear forces; (b) how the military use of AI might create humanitarian and strategic risks, as well as opportunities for arms control and export control verification; and (c) how the risks posed by AI may be governed through international law, arms control processes and responsible research and innovation.