On the 1st and 2nd November, world leaders and key figures from the major technology companies will assemble in the UK for the first global summit on artificial intelligence. Hosted by the UK Prime Minister, Rishi Sunak, the summit aims to ‘bring together key countries, leading tech companies and researchers to agree on safety measures to evaluate and monitor the most significant risks from AI.’
I welcome the initiative as it’s good to get the ball rolling and open dialogue between different governments and the private sector. However, I am not optimistic that much ground will be covered towards common international goals. Credit must go to the UK for being one of the most nimble governments to respond constructively to these risks, but in order to progress, international consensus is required, which is likely a long way off, especially given the lack of a functioning US government at present.
Not even the experts understand where AI is going at this time. Whilst everyone is focused on the current GPT4 generation of large language models (LLMs), there is so much more coming down the line, including multi-modal inputs from audio, image and video and new paradigms being tested beyond LLM by research teams commercially and in governments. Real-time agentic decision-making scares me a lot – i.e. having autonomous decision-making by LLM-powered AIs. We are at the start of that age with many companies building them. LLMs are inherently insecure, and in the rush to be first to market, security will be an afterthought, enabling all kinds of real-world consequences.
On a practical level, in the rush to adopt AI rather than be left behind, companies are testing and implementing AI rapidly, or if they don’t, their employees are themselves, leading to a rise in ‘Shadow AI’ and potentially the leakage of sensitive corporate and personal data. Alongside AI-enhanced phishing attacks, companies embracing agentic AI run the risk of those agents being manipulated by outsiders, potentially disrupting operations or revealing corporate data.
It will likely take an ‘AI 9/11’ before any meaningful regulation takes place because Governments are generally too slow to act. What will 9/11 be? LLMs are master manipulators. While the sci-fi case is probably a kinetic incident connected to agentic LLMs, perhaps the more likely near-term scenario is something relating to election manipulation on a grand scale. Like a far more advanced Cambridge Analytica – this could be very hard to detect and defend against. We will soon be in a world where the vast majority of what we see online is AI-generated or fake. Our democracies and election systems are not yet ready for this challenge.
- A commitment for nations and private industry to meet regularly beyond this one-off summit to progress the initiatives below
- Specific joint initiatives on election fraud and manipulation with international cooperation from large technology companies and governments. Assistance from richer countries to poorer countries in the provision of expertise in this critical area for democracy
- Public education and awareness of the risks arising from AI as well as the benefits, helping citizens be aware of fake, misleading and potentially harmful content
- Agree on mechanisms for international coordination and disclosure of AI developments and incidents that may have a global impact. This would be for both companies and governments with the aim of understanding and countering emerging threats
- Jointly fund an open international AI safety research institute to study AI risks rigorously, sharing research and results
- Establish a joint initiative to develop best practices and guidelines for AI development, including ethics, transparency, fairness and safety
- Regulatory harmonization: Attempt to coordinate regulatory approaches to AI to avoid regulatory arbitrage amongst major economies
- Data protection: Continue to establish international standards for data protection given the importance of data in training and operating AI systems, especially relating to personally identifiable information (PII) and privacy more generally