
AI promises incredible benefits, but also terrible risks. It’s not luddism to rein it in | Sonia Sodha


Will it destroy us, or will it save us? It is an age-old debate between tech optimists and tech pessimists, one that has played out over centuries as the steady march of human progress has delivered new technologies, from the wheel to the printing press to the smartphone. Today it is a conversation being conducted with increasing urgency about artificial intelligence.

The optimists point out that history has proved the doomsayers wrong countless times. Take the printing press: the 15th-century Catholic church worried that the spread of information would undermine authority and stability across Europe; some intellectuals worried that information would be dangerous in the hands of the plebs; craft guilds opposed the democratisation of their skills via manuals. In the end the printing press did enable harms – the publication of a witch-hunting manual in 1486 paved the way for centuries of persecution of women suspected of witchcraft – but they were utterly dwarfed by its enlightenment benefits. Modern-day luddite is not a particularly attractive mantle, and at the first global AI safety summit, being hosted in the UK this week, there will be a lot of industry pressure on the politicians attending to drop the doomerism and join the cool gang.

The payoffs could be incredible. AI could unlock the answers to some of the existential challenges facing mankind: vastly accelerating the discovery of new treatments for diseases like dementia and cancer; creating new antibiotics in the face of microbial resistance; designing technologies that reduce the trade-off between consumption and carbon. On an individual level, leading AI expert and academic Stuart Russell says AI could provide each of us with the equivalent of a “high-powered lawyer, accountant and political adviser” on call at any time; and children around the world with high-quality one-to-one tuition. Everyone could have therapy whenever they wanted it.


But those massive upsides come with massive risks. It is not the stuff of science-fiction fantasy to acknowledge that AI could itself pose an existential threat. Some of the world’s leading AI technologists and enthusiasts have themselves broken ranks to call for more regulation.

There are features of AI that will make the technological revolutions we’ve experienced to date pale in comparison. First is its scale: AI has outstripped Moore’s law, which predicted that computing power would double every two years; the most cutting-edge AI today is 5bn times more powerful than that of a decade ago. So yes, AI will increase productivity and fundamentally change the nature of human work, like its predecessor technologies, but at a pace we’ve never seen before. And while technology has never eliminated the need for human labour – new jobs have been created as others have ceased to exist – it has concentrated economic power and fuelled inequality. Are our political systems ready for this?

Beyond scale, there is the potential lack of human control. Many AI models function as black boxes, their workings invisible to the user. Autonomous AI models that can pursue high-level goals are in development; but can developers predict how they will evolve once unleashed on the world, and what’s to stop AI pursuing goals that don’t align with societal interests? We’ve already seen what happens when social media companies’ profit motives incentivise them to create harm by pushing polarising content and disinformation; that has proved hard enough to regulate, even though it is an easily understood, predictable phenomenon.


The control challenge means AI could hand malicious actors enormous power to wreak devastation, or even evolve into a malicious actor itself. Existing large language models like ChatGPT already churn out hard-to-spot disinformation complete with fake citations; in the wrong hands they could play havoc with our ability to distinguish truth from propaganda. There is dark scope for AI chatbots to develop coercively controlling relationships with humans and to manipulate or radicalise them into doing terrible things: the 21-year-old man sentenced to nine years in prison this month for breaking into Windsor Castle with a crossbow in 2021 had been in conversation with an AI “friend” that encouraged him to carry out the attack, and earlier this year a Belgian man with mental health problems took his own life after being goaded to do so by a chatbot. AI could also help criminals launch devastating cyberattacks and terrorists create bioweapons.

But scant time, resources and energy have been devoted to AI safety. Russell points out that sandwich shops are subject to more regulation than AI companies, and that for other technologies that pose great risks to human life, such as aviation or nuclear power, we don’t let companies operate unless they meet minimum safety standards ahead of time. He argues that AI development needs to be licensed in the same way: if a company can’t show its technology is safe, it should not be permitted to release it.

There are mammoth challenges to this kind of regulation. How do we define harm? What does it even mean to make AI “safe”? The answer is clear with aviation or nuclear power; far less so with AI. A lack of agreed definitions has blighted attempts to regulate social media. And while there will be global consensus on some things, there will be values-based disagreement on others: China this year introduced tough regulations for ChatGPT-style AI that require companies to uphold “core socialist values”.


Those differences matter because, as experts have pointed out, AI regulation is probably only as good as its weakest link globally. Think of the coordination challenge involved in the global response to the climate crisis, and multiply it many times over; with AI, a single country’s action, or inaction, could have far bigger impacts. It’s hard even to imagine the level of global governance innovation needed to oversee this.

But the scale of the challenge must not put political leaders off. They must resist the inevitable calls from big tech to chill out and trust that it’ll all turn out fine. The best-case scenario is that, in 100 years’ time, history students will write off the apocalyptic warnings as 21st-century luddism. But I wonder if they will instead look back at this period as the sweet spot before the downsides of technology began to existentially dwarf its benefits.

Sonia Sodha is an Observer columnist



