
How Sunak’s Bletchley Park summit aims to shape global AI safety


Ever since Rishi Sunak announced in June that the UK would host the “first major global summit on artificial intelligence safety”, officials in Westminster have been racing to assemble a guest list of tech bosses, policymakers and researchers against a punishing deadline.

Sunak’s pledge to organise such a high-profile event inside just six months was not only an attempt to position the UK as a leader in a hot new field. The organisers were eager to move ahead before the next generation of AI systems is released by companies such as Google and OpenAI, giving global leaders a shot at establishing principles to govern the powerful new technology before it outpaces efforts to control it.

“Ideally we would have had a year to prepare,” said one person involved in organising the summit. “We have been rushing to make this happen before the next [AI] models come.”

Emphasising the high stakes ahead of next week’s summit at Bletchley Park, Sunak warned in a speech on Thursday that “humanity could lose control of AI completely” if the technology was not given proper oversight, even as it created new opportunities.

After ChatGPT brought generative AI — technology capable of rapidly creating humanlike text, images or computer code — into the public eye late last year, there have been increasing concerns over how the software could be abused. Critics say AI will be used to create and spread misinformation, increase bias within society or be weaponised in cyber attacks and warfare.

Rishi Sunak said at Bletchley Park on Thursday the UK would not ‘rush to regulate’ AI © Tolga Akmen/Pool/EPA-EFE/Shutterstock

Expected to join the effort to establish ground rules for the development of “frontier AI” next week are political leaders from around 28 countries and regions, including the US, Europe, Singapore, the Gulf states and China, alongside top executives from Big Tech companies and leading AI developers.


A guest list of around 100 people is expected to include Microsoft president Brad Smith, OpenAI chief executive Sam Altman, Google DeepMind chief Demis Hassabis and, from Meta, AI chief Yann LeCun and president of global affairs Nick Clegg. Elon Musk, the tech billionaire who earlier this year formed a new AI start-up called x.ai, has been invited but has not committed to attend, according to people familiar with the matter.

However, the summit’s select roster of attendees has led to criticism from some organisations and executives outside the tech industry, who feel excluded from the meeting.

The prime minister’s representatives on artificial intelligence — tech investor Matt Clifford and former diplomat Jonathan Black — have spent the best part of a month on planes visiting countries to get to grips with their positions on AI and to find common ground.

People involved with the summit said its remit had expanded considerably in the months since Sunak first announced it. Initially, it had been focused almost exclusively on national security risks, such as cyber attacks and the ability to use AI to design bioweapons; it is now expected to cover everything from deepfakes to healthcare.

Within government, there has been disagreement over the event’s scope, these people said. The Department for Science, Innovation and Technology wanted a wider list of invitees and broader discussions on the social impacts of AI, while Number 10 preferred to keep it to a small group of nations and tech bosses to focus on the narrower brief of national security.


“It has been absolute chaos and nobody has been clear who is holding the pen on any of it,” said one person involved in the summit.

The final agenda will devote the first day to roundtable discussions on practical ways of addressing safety and on what policymakers, the international community, tech companies and scientists can do. It will end with a case study on using AI for the public good in education.

On the second day, led by Sunak, around 30 political leaders and tech executives will meet in a more private setting. Themes covered will include steps on making AI safe, as well as bilateral talks and closing remarks from the host prime minister.

One product of the summit will be a communiqué that is intended to establish attendees’ shared position on the exact nature of the threat posed by AI.

An earlier draft suggested it would state that so-called “frontier AI”, the most advanced form of the technology, which underpins products such as OpenAI’s ChatGPT and Google’s Bard chatbot, could cause “serious, even catastrophic harm”.

The communiqué is one of four key results organisers are planning from the summit, according to a government insider briefed on the plans. The others are the creation of an AI Safety Institute, the formation of an international panel to research AI’s evolving risks, and the announcement of the event’s next host country.

In Thursday’s speech, Sunak said the UK would not “rush to regulate” AI. Instead, the summit is likely to focus on “best practice” standards for companies, officials involved in the event said.


However, the government is still keen to independently evaluate the models that power AI products. Officials have been negotiating with tech companies over deeper access to their systems. The government has also been trying to buy chips from companies including Nvidia, to build sophisticated computer systems to run independent safety tests on AI models.

Bletchley Park, venue for the AI summit and historic home of Britain’s wartime codebreakers and computer pioneers © Jack Taylor/Getty Images

A government paper, set to be published on Friday, will set out recommendations for scaling AI responsibly. Companies should have policies in place to turn off their products if harm cannot otherwise be prevented, employ security consultants to try to “hack” into their systems to identify vulnerabilities, and label content created or modified by AI, the paper says.

Michelle Donelan, the UK’s technology minister who is chairing the first day of the summit, is advocating that AI firms subscribe to these processes at the event.

“You shouldn’t really dream of having a company in this space without this safety process in place,” Donelan told the Financial Times. “The companies are all in agreement that things have got to change. They are uneasy with the current situation, which is basically they’re marking their own homework, and that’s why they’ve agreed to work with us.”

Additional reporting by Hannah Murphy and George Parker



