
UK government sets out plans to regulate ‘responsible use’ of AI


Britain plans to split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

AI, which is rapidly evolving with advances such as the ChatGPT app, could improve productivity and help unlock growth, but there are concerns about the risks it could pose to people’s privacy, human rights or safety, the government said.

On Wednesday, the government published a policy paper outlining an approach that would allow its rules to adapt as the technology develops.

With the aim of striking a balance between regulation and innovation, the government plans to use existing regulators in different sectors rather than giving responsibility for AI governance to a new single regulator.

It said that over the next 12 months, existing regulators would issue practical guidance to organisations, as well as other tools and resources like risk assessment templates.


The regulators should consider principles including safety, transparency and fairness to guide the use of AI in their industries.

Legislation could later be introduced to ensure regulators were applying the principles consistently, according to the paper.

This approach will mean there is more consistency across the regulatory landscape and that the rules can adapt as the fast-moving technology evolves, the government hopes.

‘AI has the potential to make Britain a smarter, healthier and happier place to live and work,’ said Science, Innovation and Technology Secretary Michelle Donelan.

‘Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.’


However, some experts argue that a central regulator for AI is needed, since individual regulators lack the necessary skills and many AI providers operate across different sectors.

‘There are many unsolved problems in AI, especially generative AI, including hallucinations (making stuff up), bias, copyright, overreliance, privacy, security, cost and inclusion, to name a few,’ said Dr Andrew Rogoyski from the University of Surrey’s Institute for People-Centred AI.

The paper gives regulators a year to issue guidance to organisations, with legislation to be introduced ‘when parliamentary time allows’ to ensure they apply the principles consistently.

‘The pace and scale of change in AI development is extraordinary, and everyone is struggling to keep up. I have real concerns that whatever is put forward will be made irrelevant within weeks or months,’ Dr Rogoyski added.

Meanwhile, the EU is working on the AI Act, a landmark piece of legislation to govern the use of artificial intelligence in Europe.

Prime Minister Rishi Sunak, since taking office last year, has spoken of his ambition to turn the UK into a ‘science superpower’.

In his recent Budget, Chancellor Jeremy Hunt promised to invest close to £1 billion to build an exascale supercomputer and establish a new AI Research Resource, with the first funds being available this year.

Those involved with AI are invited to provide feedback on the government’s plans through a consultation, which closes on June 21.


