AI poses national security threat, warns terror watchdog


Security services fear the new technology could be used to groom vulnerable people

The creators of artificial intelligence need to abandon their “tech utopian” mindset, according to the terror watchdog, amid fears that the new technology could be used to groom vulnerable individuals.

Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, said the national security threat from AI was becoming ever more apparent and the technology needed to be designed with the intentions of terrorists firmly in mind.

He said too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks.

“They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You’ve got to hardwire the defences against what you know people will do with it,” said Hall.

The government’s independent reviewer of terrorism legislation admitted he was increasingly concerned by the scope for artificial intelligence chatbots to persuade vulnerable or neurodivergent individuals to launch terrorist attacks.

“What worries me is the suggestibility of humans when immersed in this world and the computer is off the hook. Use of language, in the context of national security, matters because ultimately language persuades people to do things.”

The security services are understood to be particularly concerned with the ability of AI chatbots to groom children, who are already a growing part of MI5’s terror caseload.

As calls for regulation of the technology grow, following warnings last week from AI pioneers that it could threaten the survival of the human race, the prime minister, Rishi Sunak, is expected to raise the issue when he travels to the US on Wednesday to meet President Biden and senior congressional figures.

Back in the UK, efforts to confront the national security challenges posed by AI are intensifying, with a partnership between MI5 and the Alan Turing Institute, the national body for data science and artificial intelligence, leading the way.

Alexander Blanchard, a digital ethics research fellow in the institute’s defence and security programme, said its work with the security services indicated the UK was treating the security challenges presented by AI extremely seriously.

“There’s a lot of willingness among defence and security policymakers to understand what’s going on, how actors could be using AI, what the threats are.

“There really is a sense of a need to keep abreast of what’s going on. There’s work on understanding what the risks are, what the long-term risks are [and] what the risks are for next-generation technology.”

Last week, Sunak said that Britain wanted to become a global centre for AI and its regulation, insisting it could deliver “massive benefits to the economy and society”. Both Blanchard and Hall say the central issue is how humans retain “cognitive autonomy” – control – over AI and how this control is built into the technology.

The potential for vulnerable individuals alone in their bedrooms to be quickly groomed by AI is increasingly evident, said Hall.

On Friday, Matthew King, 19, was jailed for life for plotting a terror attack, with experts noting the speed at which he had been radicalised after watching extremist material online.

Hall said tech companies needed to learn from past complacency: social media has long been a key platform for exchanging terrorist content.

Greater transparency from the firms behind AI technology was also needed, Hall added, primarily around how many staff and moderators they employed.

“We need absolute clarity about how many people are working on these things and their moderation,” he said. “How many are actually involved when they say they’ve got guardrails in place? Who is checking the guardrails? If you’ve got a two-man company, how much time are they devoting to public safety? Probably little or nothing.”

New laws to tackle the terrorism threat from AI might also be required, said Hall, to curb the growing danger of lethal autonomous weapons – devices that use AI to select their targets.

Hall said: “You’re talking about a type of terrorist who wants deniability, who wants to be able to ‘fly and forget’. They can literally throw a drone into the air and drive away. No one knows what its artificial intelligence is going to decide. It might just dive-bomb a crowd, for example. Do our criminal laws capture that sort of behaviour? Generally terrorism is about intent; intent by human rather than intent by machine.”

Lethal autonomous weapons – or “loitering munitions” – have already been seen on the battlefields of Ukraine, raising moral questions about the implications of airborne autonomous killing machines.

“AI can learn and adapt, interacting with the environment and upgrading its behaviour,” Blanchard said.


