UK tech tsar warns of AI cyberthreats posed to NHS – CSO Online


The UK government’s new artificial intelligence (AI) tsar Ian Hogarth has warned that cybercriminals could use AI to attack the National Health Service (NHS). Hogarth, who is the chair of the UK’s “Frontier AI” task force, said that AI could be weaponized to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins. Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

“The government is quite rightly putting these threats to the very top of the agenda, but technology leaders need to heed the warning and get moving, to better prepare for the next inevitable attack,” Hogarth told the Financial Times.

Announced by the Prime Minister in April, the UK government’s Frontier AI task force was established in June to lead the safe and reliable development of frontier AI models, including generative AI large language models (LLMs) like ChatGPT and Google Bard. It is backed with £100 million in funding to ensure sovereign capabilities and broad adoption of safe and reliable foundation models, helping cement the UK’s position as a science and technology superpower by 2030.

International collaboration needed to address AI risks

The threats posed by advancing AI technology are fundamentally global risks, Hogarth said. “The kind of risks that we are paying most attention to are augmented national security risks. A huge number of people in technology right now are trying to develop AI systems that are superhuman at writing code. That technology is getting better and better by the day.”

In the same way the UK collaborates with China on aspects of biosecurity and cybersecurity, there is real value in international collaboration around the larger-scale risks of AI, he added. “It’s the sort of thing where you can’t go it alone in terms of trying to contain these threats.”

AI a “chronic risk” to UK national security

Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK’s safety, security, or critical systems at a national level. The latest version describes AI as a “chronic risk”, meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack.

The UK government has committed to hosting the first global summit on AI Safety which will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor risks from AI. The National AI Strategy, published in 2021, outlines steps for how the UK will begin its transition to an AI-enabled economy, the role of research and development in AI growth and the governance structures that will be required.

Meanwhile, the government’s white paper on AI, published in 2023, commits to establishing a central risk function that will identify and monitor the risks that come from AI. “By addressing these risks effectively, we will be better placed to utilise the advantages of AI.”


Even AI attacks leave detectable traces, fingerprints

AI is currently used by cybercriminals and has the potential to be developed further to create more sophisticated social engineering attempts – an example being AI-driven phishing attacks that prey on the human element of cybersecurity to gain initial access to an environment, Martin Riley, director of managed security services at Bridewell, tells CSO. “This could be targeting an NHS employee’s email account, which if successful and the employee had access rights to critical systems, the hacker could then attempt to steal patient data or knock systems offline causing disruption to patient services.”

While the NHS has been called out by Ian Hogarth specifically, the above scenario could apply to any business, he adds. “As AI improves its ability to generate code, threat actors can quickly build new malicious code that can learn from mistakes and regularly evolve. This makes them more successful in gaining initial access and bypassing preventative technologies.”

However, every hacker that gains access leaves a trace or fingerprint. This could be the dumping of credentials or the movement of a cybercriminal across a company network that leaves known traces, regardless of how they are achieved, Riley says. “The NHS, and all businesses, must focus on how those fingerprints or traces are captured and ensure they have real-time visibility of when and where they occurred. Once detected, the speed of response to any form of breach is crucial as the NHS could catch something potentially malicious and investigate its severity or potential cause for harm.”
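The behaviours Riley describes, such as credential dumping and lateral movement across a network, surface as recognisable patterns in security logs regardless of how the initial access was achieved. As a purely illustrative sketch (the event labels, field names, and threshold below are assumptions, not details from the article or any specific NHS tooling), a detection pass over parsed log events might look like this:

```python
# Illustrative sketch only: flagging known attacker "fingerprints" in parsed log
# events. Event labels, field names, and the threshold are hypothetical.
from collections import defaultdict

# Hypothetical labels for actions that are suspicious on their own
SUSPICIOUS_ACTIONS = {"credential_dump", "lsass_access"}
# Hypothetical threshold: distinct hosts one account touches before alerting
LATERAL_MOVEMENT_THRESHOLD = 3

def detect_fingerprints(events):
    """events: iterable of dicts with 'account', 'host', and 'action' keys."""
    alerts = []
    hosts_per_account = defaultdict(set)
    for e in events:
        # Direct hit on a known-bad action, e.g. credential dumping
        if e["action"] in SUSPICIOUS_ACTIONS:
            alerts.append(f"suspicious action {e['action']} by {e['account']}")
        hosts_per_account[e["account"]].add(e["host"])
    # One account active across many hosts can indicate lateral movement
    for account, hosts in hosts_per_account.items():
        if len(hosts) >= LATERAL_MOVEMENT_THRESHOLD:
            alerts.append(
                f"possible lateral movement by {account} across {len(hosts)} hosts"
            )
    return alerts
```

Real deployments would rely on SIEM/EDR platforms rather than hand-rolled rules, but the principle is the same one Riley points to: capture the traces in real time, then investigate quickly once something is flagged.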
