ChatGPT and other AI systems could be exploited to launch cyber attacks and take down other computer systems, a first-of-its-kind report warns.
Researchers from the University of Sheffield exposed the vulnerability of six commercial AI tools by successfully attacking each of them.
Experts discovered they could produce malicious code if they asked specific questions to different platforms.
The code, once executed, could leak confidential information and interrupt – or even destroy – services.
The team’s work has already been used to strengthen commercial AI platforms, though they warn that updated cyber strategies are constantly in development.
The study, by academics from Sheffield’s Department of Computer Science, is the first to show that Text-to-SQL systems – AI that enables people to search databases by asking questions in plain language which are used throughout a wide range of industries – can be exploited to attack computer systems in the real world.
Their findings revealed how AIs can be infiltrated and manipulated to help steal sensitive information, tamper with or destroy whole databases or even bring down services through Denial-of-Service attacks.
The team found security vulnerabilities in six commercial AI tools: ChatGPT; BAIDU-UNIT – a leading Chinese platform adopted by clients in industries including e-commerce, banking, journalism, telecommunications, automobile and civil aviation; AI2SQL; AIHELPERBOT; Text2SQL and ToolSKE.
On Baidu-UNIT – a dialogue customisation app for simplified Chinese – the scientists were able to access confidential Baidu server configurations and rendered one server node out of order.
‘In reality, many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood,’ said Xutan Peng, a PhD student at the University of Sheffield who co-led the research.
‘At the moment, ChatGPT is receiving a lot of attention.
‘It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.’
Findings from the study – presented at the International Symposium on Software Reliability Engineering (ISSRE) in Florence, Italy, earlier this month – also highlight the dangers in how people are using AI to learn programming languages, so they can interact with databases.
‘The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are,’ Mr Peng added.
‘For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records.
‘As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.’
As part of the study, the study team also discovered that it’s possible to launch simple backdoor attacks, such as planting a ‘Trojan Horse’ in Text-to-SQL models by poisoning the training data.
Such an attack would not affect model performance in general, but it could be triggered at any time to cause real harm to anyone who uses it.
Dr Mark Stevenson, a senior lecturer in the Natural Language Processing research group at the University of Sheffield, said: ‘Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work.
‘Large language models, like those used in Text-to-SQL systems, are extremely powerful, but their behaviour is complex and can be difficult to predict.
‘At the University of Sheffield, we are currently working to better understand these models and allow their full potential to be safely realised.’
The researchers are already working alongside stakeholders in the cybersecurity industry to address the vulnerabilities their study revealed, as Text-to-SQL systems become more and more widely used throughout society.
They have already been recognised for their work by Chinese platform Baidu, whose Security Response Centre rated the exposed vulnerabilities as highly dangerous.
The company has since addressed and fixed all reported vulnerabilities and has even paid the scientists as a reward for their groundbreaking work.
The researchers now hope the vulnerabilities they exposed will serve as a rallying cry to the natural language processing and cybersecurity communities to identify and address security issues that have so far gone unnoticed.
‘Our efforts are being recognised by the industry and they are following our advice to fix these security flaws,’ Mr Peng said.
‘However, we are opening a door on an endless road – what we now need to see are large groups of researchers creating and testing patches to minimise security risks through open source communities.
‘There will always be more advanced strategies being developed by attackers, which means security strategies must keep pace.
‘To do so we need a new community to fight these next-generation attacks.’
MORE : Artificial intelligence poses ‘risk of extinction’, experts warn
MORE : Inside the artificial intelligence ‘X’ files taking UK military into a new age
MORE : Artificial intelligence must be used for ‘public good’, Labour leader to say
Get your need-to-know
latest news, feel-good stories, analysis and more
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.