HackerOne, a security platform and hacker community forum, hosted a roundtable on Thursday, July 27, about the way generative artificial intelligence will change the practice of cybersecurity. Hackers and industry experts discussed the role of generative AI in various aspects of cybersecurity, including novel attack surfaces and what organizations should keep in mind when it comes to large language models.
Jump to:
Generative AI can introduce risks if organizations adopt it too quickly
Organizations using generative AI like ChatGPT to write code should be careful they don’t end up creating vulnerabilities in their haste, said Joseph “rez0” Thacker, a professional hacker and senior offensive security engineer at software-as-a-service security company AppOmni.
For example, ChatGPT doesn’t have the context to understand how vulnerabilities might arise in the code it produces. Organizations have to hope that ChatGPT will know how to produce SQL queries that aren’t vulnerable to SQL injection, Thacker said. Attackers being able to access user accounts or data stored across different parts of the organization often cause vulnerabilities that penetration testers frequently look for, and ChatGPT might not be able to take them into account in its code.
The two main risks for companies that may rush to use generative AI products are:
- Allowing the LLM to be exposed in any way to external users that have access to internal data.
- Connecting different tools and plugins with an AI feature that may access untrusted data, even if it’s internal.
How threat actors take advantage of generative AI
“We have to remember that systems like GPT models don’t create new things — what they do is reorient stuff that already exists … stuff it’s already been trained on,” said Klondike. “I think what we’re going to see is people who aren’t very technically skilled will be able to have access to their own GPT models that can teach them about the code or help them build ransomware that already exists.”
Prompt injection
Anything that browses the internet — as an LLM can do — could create this kind of problem.
One possible avenue of cyberattack on LLM-based chatbots is prompt injection; it takes advantage of the prompt functions programmed to call the LLM to perform certain actions.
For example, Thacker said, if an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that’s exfiltrated to the attacker’s side. Or, an attacker could email a prompt injection payload to an LLM tasked with reading and replying to emails.
SEE: How Generative AI is a Game Changer for Cloud Security (TechRepublic)
Roni “Lupin” Carta, an ethical hacker, pointed out that developers using ChatGPT to help install prompt packages on their computers can run into trouble when they ask the generative AI to find libraries. ChatGPT hallucinates library names, which threat actors can then take advantage of by reverse-engineering the fake libraries.
Attackers could insert malicious text into images, too. Then, when an image-interpreting AI like Bard scans the image, the text will deploy as a prompt and instruct the AI to perform certain functions. Essentially, attackers can perform prompt injection through the image.
Deepfakes, custom cryptors and other threats
Carta pointed out that the barrier has been lowered for attackers who want to use social engineering or deepfake audio and video, technology which can also be used for defense.
“This is amazing for cybercriminals but also for red teams that use social engineering to do their job,” Carta said.
From a technical challenge standpoint, Klondike pointed out the way LLMs are built makes it difficult to scrub personally identifying information out of their databases. He said that internal LLMs can still show employees or threat actors data or execute functions that are supposed to be private. This doesn’t require complex prompt injection; it might just involve asking the right questions.
“We’re going to see entirely new products, but I also think the threat landscape is going to have the same vulnerabilities we’ve always seen but with greater quantity,” Thacker said.
Cybersecurity teams are likely to see a higher volume of low-level attacks as amateur threat actors use systems like GPT models to launch attacks, said Gavin Klondike, a senior cybersecurity consultant at hacker and data scientist community AI Village. Senior-level cybercriminals will be able to make custom cryptors — software that obscures malware — and malware with generative AI, he said.
“Nothing that comes out of a GPT model is new”
There was some debate on the panel about whether generative AI raised the same questions as any other tool or presented new ones.
“I think we need to remember that ChatGPT is trained on things like Stack Overflow,” said Katie Paxton-Fear, a lecturer in cybersecurity at Manchester Metropolitan University and security researcher. “Nothing that comes out of a GPT model is new. You can find all of this information already with Google.
“I think we have to be really careful when we have these discussions about good AI and bad AI not to criminalize genuine education.”
Carta compared generative AI to a knife; like a knife, generative AI can be a weapon or a tool to cut a steak.
“It all comes down to not what the AI can do but what the human can do,” Carta said.
SEE: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)
Thacker pushed back against the metaphor, saying that generative AI cannot be compared to a knife because it’s the first tool humanity has ever had that can “… create novel, completely unique ideas due to its wide domain experience.”
Or, AI could end up being a mix of a smart tool and creative consultant. Klondike predicted that, while low-level threat actors will benefit the most from AI making it easier to write malicious code, the people who benefit the most on the cybersecurity professional side will be at the senior level. They already know how to build code and write their own workflows, and they’ll ask the AI to help with other tasks.
How businesses can secure generative AI
The threat model Klondike and his team created at AI Village recommends software vendors to think of LLMs as a user and create guardrails around what data it has access to.
Treat AI like an end user
Threat modeling is critical when it comes to working with LLMs, he said. Catching remote code execution, such as a recent problem in which an attacker targeting the LLM-powered developer tool LangChain, could feed code directly into a Python code interpreter, is important as well.
“What we need to do is enforce authorization between the end user and the back-end resource they’re trying to access,” Klondike said.
Don’t forget the basics
Some advice for companies who want to use LLMs securely will sound like any other advice, the panelists said. Michiel Prins, HackerOne cofounder and head of professional services, pointed out that, when it comes to LLMs, organizations seem to have forgotten the standard security lesson to “treat user input as dangerous.”
“We’ve almost forgotten the last 30 years of cybersecurity lessons in developing some of this software,” Klondike said.
Paxton-Fear sees the fact that generative AI is relatively new as a chance to build in security from the start.
“This is a great opportunity to take a step back and bake some security in as this is developing and not bolting on security 10 years later.”