
ChatGPT is a bigger threat to cybersecurity than most realize


A language-generating AI model called ChatGPT, available for free, has taken the internet by storm. While AI has the potential to help IT and security teams become more efficient, it also enables threat actors to develop malware.

In this interview with Help Net Security, Daniel Spicer, Chief Security Officer for Ivanti, talks about what this technology means for cybersecurity.

What are some reasons for concern regarding the application of AI to cybersecurity?

The tech industry has been focused on creating generative AI that responds to a command or query to produce text, video, or audio content. This type of AI cannot analyze and contextualize information to arrive at real understanding.

Currently, the value of generative AI, like ChatGPT and DALL-E, is lopsided in favor of threat actors. Generative AI gives you exactly what you ask for – which is useful when crafting phishing emails. Information security, by contrast, needs AI that can take in information, enrich it with additional context, and reach a conclusion based on its understanding.

This is why the use cases that generative AI addresses are extremely attractive to threat actors. The most obvious concern regarding threat actors’ use of AI is social engineering. AI makes it possible to create an enormous volume of sophisticated phishing emails with minimal effort. It can also create stunningly realistic fake profiles. Even places that were previously considered reasonably bot-free, like LinkedIn, now host convincing fake profiles, complete with profile pictures. And we’re just starting to see the impact.

We’ve also already seen that threat actors are using ChatGPT to develop malware. While there are mixed results on the quality of ChatGPT’s code-writing capability, generative AI that is specialized in code development can accelerate malware authoring. Eventually, we’ll see it help exploit vulnerabilities faster – within hours of a vulnerability’s disclosure instead of days.

On the flip side, AI has the potential to help IT and security teams become more efficient and effective, enabling automated or semi-automated vulnerability detection and remediation as well as risk-based prioritization. That makes AI capable of analyzing data very promising for IT and security teams facing resource constraints. Unfortunately, this type of tool does not yet exist, and when it does it may be complicated to implement, because of the training required for it to understand what “normal” looks like in a particular environment.
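
To make “risk-based prioritization” concrete, here is a minimal sketch of the idea in Python. The fields, weights, and CVE identifiers are illustrative assumptions for this article, not any vendor’s actual scoring model; the point is only that remediation order should combine raw severity with exploit availability and asset value.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_base: float        # CVSS v3 base score, 0.0-10.0
    exploit_public: bool    # whether a public exploit/PoC exists
    asset_criticality: int  # 1 (low) to 5 (business-critical), set per asset

def risk_score(v: Vulnerability) -> float:
    """Blend technical severity with exploitability and business impact."""
    score = v.cvss_base
    if v.exploit_public:
        score *= 1.5  # flaws with public exploits jump the queue
    return score * v.asset_criticality

# Placeholder findings for illustration only.
findings = [
    Vulnerability("CVE-2024-0001", 9.8, exploit_public=False, asset_criticality=2),
    Vulnerability("CVE-2024-0002", 7.5, exploit_public=True, asset_criticality=5),
]

# Remediate in descending risk order rather than by raw CVSS alone.
for v in sorted(findings, key=risk_score, reverse=True):
    print(f"{v.cve_id}: risk={risk_score(v):.1f}")
```

In this toy example, the CVSS 7.5 flaw with a public exploit on a business-critical asset (risk 56.3) outranks the CVSS 9.8 flaw on a low-value one (risk 19.6) – exactly the kind of re-ranking that pure severity-based triage misses.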

The industry now needs to turn its attention to building AI that helps defenders analyze and interpret huge amounts of data. Until we see major improvements in AI tools’ ability to understand, attackers will continue to hold the advantage, because today’s generative tools already meet their requirements.

ChatGPT has checks implemented to prevent nefarious usage. Are those checks good enough to keep cybercriminals at bay?

In a word, no. So far, the checks ChatGPT has in place are ineffective. For instance, researchers have found that the way you phrase a question to ChatGPT significantly changes its response, including how effectively it rejects malicious requests. Depending on the prompts you give a generative AI tool, it is possible to piece together all of the steps of a malware attack. The one upside in this scenario is that ChatGPT doesn’t currently write good code – but that will change.

The current deployment of ChatGPT should be viewed as a large-scale demo – effectively a beta launch. Generative AI is going to keep getting better – it has already lowered the bar for phishing attacks – and future versions of the technology will be able to develop malware.

This will change the arms race between malware developers and AV/EDR vendors, since code-focused AI can restructure a program’s design far more substantially than traditional packing services can.
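
To see why, consider a harmless toy example in Python (deliberately nothing to do with malware): two functions with identical behavior that share almost no bytes, so any hash or byte signature derived from one will never match the other. A packer merely wraps the same underlying code; AI-driven rewriting can produce this kind of structural divergence across an entire program.

```python
import hashlib
import inspect

# Two routines with identical behavior but different structure.
def collect_v1(paths):
    results = []
    for p in paths:
        results.append(p.lower())
    return results

def collect_v2(ps):
    return [item.lower() for item in ps]

# Identical output...
assert collect_v1(["A", "B"]) == collect_v2(["A", "B"])

# ...but entirely different bytes on disk, so a signature or hash
# keyed to one variant never matches the other.
for fn in (collect_v1, collect_v2):
    digest = hashlib.sha256(inspect.getsource(fn).encode()).hexdigest()
    print(f"{fn.__name__}: sha256={digest[:16]}...")
```

Signature- and hash-based detection keys on bytes, not behavior, which is why behavioral and anomaly-based defenses matter more as code rewriting gets cheaper.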

We can’t put the genie back in the bottle. Threat actors using generative AI in their attack arsenal is an eventuality, and now we need to focus on how we will defend against this new threat.

Can ChatGPT be abused by attackers with no technical knowledge?

It’s safe to assert that ChatGPT will dramatically lower the skills-based cost of entry for threat actors. Currently, the sophistication of a threat is more or less tied to the sophistication of the threat actor behind it, but ChatGPT has opened the malware space to a whole new tier of rookie threat actors who will increasingly be able to punch far above their weight.

That’s alarming, because it not only expands the volume of potential threats and the number of potential threat actors, but also makes it more likely that people who have little to no idea what they’re doing will join the fray. There’s a level of inherent recklessness involved that is unprecedented, even in the malware space.

That said, attackers still face real challenges today. But now that the technology has proven itself, we will see steady iteration and improvement.

What happens in a future version of ChatGPT, one that users can connect to tools that find vulnerabilities?

The cybersecurity world is already struggling to keep up with the sheer number of code vulnerabilities. AI will push those numbers even higher as it becomes both faster and smarter at finding vulnerabilities. Combined with AI coders, we could see the weaponization of newly found vulnerabilities in minutes, not days.

To be clear, AI is not better or faster at this yet, but we expect it to get there. One day we will see vulnerability discovery, weaponization, and payload delivery all done by AI, without human intervention.

Check Point researchers demonstrated how ChatGPT could create a plausible phishing email. How do you expect threats to evolve once more attackers start using AI?

Again, AI has made it faster and easier for people with limited technical knowledge to produce vast quantities of realistic phishing attacks and fake profiles. Because this lowered barrier to entry is relatively new, it’s a bit of a free-for-all among rookie threat actors at this point. But as threat actors get more comfortable with AI, we’ll see sophistication grow as they compete against each other for access and prominence.

In the future we will have AI that can complete an entire attack chain, starting with drafting a phishing email. It’s not far-fetched that AI will be able to use readily available tools to scope out an environment and quickly identify the best path into an organization for ransomware. It will be able to determine the network layout and architecture, then manipulate the tool chain to obfuscate payloads and avoid detection by defenders – all without anyone having to press another button.

It’s not good news for those on the other side.
