
How generative AI changes cybersecurity


In the technology world, the latter half of the 2010s was mostly about slight tweaks, not sweeping changes: Smartphones got slightly better, and computer processors got somewhat faster. Then OpenAI unveiled ChatGPT to the public in 2022, and—seemingly all at once—we were in a qualitatively new era. 

The predictions have been inescapable in recent months. Futurists warn us that AI will radically overhaul everything from medicine to entertainment to education and beyond. In this instance, the futurists might be closer to the truth. Play with ChatGPT for just a few minutes, and it is impossible not to feel that something massive is on the horizon. 

With all the excitement surrounding generative AI, it is important to identify the ways in which the technology will impact cybersecurity—the good, the bad, and the ugly. It is an inflexible rule of the tech world that any tool that can be put to good use can also be put to nefarious use; what truly matters is that we understand the risks and how to handle them responsibly. Large language models (LLMs) and generative artificial intelligence (GenAI) are simply the next tools in the shed to understand.

The good: Turbocharging defenses

The concern top of mind for most people considering the consequences of LLMs and AI technologies is how they might be used for adverse purposes. The reality is more nuanced: these technologies have already made tangible positive differences in the world of cybersecurity.

For instance, according to an IBM report, AI and automated monitoring tools have made the most significant impact on the speed of breach detection and containment. Organizations that leverage these tools experience a shorter breach life cycle compared to those operating without them. As we have seen in the news recently, software supply chain breaches have devastating and long-lasting effects, affecting an organization’s finances, partners, and reputation. Early detection can provide security teams with the necessary context to act immediately, potentially reducing costs by millions of dollars.


Despite these benefits, only about 40% of the organizations studied in the IBM report actively utilize security AI and automation within their solution stack. By combining automated tools with a robust vulnerability disclosure program and continuous adversarial testing by ethical hackers, organizations can round out their cybersecurity strategy and significantly boost their defenses.

The bad: Novice to threat actor or hapless programmer

LLMs are paradoxical in that they offer threat actors real benefits, such as sharpening their social engineering tactics, yet they cannot replace a working professional and the skills that person possesses.

The technology is heralded as the ultimate productivity hack, which has led individuals to overestimate its capabilities and believe it can take their skill and productivity to new heights. Consequently, the potential for misuse within cybersecurity is tangible: the race for innovation is pushing organizations toward rapid adoption of AI-driven productivity tools, which can introduce new attack surfaces and vectors.

We are already seeing the consequences of its misuse play out across different industries. This year, a lawyer was found to have submitted a legal brief filled with fabricated citations because he had prompted ChatGPT to draft it for him, leading to dire consequences for himself and his client.

In the context of cybersecurity, we should expect that inexperienced programmers will turn to predictive language model tools for help when they face a difficult coding problem. That is not inherently negative, but issues arise when organizations lack properly established code review processes and code is deployed without vetting.


For instance, many users are unaware that LLMs can produce false or completely incorrect information. Likewise, LLMs can return compromised or nonfunctional code to programmers, who then incorporate it into their projects, potentially opening their organization to new threats, as the hypothetical sketch below illustrates.
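The snippet below is a made-up example, not taken from any real LLM output or incident. It shows the kind of plausible-looking helper an assistant might suggest for a database lookup: the unsafe version works in casual testing yet is wide open to SQL injection, exactly the class of flaw a code review process should catch. The table, columns, and function names are all invented for illustration.

```python
import sqlite3

# Hypothetical example of code an LLM might plausibly suggest for
# "find a user by name." It runs, but it builds the query by string
# interpolation and is therefore vulnerable to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# The version a reviewer should insist on: a parameterized query lets the
# database driver handle user input safely.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                     [(1, "alice", "alice@example.com"),
                      (2, "bob", "bob@example.com")])
    malicious_input = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious_input))  # returns every row
    print(find_user_safe(conn, malicious_input))    # returns nothing
```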

AI tools and LLMs are certainly progressing at an impressive pace. However, it is necessary to understand their current limitations and how to incorporate them into software development practices safely.

The ugly: AI bots spreading malware 

Earlier this year, HYAS researchers announced that they had developed a proof-of-concept malware dubbed BlackMamba. Proofs of concept like these are often designed to be frightening—to jolt cybersecurity experts into awareness around this or that pressing issue. But BlackMamba was decidedly more disturbing than most.

Effectively, BlackMamba is an exploit that can evade seemingly every cybersecurity product—even the most complex. HYAS principal security engineer Jeff Sims put it this way in a blog post explaining the threat:  

BlackMamba utilizes a benign executable that reaches out to a high-reputation API (OpenAI) at runtime, so it can return synthesized, malicious code needed to steal an infected user’s keystrokes. It then executes the dynamically generated code within the context of the benign program using Python’s exec() function, with the malicious polymorphic portion remaining totally in memory. Every time BlackMamba executes, it re-synthesizes its keylogging capability, making the malicious component of this malware truly polymorphic. 

BlackMamba might have been a highly controlled proof of concept, but this is not an abstract or unrealistic concern. If ethical hackers have discovered this method, you can be sure that cybercriminals are exploring it, too. 
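The original write-up does not prescribe specific countermeasures, but as one narrow illustration of where defenders can look, the sketch below uses Python's audit hooks (PEP 578) to log when a process compiles or executes dynamically generated code, the behavior a BlackMamba-style payload depends on. This is a minimal, assumed setup for demonstration only; real endpoint protection operates at a much lower level than an in-process hook.

```python
import sys

# A minimal, illustrative sketch (not a production control): PEP 578 audit
# hooks let an instrumented Python process observe dynamic compilation and
# execution of code objects.
def audit(event: str, args: tuple) -> None:
    if event == "compile":
        _source, filename = args
        print(f"[audit] dynamic compile, filename={filename!r}", file=sys.stderr)
    elif event == "exec":
        print("[audit] exec of a dynamically built code object", file=sys.stderr)

sys.addaudithook(audit)

# A harmless stand-in for "code assembled at runtime": compiling and
# executing it triggers both audit events above.
payload = "print('hello from dynamically generated code')"
code = compile(payload, "<generated>", "exec")
exec(code)
```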


So what are organizations to do? 

Most importantly, it would be wise to rethink your employee training to incorporate guidelines for the responsible use of AI tools in the workplace. That training should also account for the AI-enhanced sophistication of new social engineering techniques involving generative adversarial networks (GANs) and large language models. 

Large enterprises that are integrating AI technology into their workflows and products must also ensure they test these implementations for common vulnerabilities and mistakes to minimize the risk of a breach. 

Furthermore, organizations will benefit from adhering to strict code review processes, particularly for code developed with the assistance of LLMs, and from having the proper channels in place to identify vulnerabilities within existing systems. A lightweight pre-merge check, sketched below, is one way to flag LLM-assisted code that deserves extra reviewer attention.
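The script below is a hypothetical example of such a gate, not a recommendation of any particular tool: it parses the Python files passed on the command line and flags constructs that warrant a closer look during review, such as exec()/eval() calls and subprocess invocations with shell=True. In practice most teams would rely on established scanners such as Bandit or Semgrep; this sketch only shows the shape of the control.

```python
import ast
import sys

RISKY_CALLS = {"exec", "eval"}

def audit_file(path: str) -> list:
    """Return review-worthy findings for one Python source file."""
    findings = []
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Direct calls to exec()/eval() deserve explicit reviewer sign-off.
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
        # Calls passing shell=True are a common command-injection risk.
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                findings.append(f"{path}:{node.lineno}: call with shell=True")
    return findings

if __name__ == "__main__":
    all_findings = [f for p in sys.argv[1:] for f in audit_file(p)]
    for finding in all_findings:
        print(finding)
    sys.exit(1 if all_findings else 0)  # a non-zero exit fails the CI job
```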

Michiel Prins is co-founder at HackerOne.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.


