
Is Generative AI a Security Threat?


Interest in generative artificial intelligence (AI) has surged alongside broader concern about artificial intelligence, as evidenced by an open letter urging a halt to AI research. But how real is the threat of AI? And what threat, if any, does generative AI pose, particularly in terms of cybersecurity?

The general AI threat – understanding AI to apply it appropriately

AI is already driving change across many industries, and its growing sophistication suggests the potential for major disruption, a prospect that stokes workers' fears of being replaced. We're already seeing this play out in content creation with generative AI, for example.

AI is here, and many of its use cases are still being discovered. As with any new technology, the industry needs to understand it better in order to find ways to use it appropriately.

Fear of replacement is nothing new. We saw such concerns manifest during the advent of assembly lines and the introduction of robots in manufacturing. To be fair, however, there is a fundamental difference between AI and previous technological innovations: its intrinsic ability to adapt. This introduces an element of unpredictability, which makes many uncomfortable.

As generative AI grows more sophisticated, it will become increasingly difficult to separate the human from the AI. Current iterations of generative AI have already demonstrated the ability to pass the Turing test, an assessment of an AI's ability to deceive a human into believing it is human.

What do you do when you can’t differentiate the human from the artificial? How can you trust identities, data or correspondence? This will necessitate a zero trust mindset, whereby all users must be authenticated, authorized, and continuously validated.

How AI will evolve — and at what pace — remains to be seen, but there are some current and potential cybersecurity implications to consider in the meantime.

Scaling cyberattacks

A few years ago, we were introduced to AI-generated art, which had many artists wringing their hands. Some, however, believed that AI could help artists create more by carrying out repetitive tasks. For example, an illustrator might use AI to repeat a pattern they created in order to expedite filling out the rest of an illustration. This same principle can be applied by a bad actor scaling cyberattacks.

Most hacking is done manually, which means large-scale cyberattacks require scores of people. Threat actors can use AI to reduce the monotonous, time-consuming elements of hacking, such as gathering data about a target. Nation-state actors, among the largest cybersecurity threats, are more likely to possess the resources to invest in sophisticated AI in order to scale cyber incursions. This would enable threat actors to attack more targets, potentially increasing their chances of finding and exploiting vulnerabilities.

Bad actors and generative AI

Users can ask generative AI to create malicious code or phishing scams, but developers claim their models won't respond to malicious queries. Still, bad actors may find indirect ways of coaxing such code out of generative AI. Generative AI developers must therefore continuously revisit those safeguards to ensure no new vulnerabilities are being exploited; such is the dynamic nature of AI that ongoing vigilance is required.
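To make that cat-and-mouse dynamic concrete, here is a minimal, hypothetical Python sketch of the kind of policy layer that might sit in front of a generative model. The BLOCKED_PATTERNS list and the classify_intent() and model_generate() helpers are illustrative placeholders, not any vendor's actual safeguard; a real filter would rely on far more sophisticated classification.

    # Hypothetical sketch of a prompt-screening guardrail; not any vendor's
    # actual safeguard. BLOCKED_PATTERNS, classify_intent() and
    # model_generate() are illustrative placeholders.

    BLOCKED_PATTERNS = [
        "write ransomware",
        "phishing email",
        "bypass authentication",
    ]

    def classify_intent(prompt: str) -> str:
        # Crude stand-in for an intent classifier: flag prompts matching
        # known-malicious patterns, otherwise treat them as benign.
        lowered = prompt.lower()
        if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
            return "malicious"
        return "benign"

    def model_generate(prompt: str) -> str:
        # Placeholder for the underlying generative model.
        return f"[model output for: {prompt!r}]"

    def guarded_generate(prompt: str) -> str:
        # Refuse flagged prompts; forward everything else to the model.
        if classify_intent(prompt) == "malicious":
            return "Request refused: this prompt appears to violate usage policy."
        return model_generate(prompt)

    print(guarded_generate("Draft a phishing email targeting our finance team"))

The weakness of any such filter is exactly the indirection described above: a request rephrased so that it matches no blocked pattern slips straight through, which is why these safeguards must be revisited continuously.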

Threat actors may also use generative AI to exploit human error, which is a major factor in security vulnerabilities. These bad actors might use AI for social engineering, a broad range of malicious activities that leverage psychological manipulation to trick people into compromising security. Generative AI's considerable natural language processing capabilities could be very effective in streamlining such social engineering attempts.

AI is a tool: defending against generative AI

While many are quick to focus on the potential risks generative AI poses, it's just as important to acknowledge the human element inextricably tied to it: a cyber defender can use this tool as a defense mechanism, just as a bad actor can use it to launch an attack.

One of the main takeaways from Verizon's Data Breach Investigations Report (DBIR) is the significant role the human element plays in cybersecurity breaches, whether through the use of stolen credentials, phishing or basic human error. People are susceptible to social engineering tactics, which generative AI, directed by threat actors, can implement on a mass scale. This ability to scale sophisticated digital fraud increasingly exposes citizens, consumers and businesses alike. The threat is only compounded by evolving workplace arrangements, which complicate the management of login credentials as workers alternate between work and home, and between professional and personal devices.

The specter of a widespread threat strengthens the case for zero trust, which takes a "never trust, always verify" approach to cybersecurity, a model that acknowledges that security threats can come from anywhere, including from within an organization. A zero trust approach not only requires strict authentication of users; it also applies the same degree of scrutiny to applications and infrastructure, including the supply chain, cloud environments, switches and routers.
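As a rough illustration only, the following Python sketch shows what "never trust, always verify" can look like in code: every request is checked for a valid identity token, a compliant device and least-privilege authorization before it touches a resource. The Request fields and the is_token_valid(), device_is_compliant() and is_authorized() helpers are hypothetical stand-ins for whatever identity provider, device-posture service and policy engine an organization actually runs.

    # Minimal, hypothetical sketch of a zero trust policy check; the helper
    # functions are placeholders for a real identity provider, device-posture
    # service and policy engine.

    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        token: str
        device_id: str
        resource: str

    def is_token_valid(token: str) -> bool:
        # Placeholder: verify the token against the identity provider.
        return token.startswith("valid-")

    def device_is_compliant(device_id: str) -> bool:
        # Placeholder: check device posture (patched, managed, not jailbroken).
        return device_id in {"laptop-042", "phone-017"}

    def is_authorized(user: str, resource: str) -> bool:
        # Placeholder: least-privilege policy lookup.
        permissions = {"alice": {"payroll-db"}, "bob": {"wiki"}}
        return resource in permissions.get(user, set())

    def evaluate(request: Request) -> bool:
        # Every request is verified on its own merits -- identity, device and
        # authorization -- regardless of where on the network it originates.
        return (
            is_token_valid(request.token)
            and device_is_compliant(request.device_id)
            and is_authorized(request.user, request.resource)
        )

    print(evaluate(Request("alice", "valid-abc123", "laptop-042", "payroll-db")))    # True
    print(evaluate(Request("alice", "valid-abc123", "unknown-device", "payroll-db")))  # False

The point of the sketch is simply that the network location of a request confers no trust by itself; every check must pass, every time.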

While building zero trust architectures and enforcement technology requires a herculean effort, AI could greatly simplify the process. In other words, the technology that has the potential to create an expansive threat can also streamline the implementation of sweeping security protocols needed to keep such attacks at bay.
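For instance, and purely as a hedged sketch rather than a description of any Verizon product, a defender could point a standard anomaly-detection model at authentication logs so that unusual sign-in behavior is surfaced automatically. The features and data below are invented for illustration; the IsolationForest class comes from the open-source scikit-learn library.

    # Illustrative only: flag unusual login activity with scikit-learn's
    # IsolationForest. The feature set and data are invented for the example.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [hour of day, failed attempts in the last hour, km from last login]
    normal_logins = np.array([
        [9, 0, 1], [10, 1, 0], [14, 0, 2], [17, 0, 5], [11, 0, 1],
        [13, 1, 3], [16, 0, 0], [9, 0, 4], [10, 0, 2], [15, 1, 1],
    ])

    model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

    # A 3 a.m. login with many failed attempts from far away should stand out.
    suspicious = np.array([[3, 12, 8000]])
    print(model.predict(suspicious))  # -1 marks an anomaly worth reviewing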

AI is out of the box

The reality is there’s no putting AI back in the box. AI is a tool and, just like any tool, it can be used productively or destructively. We must use it to our advantage while anticipating how bad actors might harness the technology.



