
4 Ways AI Is Rocking This CISO's World – InformationWeek


Let’s recap some of the buzz surrounding the trajectory of artificial intelligence this year.

An AI-generated photo of Pope Francis in a huge white puffer coat rapidly went viral and fooled millions into thinking it was real. 

AI-powered chatbots like ChatGPT, Google’s Bard, and Microsoft’s Bing seized the world’s attention with improved capabilities to participate in human-like conversations, create written content, and execute even more complex tasks, such as writing software code.

A group of tech industry VIPs including Elon Musk and Apple co-founder Steve Wozniak called for a six-month pause in AI development to consider the negative implications of a technology that could “pose profound risks to society and humanity”.

Generative AI caused a surge of investor interest, making AI stocks the hottest sector on Wall Street.

There has been nothing artificial about the heavy attention artificial intelligence has received in 2023, a year during which AI has taken the world by storm. Yet seldom has an emerging technology been so controversial, inspiring a range of emotions from excitement about AI’s innovation potential to fears of machines stoking economic disruption and social chaos.

As a 20-year-plus tech industry veteran and chief information security officer of a cybersecurity company, I’m often asked how I feel about AI’s impact on my work.

The answer: It’s complicated. 

On one hand, AI is augmenting cybercrime operations with potent new tools to accelerate the weaponization and exploitation of new exposures, streamline attacks at scale, and lend credibility to social engineering attacks — often in scary ways, particularly through deepfakes used to execute and “legitimize” targeted attacks.

On the other hand, AI is proving an effective tool for significantly scaling the security team’s productivity and capabilities. Not only is AI scaling the team’s ability to orchestrate operations through code, regardless of skill level, but it is also optimizing output in areas that are rarely in the team’s wheelhouse (e.g., communications and documentation). Even the platforms in our stack are working better together, becoming easier to manage, and starting to do more on our behalf than ever before. AI is the primary accelerant.

Practitioners and hackers alike are benefiting from AI, and GenAI in particular, in what has fast become an arms race between defenders and attackers, each using the technology’s capabilities to enhance their tactics.


As a tenured CISO and technology leader, here are four realizations I’ve had about AI in 2023:

1. AI has multiplied and enhanced malicious actors’ attempts to target the attack surface.

Take social engineering — phishing scams and the like that use human psychology to trick victims into sharing sensitive information. Many social engineering attacks used to be something of a meme. For years, phishing awareness training could generally be summarized as: emails with misspellings, grammatical errors, and requests that just don’t add up are suspicious and likely malicious.

But AI is helping hackers up their game. For example, they’re turning to generative AI to create convincing communications based on the many pieces of content that companies and their partners post online. This includes harvesting images and recorded voice and video content of company spokespersons and using them for fraudulent activities such as identity theft and deepfake-based attacks. That like-for-like voicemail from the “CFO” requesting a money transfer to a new account will bring such attacks, and many others, back to the forefront.

Now, and even more commonly, when cybercriminals send an email purporting to be from someone the recipient trusts, they can feed data from that person’s public communications — say, a speech they gave or, yes, an article they wrote — into an AI tool and replicate how the supposed sender genuinely sounds.

Instantly, the traditional measures being relied on today — like security awareness training that educates employees on how to spot suspicious activity — have become insufficient to protect against these new threats. Which leads to point #2…

2. AI is to be embraced, not feared.

Assessing AI as a powerful new tool, and weighing today’s risks against the potential future risks to humanity as the technology iterates ever forward, our organization decided early on to acknowledge that generative AI is here to stay. There is more to be gained from embracing and respecting powerful tools while managing the risks through adjusted awareness programs that communicate how to use GenAI safely, and to best advantage, in a company environment.

Instead of trying to close Pandora’s Box, we enthusiastically peered inside and asked ourselves how we could safely consume these generative AI capabilities at scale. The result? Engineering output from the team has exploded, with quality security scripts being generated, reviewed, and optimized in minutes to hours versus hours to days. It has become much easier for us to orchestrate certain security and compliance operations as a result, and the team is learning and growing every day.
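To make that kind of workflow concrete, here is a minimal sketch of a generate-then-review loop, assuming the OpenAI Python SDK. The model name, prompts, and helper functions are illustrative placeholders rather than our actual tooling, and a human engineer still reviews everything before it runs.

```python
# Hypothetical sketch of a GenAI-assisted scripting workflow: draft a security
# script from a plain-language request, then ask the model to review the draft
# before a human engineer does the final check.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model choice, not a recommendation


def draft_security_script(task: str) -> str:
    """Ask the model for a first-pass script; a human always reviews the result."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "You are a security engineer. Write safe, well-commented scripts."},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content


def review_script(script: str) -> str:
    """Second pass: have the model flag risky patterns before human review."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Review the following script for security and correctness issues."},
            {"role": "user", "content": script},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = draft_security_script(
        "Write a Python script that lists IAM users with access keys older than 90 days."
    )
    print(review_script(draft))
```

The point of the two-step loop is simply to get a reviewed draft in minutes; the final sign-off stays with a person.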


Many people worry about the dark side of AI, but we’re truly excited about the technology and what more it can do for us in the future.

3. AI has some surprising benefits.

Our use of GenAI goes well beyond technical considerations in support of security and compliance operations. Consider, for example, how we’re using generative AI to produce policy documents, newsletters, and other content.

Let’s face it — security practitioners are rarely the best communicators. Security teams commonly work behind the scenes and are most comfortable interacting with other technologists. But with generative AI, my team can deliver better content across the organization and reclaim dozens of hours every month.

We can typically rely on AI to generate about 80% of a given communication, with team members manually tweaking the remaining 20%. Everyone can communicate now, and communicate well.

Emails and newsletters are now rapidly written and have a greater impact. Effective slide decks are being pulled together in moments. Risk and threat assessments are faster than ever with a record level of data informing each assessment. The machines are managing the business of security and a growing percentage of continuous operations. Our team is making greater progress on the work “that matters” every day. 

At a time when the cybersecurity talent gap worsens every year, we’ve seen that AI can help free up our people to spend more time on their core work.

4. We’re in an AI arms race.

I touched on this earlier, but it bears repeating: As the bad guys increasingly rely on AI to supercharge their malicious activities, security teams are wise to embrace the technology just as aggressively.

When it comes to cybersecurity, the reality is that relying on the old, largely if not entirely manual way of conducting security operations is no longer practical. Organizations must now defend against more orchestrated and accelerated attempts on the same, ever-expanding attack vectors that bad actors were successfully exploiting before AI, plus new threats created by the technology, such as voice and image cloning, deepfakes, and GenAI-powered hacking platforms and tools (FraudGPT, to name one of many already established).


Not only that, but AI-powered hacking tools can learn from each attack and from defenders’ responses, making them even more difficult to detect and defend against in the future. If human-driven attacks flew under the radar before, competing against AI-driven attacks without tools that leverage the same technology will be extremely challenging.

On the other side of the arms race, and to pull on the earlier example, new AI-driven solutions have been introduced that learn from payment-related transactions to help automatically defend companies against finance-oriented attacks. Using this type of modern AI-based solution massively uplifts a company’s ability to defend against today’s most sophisticated social engineering attacks targeting finance, and it is an example of defenders elevating their capabilities to meet or exceed those of the attackers.
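As an illustration of the general idea (not any specific vendor’s product), here is a minimal sketch of learning normal payment behavior and flagging outliers, using scikit-learn’s IsolationForest. The features, contamination rate, and toy data are assumptions made purely for the example.

```python
# Minimal sketch of learning from payment transactions to flag suspicious transfers.
# Illustrates the general anomaly-detection idea behind such defenses, not any
# specific product; features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: [amount_usd, hour_of_day, is_new_payee, days_since_last_payment]
historical_payments = np.array([
    [1200.0, 10, 0, 7],
    [980.0,  11, 0, 7],
    [1500.0, 14, 0, 14],
    [1100.0,  9, 0, 7],
    [1300.0, 15, 0, 7],
])

# Fit a model of "normal" payment behavior from historical transactions.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(historical_payments)

# A large transfer to a brand-new payee at an unusual hour: the "CFO voicemail" scenario.
suspicious = np.array([[250000.0, 2, 1, 0]])
score = model.decision_function(suspicious)[0]  # lower = more anomalous
flag = model.predict(suspicious)[0]             # -1 = anomaly, 1 = normal

if flag == -1:
    print(f"Hold payment for manual review (anomaly score {score:.3f})")
```

Real products layer far richer features, feedback loops, and context on top, but the underlying pattern is the same: learn what normal looks like, then pause anything that does not fit.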

We are fundamentally participating in an arms race between the bad guys, who are using AI to creatively penetrate an ever-expanding attack surface at scale, and the good guys, who can leverage machine learning algorithms, natural language processing, and other AI-based tools in real time, alongside contextual insights, to stay ahead of the attackers and more quickly and accurately identify and respond to threats — before they become a massive business event.

AI is already having an impact on the world of the CISO; these are only four of the realizations I have personally had on the topic. There is much more to come, both in the form of attack and defense opportunities. I truly believe that the longevity and efficacy of our programs and teams will require that we embrace AI while actively managing its evolving risks. AI is also providing a career highlight: the challenge and thrill of taking part in an evolving battle, from a unique position to help others face its challenges and opportunities head on.




