
10 ways ChatGPT and generative AI can strengthen zero trust




Realizing ChatGPT’s potential to improve cybersecurity and zero trust needs to start with the goal of learning from every breach attempt — and becoming stronger from it. Generative AI can deliver the greatest value in the shortest time when we treat it as a continuous learning engine that finds correlations, relationships and causal factors in threat data — and that never forgets. ChatGPT and generative AI can be used to create “muscle memory,” or an immediate reflex, in cybersecurity teams to stop breaches.

What cybersecurity CEOs are hearing from their customers 

CEOs of cybersecurity providers interviewed at RSAC 2023 last week told VentureBeat their enterprise customers recognize ChatGPT’s value for improving cybersecurity, but also express concern about the risk of confidential data and intellectual property (IP) being accidentally compromised. The Cloud Security Alliance released its first-ever ChatGPT Guidance Paper during the conference calling on the industry to improve AI roadmap collaboration.

Connie Stack, CEO of NextDLP, told VentureBeat her company had surveyed ChatGPT usage among Next’s customers and found that 97% of larger organizations have seen employees use the tool. One in 10 endpoints across Next’s Reveal platform has accessed ChatGPT.

In an interview at RSAC 2023, Stack told VentureBeat that “this level of ChatGPT usage is a point of concern for some of our customers as they evaluate this new vector for data loss. Some Next customers have outright blocked its usage, including a healthcare company that could not tolerate any level of risk related to disclosing IP and trade secrets to a public-facing generative large language model. Others are open-minded about the potential benefits, and are proceeding cautiously with its use to support things like enhanced data loss ‘threat hunting’ and supporting security-related content creation.”


Building new cybersecurity muscle memory 

The potential for generative AI to increase the learning efficacy of threat analysts, experienced threat hunters and security operations center (SOC) staff is a compelling motivation for cybersecurity providers to adopt tools like ChatGPT. Ongoing learning needs to be so ingrained into enterprises’ threat defenses that they respond by reflex, relying on “muscle memory” to adapt to and kill a breach attempt before it starts.

In a recent interview, Michael Sentonas, president of CrowdStrike, told VentureBeat: “The core concept of what CrowdStrike is there to do is to effectively visualize any attack that the adversary uses regardless of what that technique is. The concept of the crowd in CrowdStrike is to ensure that if someone attacks me, that technique is forever part of our research. So then if they try to use the same attack on you, we’ve seen it, we’ve done it.”

He continued: “ChatGPT and those sorts of LLMs allow you to go, ‘Hey, show me what adversaries are attacking healthcare. Show me what adversaries are attacking hospitals. Show me the techniques that they’re using. Have those techniques ever been used in my network? Give me the list of machines where those techniques have been used.’ And then you can keep going through that process. You don’t have to be an expert, but using that technology could lower the barrier of entry to become a decent threat hunter, a positive.”
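Here is a minimal sketch of the conversational hunt loop Sentonas describes, assuming the OpenAI Python client (v1 API); the seen_in_network() stand-in, the model name and the technique IDs are all illustrative, not CrowdStrike’s implementation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for an EDR query; a real deployment would call
# its detection platform's API here.
OBSERVED_TECHNIQUES = {"T1566", "T1078"}  # e.g., phishing, valid accounts

def seen_in_network(technique_id: str) -> bool:
    return technique_id in OBSERVED_TECHNIQUES

question = (
    "Which MITRE ATT&CK techniques are commonly used against hospitals? "
    "Answer with a comma-separated list of technique IDs only."
)
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": question}],
)
ids = [t.strip() for t in reply.choices[0].message.content.split(",")]

# Cross-check the model's suggestions against local telemetry.
for tid in ids:
    status = "SEEN locally" if seen_in_network(tid) else "not observed"
    print(f"{tid}: {status}")
```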


The most discussed topic at RSAC 2023 was the wave of newly announced ChatGPT products and integrations.

Of the 20 vendors that announced new products and integrations, the most noteworthy are Airgap Networks, Google Security AI Workbench, Microsoft Security Copilot (launched before the show), Recorded Future, Security Scorecard and SentinelOne.

The most reliable products on the show floor had been trained on large-scale datasets well before the event; their accuracy showed why it’s important to train a model on the right data.

Airgap’s Zero Trust Firewall (ZTFW) with ThreatGPT is noteworthy. It’s been engineered to complement existing perimeter firewall infrastructures by adding a dedicated layer of microsegmentation and access in the network core. “With highly accurate asset discovery, agentless microsegmentation and secure access, Airgap offers a wealth of intelligence to combat evolving threats,” Ritesh Agrawal, CEO of Airgap, said. “What customers need now is an easy way to harness that power without any programming. And that’s the beauty of ThreatGPT — the sheer data-mining intelligence of AI coupled with an easy, natural language interface. It’s a game-changer for security teams.”

Airgap is considered to have one of the most innovative engineering and product development teams among the top 20 zero-trust startups. Airgap’s ThreatGPT uses a combination of graph databases and GPT-3 models to provide previously unavailable cybersecurity insights. The company configured the GPT-3 models to analyze natural language queries and identify potential security threats, while graph databases are integrated to provide contextual intelligence on traffic relationships between endpoints.
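Airgap has not published ThreatGPT’s internals, but the graph-plus-GPT pattern it describes can be sketched. The example below assumes networkx as a stand-in for the graph database and shows only how traffic relationships become context for a natural-language query:

```python
import networkx as nx

# Directed graph of observed endpoint-to-endpoint traffic.
g = nx.DiGraph()
g.add_edge("laptop-17", "db-prod-01", port=5432)
g.add_edge("laptop-17", "files-02", port=445)
g.add_edge("kiosk-03", "db-prod-01", port=5432)

def context_for(asset: str) -> str:
    """Summarize an asset's traffic relationships for an LLM prompt."""
    lines = [
        f"{src} -> {dst} (port {attrs['port']})"
        for src, dst, attrs in g.edges(data=True)
        if asset in (src, dst)
    ]
    return "\n".join(lines)

# This context would be prepended to the analyst's natural-language
# question before it is sent to the GPT model.
print(context_for("db-prod-01"))
```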

How ChatGPT will strengthen zero trust 

One way generative AI can strengthen zero trust is by identifying and strengthening a business’s most vulnerable threat surfaces. John Kindervag, the creator of zero trust, advised in an interview with VentureBeat earlier this year that “you start with a protected surface,” and described what he called “the zero-trust learning curve. You don’t start at technology, and that’s the misunderstanding.”

Here are potential ways generative AI can strengthen core areas of zero trust as it is defined in the NIST 800-207 standard:

Unifying and learning from threat analysis and incident response at an enterprise level

CISOs tell VentureBeat that they want to consolidate their tech stacks because they run too many conflicting systems for threat analysis, incident response and alerting, leaving SOC analysts unsure which alerts are most urgent. Generative AI and ChatGPT are already proving to be powerful tools for consolidating applications, and they will finally give CISOs a single view of threat analysis and incident response across their infrastructure.
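As a rough illustration of the consolidation step, the sketch below normalizes alerts from three hypothetical consoles into one ranked view; the tool names, fields and severities are invented, and the resulting list is what a generative model would be asked to triage:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int  # 1 (low) to 5 (critical)
    summary: str

# Alerts as they might arrive from three separate consoles.
alerts = [
    Alert("edr", 4, "Credential dumping attempt on host fin-ws-22"),
    Alert("email-gw", 2, "Suspicious attachment quarantined"),
    Alert("iam", 5, "Dormant admin account logged in from a new country"),
]

# One consolidated, ranked view: the single pane a CISO is after.
for a in sorted(alerts, key=lambda a: a.severity, reverse=True):
    print(f"[{a.severity}] {a.source}: {a.summary}")

# The same normalized list can be serialized into a prompt such as
# "Given these alerts, which should the SOC handle first, and why?"
```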

Identifying identity-driven internal and external breach attempts faster with continuous monitoring

At the center of zero trust are identities. Generative AI has the potential to quickly identify whether a given identity’s activity is consistent with its previous history.


CISOs tell VentureBeat that the most challenging breach to stop is the one that starts inside, with legitimate identities and credentials.

One of the core strengths of LLMs is the ability to spot anomalies in data based on small sample sizes. That’s perfect for securing IAM, PAM and Active Directories. LLMs are proving effective in analyzing user access logs and detecting suspicious activity. 
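A minimal sketch of that idea, using an invented access-log format: flag the first time an identity touches a resource outside its established baseline:

```python
from collections import defaultdict

history = defaultdict(set)  # identity -> resources previously accessed

access_log = [
    ("alice", "crm"), ("alice", "crm"), ("alice", "wiki"),
    ("bob", "payroll"), ("alice", "payroll"),  # new resource for alice
]

for user, resource in access_log:
    # Only flag once a baseline exists for this identity.
    if history[user] and resource not in history[user]:
        print(f"ANOMALY: {user} accessed {resource} for the first time")
    history[user].add(resource)
```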

Overcoming microsegmentation’s most challenging roadblocks

The many challenges of getting microsegmentation right can make large-scale microsegmentation projects drag on for months or even years. While network microsegmentation aims to segregate and isolate defined segments in an enterprise network, it’s rarely a one-and-done task.

Generative AI can help by identifying how best to introduce microsegmentation without interrupting access to systems and resources in the process. Best of all, it can potentially prevent the thousands of trouble tickets a botched microsegmentation project generates in IT service management systems.
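One way to sketch that assistance: derive candidate allow-lists from observed traffic so proposed segments never block flows that exist today. The flow data below is illustrative:

```python
from collections import defaultdict

# Observed flows between endpoints (source, destination).
flows = [
    ("web-01", "app-01"), ("web-02", "app-01"),
    ("app-01", "db-01"), ("hr-ws-1", "payroll-01"),
]

# Every observed flow must remain allowed; everything else is denied,
# so the proposed policy cannot break access that exists today.
talks_to = defaultdict(set)
for src, dst in flows:
    talks_to[src].add(dst)
    talks_to[dst].add(src)

for host in sorted(talks_to):
    print(f"{host}: allow {sorted(talks_to[host])}, deny all else")
```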

Solving the security challenge of managing and protecting endpoints and identities

Attackers search for gaps between endpoint security and identity management. Generative AI and ChatGPT can help solve this problem by giving threat hunters the intelligence they need to know which endpoints are at the most significant risk of a breach.

In keeping with the need to improve muscle memory, especially when it comes to endpoints, generative AI could be used to constantly learn how, where and by which methods attackers are trying to penetrate an endpoint and the identities they’re attempting to use.  
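A simple sketch of that continuous learning loop, with made-up telemetry: count intrusion attempts per endpoint so hunters can prioritize the machines under heaviest fire:

```python
from collections import Counter

# Made-up intrusion-attempt telemetry: (endpoint, technique).
attempts = [
    ("fin-ws-22", "credential_stuffing"),
    ("fin-ws-22", "phishing_payload"),
    ("dev-01", "phishing_payload"),
    ("fin-ws-22", "token_theft"),
]

by_endpoint = Counter(host for host, _ in attempts)
for host, count in by_endpoint.most_common():
    print(f"{host}: {count} attempts; prioritize for hardening")
```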

Taking least privilege access to an entirely new level

Applying generative AI to the challenge of limiting access to resources by identity, system and length of time is one of the strongest zero-trust use cases. Asking ChatGPT for audit data by resource and a permissions profile will save system administrators and SOC teams thousands of hours a year.

A core part of least privilege access is deleting obsolete accounts. Ivanti’s State of Security Preparedness 2023 Report found that 45% of enterprises suspect former employees and contractors still have active access to company systems and files.

“Large organizations often fail to account for the huge ecosystem of apps, platforms and third-party services that grant access well past an employee’s termination,” said Dr. Srinivas Mukkamala, chief product officer at Ivanti. “We call these zombie credentials, and a shockingly large number of security professionals — and even leadership-level executives — still have access to former employers’ systems and data.”
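A minimal sketch of how a team might hunt for those zombie credentials, assuming exports from an HR system and an IAM directory (both hypothetical):

```python
# Hypothetical exports: HR offboarding list and active IAM accounts.
terminated = {"jdoe", "asmith"}
active_accounts = {"jdoe", "mkhan", "asmith", "svc-backup"}

# Any terminated identity that is still active is a zombie credential.
zombies = active_accounts & terminated
for account in sorted(zombies):
    print(f"REVOKE: {account} left the company but still has access")
```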

Fine-tuning behavioral analytics, risk scoring, and real-time adjustment of security personas and roles

Generative AI and ChatGPT will enable SOC analysts and teams to adapt much faster to anomalies discovered through behavioral analysis and risk scoring, then immediately shut down any lateral movement a potential attacker is attempting. Defining privileged access by risk score alone will become outdated; generative AI will contextualize each request and flag potential threats that a score alone would miss.
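The sketch below illustrates the shift from a raw score to a contextualized decision; the signals, weights and thresholds are invented, not any vendor’s algorithm:

```python
def contextual_risk(base: float, new_device: bool, off_hours: bool,
                    lateral_movement: bool) -> float:
    """Adjust a base risk score with contextual signals (invented weights)."""
    score = base
    score += 0.2 if new_device else 0.0
    score += 0.1 if off_hours else 0.0
    score += 0.4 if lateral_movement else 0.0  # the strongest signal
    return min(score, 1.0)

risk = contextual_risk(base=0.3, new_device=True, off_hours=True,
                       lateral_movement=True)
action = "block and alert" if risk >= 0.8 else "step-up authentication"
print(f"risk={risk:.2f} -> {action}")
```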

Improved real-time analytics, reporting and visibility to help stop online fraud

Most successful zero-trust initiatives are built on an integrated data foundation that aggregates real-time analytics, reporting and visibility. Using that data to train generative AI models will deliver insights that SOC teams, threat hunters and risk analysts have never seen before.


The results will be immediately measurable in stopping ecommerce fraud, where attackers prey on ecommerce systems that can’t keep up with attacks. Threat analysts using ChatGPT with access to historical data will know immediately whether a flagged transaction is legitimate.
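As a toy example of that historical check, the sketch below compares a flagged order amount against a buyer’s purchase history; real systems would weigh many more signals:

```python
from statistics import mean, stdev

past_amounts = [42.0, 55.5, 38.0, 61.0, 47.5]  # buyer's prior orders
flagged = 540.0  # the transaction the fraud system flagged

# A flagged amount many standard deviations from the buyer's history
# is worth an analyst's time; one close to it probably is not.
mu, sigma = mean(past_amounts), stdev(past_amounts)
z = (flagged - mu) / sigma
verdict = "escalate to analyst" if abs(z) > 3 else "likely legitimate"
print(f"z-score={z:.1f} -> {verdict}")
```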

Improving context-aware access, backed by granular access controls

Another core component of zero trust is the granularity of access controls by identity, asset and endpoint. Look for generative AI to create entirely new workflows that can more accurately detect the combination of network traffic patterns, user behavior and contextual intelligence from integrated data to suggest policy changes by identity, role or persona. Threat hunters, SOC analysts and fraud analysts will know in seconds about every compromised privileged access credential and be able to restrict all access with a simple ChatGPT command.
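A sketch of that enforcement step, with hypothetical placeholders standing in for a real IAM or PAM provider’s API:

```python
def restrict_access(identity: str, reason: str) -> None:
    """Quarantine an identity; each print stands in for a real IAM/PAM API call."""
    print(f"[{identity}] revoking active sessions ({reason})")
    print(f"[{identity}] disabling credentials and API tokens")
    print(f"[{identity}] moving identity to a quarantine persona")

restrict_access("svc-ci-runner", reason="compromised privileged credential")
```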

Hardening configuration and compliance management to align with zero trust

The LLMs on which ChatGPT is based are already proving effective at improving anomaly detection and streamlining fraud detection. What’s next in this area is capitalizing on ChatGPT’s models to automate access policy and user group creation, and to improve how compliance is managed using real-time data generated by the models. ChatGPT will make managing configuration and governance, risk and compliance reporting possible in a fraction of the time it takes today.
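A minimal sketch of the compliance side: compare live configuration against a zero-trust baseline and report drift. The settings and baseline values are illustrative:

```python
# Zero-trust configuration baseline vs. live settings (illustrative).
baseline = {"mfa_required": True, "default_deny": True, "tls_min": "1.2"}
live = {"mfa_required": True, "default_deny": False, "tls_min": "1.2"}

drift = {key: (baseline[key], live.get(key))
         for key in baseline if live.get(key) != baseline[key]}
for setting, (expected, actual) in drift.items():
    print(f"NON-COMPLIANT: {setting} is {actual}, baseline requires {expected}")
```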

Limiting the blast radius of the attacker’s favorite weapon: the phishing attack

Phishing is the threat surface attackers thrive on, luring victims with social engineering schemes that allude to large cash payouts. The LLMs behind ChatGPT are already proving very effective at natural language processing (NLP), which makes them well suited to detecting unusual text patterns in emails — patterns that are often a sign of business email compromise (BEC) fraud. ChatGPT can also identify emails it generated itself and route them to quarantine. These capabilities are being used to create the next generation of cyber-resilient platforms and detection systems.
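For illustration only, the sketch below scores an email for BEC-style cues with a keyword heuristic; a production system would use a trained model or an LLM, but the shape of the check is the same:

```python
# Keyword cues that often appear in BEC lures (illustrative list).
BEC_CUES = ["wire transfer", "urgent", "gift cards", "change of bank"]

def bec_score(body: str) -> float:
    """Fraction of known cues present in the message body."""
    lowered = body.lower()
    return sum(cue in lowered for cue in BEC_CUES) / len(BEC_CUES)

email = "URGENT: please process this wire transfer before noon."
if bec_score(email) >= 0.25:
    print("quarantine: message matches BEC text patterns")
```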

Focus on turning zero-trust weaknesses into strengths

ChatGPT and generative AI can take on the challenge of continually improving threat intelligence and knowledge by strengthening the muscle memory of an organization’s zero-trust security. It’s time to see these technologies as learning systems that can help organizations sharpen their automated — and human — skills at protecting against external and internal threats, by logging and inspecting all network traffic, limiting and controlling access, and verifying and securing network resources.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


