
SAFE Security claims to predict data breaches with new generative AI offering – CSO Online


AI-based cyber risk management SaaS vendor SAFE Security has announced the release of Cyber Risk Cloud of Clouds – a new offering it claims uses generative AI to help businesses predict and prevent cyber breaches. It does so by answering questions about a customer’s cybersecurity posture and generating likelihoods for different risk scenarios, such as the likelihood of the business suffering a ransomware attack in the next 12 months and the dollar impact of such an attack, the firm said. This enables organizations to make informed, prognostic security decisions to reduce risk, SAFE Security added.
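
SAFE has not published how these likelihoods are calculated, but cyber risk quantification platforms commonly use FAIR-style Monte Carlo simulation to turn telemetry and threat intelligence into a breach probability and a dollar figure. The sketch below illustrates that general technique only; the attack rate, control-failure probability, and loss distribution are invented placeholders, not SAFE’s parameters.

```python
"""Illustrative only: SAFE has not published its model. This sketches a
FAIR-style Monte Carlo estimate of breach likelihood and dollar impact,
with made-up parameters standing in for a customer's real telemetry."""
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                          # simulated 12-month periods

# Assumed inputs a risk platform might derive from telemetry and threat intel.
annual_attack_rate = 1.8             # expected ransomware attempts per year (assumption)
p_control_failure = 0.22             # chance defenses fail for a given attempt (assumption)
loss_median, loss_sigma = 2.4e6, 0.9  # lognormal loss-per-breach parameters (assumption)

attempts = rng.poisson(annual_attack_rate, N)
breaches = rng.binomial(attempts, p_control_failure)
losses = np.array([rng.lognormal(np.log(loss_median), loss_sigma, b).sum()
                   for b in breaches])

print(f"P(>=1 ransomware breach in 12 months): {(breaches > 0).mean():.1%}")
print(f"Expected annualized loss:              ${losses.mean():,.0f}")
print(f"95th-percentile annual loss:           ${np.percentile(losses, 95):,.0f}")
```

Reporting a tail figure such as the 95th-percentile loss alongside the mean is a common way to express dollar risk to non-technical stakeholders.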

The impact on cybersecurity of generative AI chat interfaces built on large language models (LLMs) is a significant area of discussion. Concerns range from the risks of sharing sensitive business information with self-learning algorithms to malicious actors using the technology to significantly enhance attacks. Some countries, US states, and enterprises have banned, or are considering banning, the use of generative AI tools such as ChatGPT on data security, protection, and privacy grounds.

However, generative AI chatbots can also enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.

SafeGPT provides “comprehensible overview” of cybersecurity posture

SAFE’s generative AI chat interface SafeGPT, powered by LLMs, provides stakeholders with a clear and comprehensible overview of an organization’s cybersecurity posture, the firm said in a press release. Through its dashboard and natural language processing capabilities, SafeGPT enables users to ask targeted questions of their cyber risk data, determine the most effective strategies for mitigating risk, and respond to inquiries from regulators and other key stakeholders, it added. According to SAFE, the types of questions the service can answer include:

  • How likely are you to be hit by a ransomware attack in the next 12 months?
  • What is your likelihood of being hit by the latest malware like “Snake”?
  • What is your dollar impact for that attack?
  • What prioritized actions can you proactively take to reduce the ransomware breach likelihood and reduce dollar risk?

Cyber Risk Cloud of Clouds brings together disparate cyber signals including those from CrowdStrike, AWS, Azure, Google Cloud Platform, and Rapid7 into a single view, the firm said. This provides organizations with visibility across their attack surface ecosystem, including technology, people, and third parties, it added.
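
The article does not describe how those signals are merged, but a typical approach is to normalize each tool’s findings into a shared schema before scoring them together. The following sketch assumes hypothetical field names and adapters; it is not SAFE’s implementation.

```python
"""Illustrative sketch, not SAFE's implementation: normalizing signals from
different tools (EDR, cloud, vulnerability scanners) into one common record
so they can be scored in a single view. All field names are hypothetical."""
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # e.g. "crowdstrike", "aws", "rapid7"
    asset: str       # host, account, or workload the finding applies to
    category: str    # "endpoint", "cloud_misconfig", "vulnerability", ...
    severity: float  # normalized to 0.0-1.0

def from_crowdstrike(raw: dict) -> Signal:
    # Hypothetical mapping of an EDR detection (0-100 severity) into the schema.
    return Signal("crowdstrike", raw["hostname"], "endpoint", raw["severity"] / 100)

def from_rapid7(raw: dict) -> Signal:
    # Hypothetical mapping of a vulnerability finding (CVSS 0-10) into the schema.
    return Signal("rapid7", raw["asset_id"], "vulnerability", raw["cvss"] / 10)

signals = [
    from_crowdstrike({"hostname": "web-01", "severity": 70}),
    from_rapid7({"asset_id": "web-01", "cvss": 9.8}),
]

# A "single view": the worst finding per asset across every source.
worst = {}
for s in signals:
    worst[s.asset] = max(worst.get(s.asset, 0.0), s.severity)
print(worst)  # {'web-01': 0.98}
```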

CSO asked SAFE Security for further information about the type of data SafeGPT uses to answer questions about a customer’s cybersecurity posture and risk incident likelihood, as well as how the company secures the data fed into SafeGPT and the answers it outputs.

Questions and answers do not leave SAFE’s datacenter or train models

SAFE uses customers’ own risk data augmented with external threat intelligence to generate a real-time, comprehensive cybersecurity posture, Saket Modi, CEO of SAFE, tells CSO. “SAFE has deployed the Azure OpenAI service in its own data center so that the customer data does not leave it. Azure has several security measures in place to ensure the security of the data and they do not use any customer data to train their models,” Modi adds.

For a question like “What is the likelihood of Snake malware” in a given environment, for example, SafeGPT queries the customer’s local data loaded into Azure OpenAI and provides the answer, says Modi. “It does not expose the question or the answer outside the SAFE datacenter. SAFE’s product development goes through extensive security testing throughout its development process.”
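
For readers wanting to picture the pattern Modi describes, the sketch below shows a generic retrieval-grounded call to an Azure OpenAI deployment using the official openai Python SDK. The endpoint, deployment name, and fetch_local_risk_context helper are assumptions for illustration, not SAFE’s code.

```python
"""Generic pattern, not SAFE's code: call a privately deployed Azure OpenAI
endpoint and ground the prompt in risk data held in the same environment.
The endpoint, deployment name, and lookup function are assumptions."""
import os
from openai import AzureOpenAI  # official OpenAI SDK with Azure support

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # private deployment endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def fetch_local_risk_context(question: str) -> str:
    """Hypothetical lookup against the customer's own risk data store."""
    return "Snake malware: 3 of 420 endpoints lack the relevant EDR policy."

question = "What is the likelihood of Snake malware in my environment?"
resp = client.chat.completions.create(
    model="gpt-4o",  # the Azure *deployment* name; an assumption here
    messages=[
        {"role": "system",
         "content": "Answer only from the provided risk data; say so if unknown."},
        {"role": "user",
         "content": f"Risk data:\n{fetch_local_risk_context(question)}\n\nQuestion: {question}"},
    ],
)
print(resp.choices[0].message.content)
```

In this pattern the prompt and the completion stay within the environment hosting the deployment, which is the property Modi highlights.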

LLM “hallucinations” a chief concern of generative AI

AI and machine learning have been used to predict security exploits and breaches for at least a decade. What’s new is the use of generative AI with a chat interface that lets SOC analysts quiz the backend LLM on the likelihood of an attack, Rik Turner, a senior principal analyst for cybersecurity at Omdia, tells CSO.


“The questions they ask will need to be honed to perfection for them to get the best, and ideally the most precise, answers. LLMs are notorious for making things up, or to use the term of art, ‘hallucinating,’ such that there is a need for anchoring (aka creating guardrails, or maybe laying down ground rules) to avoid such outcomes,” he says.
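
One simple form of the anchoring Turner describes is to check an answer against the data it was supposed to be grounded in and route mismatches to a human. The sketch below is a deliberately crude illustration of that idea, not a production guardrail; the figures are invented.

```python
"""Minimal guardrail sketch (not from the article): flag an LLM answer for
human review when it cites figures that do not appear in the grounding data,
one crude way to 'anchor' answers and surface possible hallucinations."""
import re

def extract_numbers(text: str) -> set[str]:
    # Pull out numeric tokens such as "12", "18", "2,400,000".
    return set(re.findall(r"\d[\d,]*(?:\.\d+)?", text))

def check_grounding(answer: str, context: str) -> bool:
    """True if every number in the answer also appears in the context."""
    return extract_numbers(answer) <= extract_numbers(context)

context = "Ransomware likelihood next 12 months: 18%. Estimated impact: $2,400,000."
good = "Your 12-month ransomware likelihood is 18%, with a $2,400,000 impact."
bad = "Your 12-month ransomware likelihood is 43%."

print(check_grounding(good, context))  # True  -> pass through to the analyst
print(check_grounding(bad, context))   # False -> route to human review
```

More robust guardrails would also constrain prompts and score answer confidence, but even a check this simple catches the obvious fabrication in the example.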

For Turner, a main concern with using generative AI as operational support for SOC analysts is what happens if the LLM hallucinates, even though it may well help Tier-1 analysts work on Tier-2 problems. “If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?”



