Novacoast Is Deploying OpenAI Chatbot Technology To Its 400-Person Security Services Team



Kyle Alspach


OpenAI’s GPT-3 technology, which also powers ChatGPT, has already saved Novacoast security pros ‘a ton of time’ in testing so far, an executive told CRN.



At security services provider Novacoast, the OpenAI chatbot technology that’s behind the popular ChatGPT is about to get a wide-scale tryout to see if it can save time for the company’s hundreds of cybersecurity professionals.

Based on a smaller-scale test that the firm already carried out, Novacoast is optimistic that it will, said company COO Eron Howard.

[Related: 5 Big Pros And Cons Of ChatGPT For Cybersecurity]

Even though the technology has shortcomings, the eight-week test of the chatbot at Novacoast was successful enough to move it into broader usage at the company, Howard said.

Issues with the OpenAI chatbot technology include the fact that it hasn’t been trained on data past 2021 and that it sometimes fabricates, or “hallucinates,” an answer when asked a question it doesn’t have data on.

And yet, “despite the fact that it can ‘hallucinate,’ and despite the fact that the dataset is old — which in security can be very bad — it saved people a ton of time,” Howard told CRN.

OpenAI, a startup backed by billions of dollars in funding from Microsoft, has made its GPT-3 natural language processing technology available for use through an API connection — enabling organizations to tap into the same AI that underlies ChatGPT for their own applications.
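The article does not show Novacoast’s integration code. Purely as an illustration of what that API connection involves, a minimal call to a GPT-3 completions endpoint through OpenAI’s Python client of that era might look like the sketch below; the model name, prompt and parameters are assumptions, not details from Novacoast.

```python
# Minimal sketch (assumptions only): calling a GPT-3 family model via the
# pre-1.0 OpenAI Python client. Not Novacoast's actual integration.
import os
import openai

# The client reads OPENAI_API_KEY from the environment; set it explicitly here
# for clarity.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3 era model exposed over the API
    prompt="Summarize the typical containment steps for a phishing incident.",
    max_tokens=256,
    temperature=0.2,            # lower temperature for more predictable output
)

print(response["choices"][0]["text"].strip())
```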


At Wichita, Kansas-based Novacoast, No. 258 on CRN’s 2022 Solution Provider 500 list, the tryout of GPT-3 with the more than 400 professionals in its services organization will start in the next week or so, Howard said.

The team includes 100 security operations center (SOC) analysts as well as threat hunters, penetration testers, developers and security engineers. “All of them will have it integrated into their chat, so we can see if they’re getting a boost in time savings,” Howard said.

Novacoast has tailored the GPT-3 technology by putting measures in place that help to limit the chatbot’s hallucinations, he noted.
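Howard did not describe what those measures are. One common mitigation, shown below strictly as a hypothetical sketch rather than Novacoast’s approach, is to constrain the prompt so the model answers only from reference material it is given and admits when it cannot.

```python
# Hypothetical illustration of one hallucination-limiting measure: force the
# model to answer only from supplied reference text, or reply "UNKNOWN".
# This is NOT Novacoast's documented technique.
import openai

GUARDRAIL = (
    "Answer using only the reference material below. "
    "If the answer is not in the reference material, reply exactly: UNKNOWN.\n\n"
)

def constrained_answer(reference_text: str, question: str) -> str:
    prompt = (
        f"{GUARDRAIL}Reference material:\n{reference_text}\n\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0,   # deterministic output discourages creative fabrication
    )
    return resp["choices"][0]["text"].strip()
```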

The security services provider will be looking to see if GPT-3 can accelerate activities such as writing the scripts and rules that are an essential part of security operations, including for detecting and responding to threats.

GPT-3 is also adept at summarizing the steps that are necessary for responding to a security issue, Howard said, since it can essentially query the corpus of knowledge that has been published by SOC analysts.

The technology can provide a SOC analyst with an average recommendation for the steps to take in a specific situation “without having to go to Google and read a bunch of blogs,” he said.
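The specific prompts Novacoast’s analysts use are not public. As a rough sketch of the two use cases described above, drafting detection content and recommending response steps, the prompts below are hypothetical examples of what an analyst might send; any output would still need human review.

```python
# Hypothetical SOC prompts (not Novacoast's), sent via the pre-1.0 OpenAI client.
import openai

def ask_gpt3(prompt: str) -> str:
    """Send a single prompt to a GPT-3 family model and return the text."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,
    )
    return resp["choices"][0]["text"].strip()

# 1. Draft a detection rule for an analyst to review and tune.
print(ask_gpt3(
    "Write a Sigma detection rule for repeated failed RDP logons "
    "followed by a successful logon from the same source IP."
))

# 2. Summarize recommended response steps for a specific alert.
print(ask_gpt3(
    "List the standard SOC response steps for a confirmed credential-"
    "phishing email reported by an employee."
))
```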

Accelerating Threat Research

A third area where OpenAI’s technology has massive potential to help automate the work of security professionals is on threat research — for instance, serving up information about a threat that is currently hitting certain types of IT infrastructure. “Today that information is so fragmented,” Howard said. “Threat hunters know where to go, but it takes hours to put together that story.”


This application of GPT-3 is one that still needs work, however. At present, the ability to assist real-time threat research is limited by the lack of current data — though that may get better with future versions of the technology. (In answer to a question from CRN, ChatGPT said that OpenAI’s forthcoming GPT-4 “will not have direct access to real-time data.” However, according to ChatGPT, “it may be possible to provide [GPT-4] with real-time data or information through the input that it receives during inference.”)

For threat research, “if you could trust that the language model wasn’t hallucinating and that it was current, then you could cut that time down exponentially,” Howard said.

Meanwhile, Novacoast has also partnered with Orca Security, one of the first cloud security vendors to integrate GPT-3 into its product. Orca recently announced that it has been able to “improve the detail and accuracy” of its remediation steps for customers by utilizing GPT-3.

‘Game-Changing’ For Cyber Defense

Novacoast is not alone in seeing the technology’s potential for aiding some of the work of cybersecurity professionals.

Robert Boyce, global lead for cyber resilience services at Accenture, recently told CRN that it’s clear ChatGPT can automate some of the activities involved in security incident analysis. While the malicious uses of ChatGPT for writing phishing emails and malware code have gotten much attention, the tool also “helps reduce the barrier to entry with getting into the defensive side,” he said.

Ultimately, it’s clear that OpenAI’s GPT-3 could have massive ramifications for the field of cybersecurity, which suffers from a global talent shortage that’s led to millions of unfilled jobs.


“I do believe if you saved every cybersecurity practitioner even 30 minutes a week, the net impact on our industry is huge,” Howard said. “I think that’s where it could be game-changing for our industry.”


Kyle Alspach

Kyle Alspach is a Senior Editor at CRN focused on cybersecurity. His coverage spans news, analysis and deep dives on the cybersecurity industry, with a focus on fast-growing segments such as cloud security, application security and identity security.  He can be reached at kalspach@thechannelcompany.com.



