
Four risks to consider before using ChatGPT for security operations


ChatGPT has made a name for itself; these days it's seemingly everywhere. Google and Microsoft have released their own versions of large language models (LLMs), and a multitude of other chatbots and complementary technologies are in active development. Given how much buzz generative AI has received from the media, organizations, and the public alike, it makes sense that IT security companies are also hoping to benefit from these technologies.

And while this emerging technology has the potential to make software development more convenient, it's just as likely to become a source of threats and headaches for security-minded organizations.

Here are four risks that security teams should weigh before adopting ChatGPT:

  • ChatGPT does not replace security experts.

The tool was trained on a massive amount of text and other data gathered from the internet, and it uses statistical models to generate its responses. It isn't a security expert, but it's good at reproducing what human security experts have published. ChatGPT can't reason for itself, and its answers are heavily shaped by how users phrase their prompts, which matters when the output drives remediation decisions. And while the code-generation features are tantalizing, ChatGPT does not code with the sophistication of a seasoned security expert.

  • ChatGPT isn’t very accurate.

Despite the fanfare over passing the bar exam and other academic tests, ChatGPT isn't all that smart. Its training data stops in 2021, although newer data gets added over time. That's a big problem for anything that demands up-to-the-minute information, such as newly disclosed vulnerabilities. It also doesn't always offer the right answers, because the quality of a response depends on how users frame their questions and describe the context. Users have to take the time to refine their queries and experiment with the chatbots, which will require new skills in formulating prompts and developing expertise.

  • ChatGPT can cause extra work for coders.

It can't serve as a no-code solution or bridge the talent gap, because non-experts put in charge of the tool can't verify that its recommendations make sense. In the end, ChatGPT will create more technical debt, since security experts will have to vet any AI-produced code to confirm its validity and its security bona fides; the sketch after this list shows the kind of subtle flaw that vetting has to catch.

  • ChatGPT could potentially expose sensitive information.

By its very nature, input to chatbots is typically retained and used to retrain and improve the models themselves. That means sensitive data submitted in prompts can accumulate in one place, turning the chatbot into a single, attractive target for hackers probing an organization's weaknesses. We have already seen an early compromise in which users' chat histories were exposed.
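To make the earlier vetting point concrete, here's a hypothetical sketch of the kind of subtle flaw that AI-generated code often carries. The function and table names are invented for illustration; the pattern, a hand-built SQL string versus a parameterized query, is what expert review is there to catch.

```python
import sqlite3

# The kind of code a chatbot might plausibly generate: it "works," but it
# builds the SQL string by hand, leaving it open to SQL injection whenever
# `username` comes from an untrusted source.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# What a security reviewer would insist on: a parameterized query, where the
# driver handles escaping and the input can never alter the statement itself.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

The unsafe version passes ordinary functional testing, which is exactly why code that merely works cannot be the bar for acceptance.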

Given these issues, what should IT security managers do to protect their organizations and mitigate risk? Gartner has offered a few specific ways to become more familiar with the chatbots and recommends using Azure's version for experimentation because it does not capture sensitive information. Gartner also proposes putting the right policies in place to prevent confidential data from being uploaded to the bots, such as the policies Walmart enacted earlier this year.
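That kind of policy can also be enforced technically. As a minimal sketch, and not a description of any particular product, an outbound filter in front of a chatbot could screen prompts for obvious secrets before they leave the organization. Everything below is hypothetical and illustrative; a real deployment would use a proper DLP engine with far broader coverage.

```python
import re

# Illustrative patterns for obvious secrets; deliberately incomplete.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-like strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # card-number-like digit runs
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # pasted private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                  # AWS access key IDs
]

def screen_prompt(prompt: str) -> str:
    """Reject a prompt that appears to contain sensitive data."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: possible sensitive data detected")
    return prompt
```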

IT managers should also work on better and more targeted awareness and training programs. One consultant suggests using the chatbots themselves to generate sample training messages. Another technique: generate reports and analysis of cybersecurity threats that security experts can rewrite for the general public.

As ChatGPT continues to make headlines, we will need to be careful about which technology we embrace. In the coming years, investment priorities will likely shift so that privacy and compliance teams lean on security teams even more to ensure their privacy controls comply with new regulations. ChatGPT may or may not fit into that plan. Either way, security analysts need to weigh the pros and cons of the AI interface and decide whether it's truly worth the risk of integration.


Ron Reiter, co-founder and CTO, Sentra



