
Cybersecurity experts see uses and abuses in new wave of AI tech


Illustration: Sarah Grillo/Axios

Cybersecurity experts are cautiously optimistic about the new wave of generative AI tools like ChatGPT, even as malicious actors are already experimenting with them.

Cyber leaders see several ways generative AI can assist organizations’ defenses: reviewing code for efficiency and potential security vulnerabilities; exploring new tactics that malicious actors might employ; and automating recurring tasks like writing reports. (A sketch of what the code-review use case might look like follows the quotes below.)

  • “I believe the attention ChatGPT is currently getting is going to help us build better AI/machine learning security best practices,” Cloud Security Alliance co-founder and CEO Jim Reavis wrote in a blog post last month.
  • “I’m really excited as to what I believe it to be in terms of ChatGPT as being kind of a new interface,” Resilience Insurance CISO Justin Shattuck told Axios. “A lot of what we’re constantly doing is sifting through noise. And I think using machine learning allows us to get through that noise quicker. And then also notice patterns that we humans aren’t typically going to notice.”
  • “Text-based generative AI systems are great for inspiration,” Chris Anley, chief scientist at IT security company NCC Group, told Axios. “We can’t trust them on factual matters, and there are some types of questions they are currently very bad at answering, but they are very good at making us better writers — and even better thinkers.”
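For a concrete sense of the code-review use case, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompt, and deliberately vulnerable snippet (a textbook SQL injection) are illustrative assumptions, not a vetted workflow — and, as experts caution later in this piece, pasting code you don’t own into a third-party service carries its own risks.

```python
# Minimal sketch: asking a chat model to review a code snippet for
# security flaws. Assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# A deliberately vulnerable snippet: string-formatted SQL is a
# textbook injection risk the model should be able to flag.
SNIPPET = """
def get_user(conn, username):
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model is available
    messages=[
        {
            "role": "system",
            "content": "You are a security code reviewer. List potential "
                       "vulnerabilities in the user's code and suggest fixes.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

As the reality check below makes clear, the model’s answer should be treated as a starting point for a human reviewer, not a verdict.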

Reality check: The idea of using chatbots to review or write secure code has already been called into question by some experts and researchers.

  • A Stanford study released last November found that AI assistants led coders to write more vulnerable code: “Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access,” researchers wrote in the study’s overview.
  • “Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.”
  • Anley ran his own experiment last week, asking ChatGPT to find vulnerabilities in code with varying levels of security flaws. He found a number of limitations: “Like a talking dog, it’s not remarkable because it’s good; it’s remarkable because it does it at all.”

Using generative AI to review code strikes some experts as particularly dangerous.

  • “How the hell are software engineers pasting their code into something they don’t own?” Ian McShane, vice president of strategy at security firm Arctic Wolf and former Gartner analyst, told Axios. “Would you phone up random Steve off the street and say, ‘Hey, come and have a look through my financial auditing? Can you tell me if anything’s wrong?'”
  • McShane does see benefits in the approachable chatbot interface, which lowers the barrier to entry for security work. But unknowns around the training data and its transparency also give him pause.
  • “What mustn’t get lost is that this is still machine learning, machine learning trained on the data that’s provided,” he says. “And you know, there’s no better phrase than ‘garbage in, garbage out.'”

Meanwhile, hackers and malicious actors, always on the prowl for ways to speed up their operations, have been quick to incorporate generative AI into attacks.

  • Check Point Research spotted malicious hackers last month using ChatGPT to write malware, build data-encryption tools and code new dark web marketplaces.
  • “Recent AI systems are excellent at generating plausible sounding text and can generate variations on a theme quickly and easily, without tell-tale spelling or grammar errors,” Anley says. “This makes them ideal for generating variations of phishing emails.”

The bottom line: Shattuck maintains that organizations exploring AI should see through the hype and “understand the limitations, like truly understand where it’s at.”

  • “It’s not a one size fits all,” he says. “Don’t try to apply it to something it’s not … Don’t push it to prod[uction] tomorrow.”



