At the Black Hat kickoff keynote on Wednesday, Jeff Moss (AKA The Dark Tangent), the founder of Black Hat, focused on the security implications of AI before introducing the main speaker, Maria Markstedter, CEO and founder of Azeria Labs. Moss said that a highlight of the other Sin City hacker event — DEF CON 31, right on the heels of Black Hat — is a challenge sponsored by the White House in which hackers attempt to break top AI models in order to find ways to keep them secure.
Securing AI was also a key theme during a panel at Black Hat a day earlier: "Cybersecurity in the Age of AI," hosted by security firm Barracuda. The panel covered several other pressing topics, including how generative AI is reshaping the world and the cyber landscape, the potential benefits and risks associated with the democratization of AI, how the relentless pace of AI development will affect our ability to navigate and regulate tech, and how security players can evolve with generative AI to the advantage of defenders.
One thing all of the panelists agreed on is that AI is a major tech disruption, but they also noted that AI has a long history; it did not emerge in just the last six months. "One of the first and easy wins will be improved user interfaces for tools," said Mark Ryland, director, Office of the CISO at AWS.
From the perspective of policy, it’s about understanding the future of the market, according to Dr. Amit Elazari, co-founder and CEO of OpenPolicy and cybersecurity professor at UC Berkeley.
SEE: CrowdStrike at Black Hat: Speed, Interaction, Sophistication of Threat Actors Rising in 2023 (TechRepublic)
“Very soon you will see a large executive order from the [Biden] administration that is as comprehensive as the cybersecurity executive order,” said Elazari. “It is really going to bring forth what we in the policy space have been predicting: a convergence of requirements in risk and high risk, specifically between AI privacy and security.”
She added that AI risk management will converge with privacy security requirements. “That presents an interesting opportunity for security companies to embrace holistic risk management posture cutting across these domains.”
Attackers and defenders: How generative AI will tilt the balance
While the jury is still out on whether attackers will benefit from generative AI more than defenders, the endemic shortage of cybersecurity personnel presents an opportunity for AI to close that gap and automate tasks that might provide an advantage to the defender, noted Michael Daniel, president and CEO of Cyber Threat Alliance and former cyber czar for the Obama administration.
SEE: Conversational AI to Fuel Contact Center Market to 16% Growth (TechRepublic)
"We have a huge shortage of cybersecurity personnel," Daniel said. "… To the extent that you can use AI to close the gap by automating more tasks, AI will make it easier to focus on work that might provide an advantage," he added.
AI and the code pipeline
Daniel speculated that, because of the adoption of AI, developers could drive the exploitable error rate in code down so far that, in 10 years, it will be very difficult to find vulnerabilities in computer code.
Elazari argued that the generative AI development pipeline — the sheer amount of code creation involved — constitutes a new attack surface.
“We are producing a lot more code all the time, and if we don’t get a lot smarter in terms of how we really push secure lifecycle development practices, AI will just duplicate current practices that are suboptimal. So that’s where we have an opportunity for experts doubling down on lifecycle development,” she said.
Using AI to do cybersecurity for AI
The panelists also mulled over how security teams practice cybersecurity for the AI itself — how do you do security for a large language model?
Daniel suggested that we don’t necessarily know how to discern, for example, whether an AI model is hallucinating, whether it has been hacked or whether bad output means deliberate compromise. “We don’t actually have the tools to detect if someone has poisoned the training data. So where the industry will have to put time and effort into defending the AI itself, we will have to see how it works out,” he said.
Elazari said in an environment of uncertainty, such as is the case with AI, embracing an adversarial mindset will be critical, and using existing concepts like red teaming, pen testing, and even bug bounties will be necessary.
“Six years ago, I envisioned a future where algorithmic auditors would engage in bug bounties to find AI issues, just as we do in the security field, and here we are seeing this happen at DEF CON, so I think that will be an opportunity to scale the AI profession while leveraging concepts and learnings from security,” Elazari said.
Will AI help or hinder human talent development and fill vacant seats?
Elazari also said that she is concerned about the potential for generative AI to remove entry-level positions in cybersecurity.
"A lot of this textual and language work has also been an entry point for analysts. I'm a bit concerned that with the scale and automation of generative AI, even the few entry-level positions in cyber will get removed. We need to maintain those positions," she said.
Patrick Coughlin, GVP of Security Markets at Splunk, suggested thinking of tech disruption, whether AI or any other new tech, as an amplifier of capability — new technology amplifies what people can do.
“And this is typically symmetric: There are lots of advantages for both positive and negative uses,” he said. “Our job is to make sure they at least balance out.”
Do fewer foundation models mean easier security and regulatory challenges?
Coughlin pointed out that the cost and effort to develop foundation models may limit their proliferation, which could make security less of a daunting challenge. “Foundation models are very expensive to develop, so there is a kind of natural concentration and a high barrier to entry,” he said. “Therefore, not many companies will invest in them.”
He added that, as a consequence, a lot of companies will put their own training data on top of other people's foundation models, getting strong results by adding a small amount of custom training data to a generic model.
“That will be the typical use case,” Coughlin said. “That also means that it will be easier to have safety and regulatory frameworks in place because there won’t be countless companies with foundation models of their own to regulate.”
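The "small amount of custom training data on a generic model" pattern Coughlin describes typically starts with packaging an organization's own examples into the chat-style JSONL records that most fine-tuning services accept. The sketch below is illustrative only: the record shape, the `Example Corp` persona and the sample Q&A pairs are assumptions, not any specific vendor's required format, so check your provider's fine-tuning documentation before using it.

```python
import json

def build_finetune_records(examples):
    """Convert (question, answer) pairs drawn from a company's own data
    into chat-style training records.

    NOTE: this record shape is an assumption for illustration; the exact
    schema varies by fine-tuning provider.
    """
    records = []
    for question, answer in examples:
        records.append({
            "messages": [
                {"role": "system",
                 "content": "You are a support assistant for Example Corp."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        })
    return records

def to_jsonl(records):
    """Serialize records as JSONL: one training example per line."""
    return "\n".join(json.dumps(r) for r in records)

if __name__ == "__main__":
    examples = [
        ("How do I reset my password?",
         "Use the self-service portal and follow the emailed reset link."),
    ]
    print(to_jsonl(build_finetune_records(examples)))
```

A few dozen to a few thousand such records layered on a generic base model is the typical enterprise use case the panel describes, as opposed to training a foundation model from scratch.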
What disruption means when AI enters the enterprise
The panelists delved into the difficulty of discussing the threat landscape, given the speed at which AI is developing: AI has compressed an innovation roadmap that used to span years into weeks and months.
“The first step is … don’t freak out,” said Coughlin. “There are things we can use from the past. One of the challenges is we have to recognize there is a lot of heat on enterprise security leaders right now to produce definitive and deterministic solutions around an incredibly rapidly changing innovation landscape. It’s hard to talk about a threat landscape because of the speed at which the technology is progressing,” he said.
He also stated that inevitably, in order to protect AI systems from exploitation and misconfiguration, we will need security, IT and engineering teams to work better together: we’ll need to break down silos. “As AI systems move into production, as they are powering more and more customer-facing apps, it will be increasingly critical that we break down silos to drive visibility, process controls and clarity for the C suite,” Coughlin said.
Another of the panelists pointed to three consequences of the introduction of AI into enterprises from the perspective of a security practitioner: First, it typically introduces a new attack surface and a new class of critical assets, such as training data sets; second, it introduces a new way to lose and leak data, as well as new issues around privacy; and third, it has implications for regulation and compliance.
Generative AI as a boon to cybersecurity work and training
When the panelists were asked about the positive outcomes generative AI can produce, Fleming Shi, CTO of Barracuda, said generative AI models have the potential to make just-in-time security training viable.
“And with the right prompts, the right type of data to make sure you can make it personalized, training can be more easily implemented and more interactive,” Shi said, rhetorically asking whether anyone enjoys cybersecurity training. “If you make it more personable [using large language models as natural language engagement tools], people — especially kids — can learn from it. When people walk into their first job, they will be better prepared, ready to go,” he added.
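Shi's "right prompts, right data" idea can be sketched as a small prompt builder that personalizes a training exercise to a person's role and a recent incident type, then hands the string to whatever chat model the organization uses. The function name, parameters, and wording below are hypothetical, a sketch of the approach rather than any product's API.

```python
def training_prompt(employee_role, incident_type, reading_level="general"):
    """Assemble a prompt asking an LLM to generate a short, personalized
    security-awareness exercise.

    All parameters here are illustrative assumptions; the returned string
    can be sent to any chat-style language model.
    """
    return (
        f"You are a friendly security-awareness coach. Write a two-question "
        f"interactive quiz for a {employee_role}, at a {reading_level} "
        f"reading level, based on this recent incident type: {incident_type}. "
        f"Keep the tone conversational and give brief feedback for each answer."
    )

# Example: tailor training to a help-desk analyst after a phishing wave.
prompt = training_prompt("help-desk analyst", "credential phishing")
```

Because the prompt is generated per person and per incident, the training arrives just in time and in context, rather than as an annual one-size-fits-all module.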
Daniel said that he’s optimistic, “which may sound strange coming from the former cybersecurity coordinator of the U.S.,” he quipped. “I was not known as the Bluebird of Happiness. Overall, I think the tools we are talking about have the enormous potential to make the practice of cybersecurity more satisfying for a lot of people. It can take alert fatigue out of the equation and actually make it much easier for humans to focus on the stuff that’s actually interesting.”
He said he has hope that these tools can make the practice of cybersecurity a more engaging discipline. “We could go down the stupid path and let it block entry to the cybersecurity field, but if we use it right — by thinking of it as a ‘copilot’ rather than a replacement — we could actually expand the pool of [people entering the field],” Daniel added.
Read next: ChatGPT vs Google Bard (2023): An In-Depth Comparison (TechRepublic)
Disclaimer: Barracuda Networks paid for my airfare and accommodations for Black Hat 2023.