
Black Hat 2023 Keynote: Navigating Generative AI in Today's …


Azeria Labs CEO and founder Maria Markstedter speaks at Black Hat 2023 in Las Vegas on Aug. 10, 2023. Image: Karl Greenberg/TechRepublic

At Black Hat 2023, Maria Markstedter, CEO and founder of Azeria Labs, led a keynote on the future of generative AI, the skills needed from the security community in the coming years, and how malicious actors can break into AI-based applications today.

The generative AI age marks a new technological boom

Both Markstedter and Jeff Moss, hacker and founder of Black Hat, approached the subject with cautious optimism rooted in the technological upheavals of the past. Moss noted that generative AI is essentially performing sophisticated prediction.

“It’s forcing us for economic reasons to take all of our problems and turn them into prediction problems,” Moss said. “The more you can turn your IT problems into prediction problems, the sooner you’ll get a benefit from AI, right? So start thinking of everything you do as a prediction issue.”

He also briefly touched on intellectual property concerns, in which artists or photographers may be able to sue companies that scrape training data from original work. Authentic information might become a commodity, Moss said. He imagines a future in which each person holds “… our own boutique set of authentic, or should I say uncorrupted, data …” that the individual can control and possibly sell, which has value because it’s authentic and AI-free.

Unlike in the time of the software boom when the internet first became public, Moss said, regulators are now moving quickly to make structured rules for AI.

“We’ve never really seen governments get ahead of things,” he said. “And so this means, unlike the previous era, we have a chance to participate in the rule-making.”

Many of today’s government regulation efforts around AI are in early stages, such as the Blueprint for an AI Bill of Rights from the U.S. Office of Science and Technology Policy.

The massive organizations behind the generative AI arms race, especially Microsoft, are moving so fast that the security community is scrambling to keep up, said Markstedter. She compared the generative AI boom to the early days of the iPhone, when security wasn’t built in and the jailbreaking community kept Apple busy as it gradually came up with more ways to stop hackers.

“This sparked a wave of security,” Markstedter said, and businesses started seeing the value of security improvements. The same is happening now with generative AI, not necessarily because all of the technology is new, but because the number of use cases has massively expanded since the rise of ChatGPT.


“What they [businesses] really want is autonomous agents giving them access to a super-smart workforce that can work all hours of the day without running a salary,” Markstedter said. “So our job is to understand the technology that is changing our systems and, as a result, our threats.”

New technology comes with new security vulnerabilities

The first sign of a cat-and-mouse game being played between public use and security was when companies banned employees from using ChatGPT, Markstedter said. Organizations wanted to be sure employees using the AI chatbot didn’t leak sensitive data to an external provider, or have their proprietary information fed into the black box of ChatGPT’s training data.

SEE: Some variants of ChatGPT are showing up on the Dark Web. (TechRepublic)

“We could stop here and say, you know, ‘AI is not gonna take off and become an integral part of our businesses, they’re clearly rejecting it,’” Markstedter said.

Except businesses and enterprise software vendors didn’t reject it. So, the newly developed market for machine learning as a service on platforms such as Azure OpenAI needs to balance rapid development and conventional security practices.

Many new vulnerabilities come from the fact that generative AI capabilities can be multimodal, meaning they can interpret data from multiple types, or modalities, of content. One generative AI might be able to analyze text, video and audio content at the same time, for example. This presents a problem from a security perspective because the more autonomous a system becomes, the more risk it introduces.

SEE: Learn more about multimodal models and the problems with generative AI scraping copyrighted material (TechRepublic).

For example, Adept is working on a model called ACT-1 that can access web browsers and any software tool or API on a computer, with the goal, as stated on its website, of “… a system that can do anything a human can do in front of a computer.”

An AI agent such as ACT-1 requires security for both internal and external data, and it might ingest untrusted content as well: for example, an AI agent could download malicious code in the course of trying to solve a security problem.


That reminds Markstedter of the work hackers have been doing for the last 10 years to secure third-party access points or software-as-a-service applications that connect to personal data and apps.

“We also need to rethink our ideas around data security because model data is data at the end of the day, and you need to protect it just as much as your sensitive data,” Markstedter said.

Markstedter pointed out a July 2023 paper, “(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs,” in which researchers showed they could craft an image or an audio clip that looks or sounds harmless to human eyes and ears but carries hidden instructions that a multimodal AI will follow when it processes the file.

Malicious images like this could be sent by email or embedded on websites.

“So now that we have spent many years teaching users not to click on things and attachments in phishing emails, we now have to worry about the AI agent being exploited by automatically processing malicious email attachments,” Markstedter said. “Data exfiltration will become rather trivial with these autonomous agents because they have access to all of our data and apps.”

One possible solution is model alignment, in which an AI is trained to avoid actions that conflict with its intended objectives. Some attacks target model alignment specifically, crafting prompts that instruct large language models to circumvent their alignment.

“You can think of these agents like another person who believes anything they read on the internet and, even worse, does anything the internet tells it to do,” Markstedter said.

Will AI replace security professionals?

Along with new threats to private data, generative AI has also spurred worries about where humans fit into the workforce. Markstedter said that while she can’t predict the future, generative AI has so far created many new challenges that the security industry will need to solve.

“AI will significantly increase our market cap because our industry actually grew with every significant technological change and will continue growing,” she said. “And we developed good enough security solutions for most of our previous security problems caused by these technological changes. But with this one, we are presented with new problems or challenges for which we just don’t have any solutions. There is a lot of money in creating those solutions.”


Demand for security researchers who know how to handle generative AI models will increase, she said. That could be good or bad for the security community in general.

“An AI might not replace you, but security professionals with AI skills can,” Markstedter said.

She noted that security professionals should keep an eye on developments in the area of “explainable AI,” which helps developers and researchers look into the black box of how a generative AI uses its training data to reach its outputs. Security professionals might be needed to create reverse engineering tools that discover how the models make their determinations.

What’s next for generative AI from a security perspective?

Generative AI is likely to become more powerful, said both Markstedter and Moss.

“We need to take the possibility of autonomous AI agents becoming a reality within our enterprises seriously,” said Markstedter. “And we need to rethink our concepts of identity and asset management of truly autonomous systems having access to our data and our apps, which also means that we need to rethink our concepts around data security. So we either show that integrating autonomous, all-access agents is way too risky, or we accept that they become a reality and develop solutions to make them safe to use.”

She also predicts that on-device AI applications on mobile phones will proliferate.

“So you’re going to hear a lot about the problems of AI,” Moss said. “But I also want you to think about the opportunities of AI. Business opportunities. Opportunities for us as professionals to get involved and help steer the future.”

Disclaimer: TechRepublic writer Karl Greenberg is attending Black Hat 2023 and recorded this keynote; this article is based on a transcript of his recording. Barracuda Networks paid for his airfare and accommodations for Black Hat 2023.


