
RSAC panel warns AI poses unintended security consequences – TechTarget


SAN FRANCISCO — While a panel of experts at RSA Conference 2023 touted generative AI for a host of security uses, including incident response, they also warned that the technology's rapid adoption will present unintended consequences, particularly around the spread of disinformation.

Ram Shankar Siva Kumar, a data scientist in Azure Security at Microsoft, moderated the panel, titled "Security as Part of Responsible AI: At Home or At Odds," on Tuesday during the conference, with panelists Vijay Bolina, CISO at Google DeepMind; Rumman Chowdhury, founder of Bias Buccaneers; and Daniel Rohrer, vice president of software security at Nvidia. The discussion addressed whether and how security can keep pace with the whirlwind of large language model use that OpenAI's ChatGPT sparked beginning in November.

The panelists emphasized how the rapid adoption of AI affected even the discussion points of their panel, because so much has changed in the space over the last six months. One consistent challenge with responsible AI use, however, was the potential for unintended consequences.

Chowdhury defined unintended consequences as the result of well-meaning people accidentally implementing something harmful, as opposed to the work of malicious attackers. The difference between the two matters, she said, because it affects the approach to solving the problem.

"In one case, you're looking for people who create bots or spread disinformation intentionally, and then there are people who spread it unintentionally because they believe it. Both need to be resolved," Chowdhury said. "People can make deepfakes that are malicious, but if no one shares them, it doesn't have a big impact."


She attributed the heightened use of generative AI to enterprises' needs for critical thinking and fast analysis, which the technology does address. On the other hand, the panelists emphasized their concerns, such as the potential for joblessness in particular fields, inherent bias, and even "hallucinations."

Hallucinations occur when an LLM provides responses that are inaccurate or not based in fact. This can lead to the spread of mass disinformation, Chowdhury warned.

“At Twitter, we worried a lot about election disinformation. AI may take that and amp it up. We have a politically contentious situation in a world in which it is very simple to make and spread disinformation at scale,” she said.

Ryan Kovar, distinguished security strategist at Splunk, attended the panel Tuesday and said the hallucination aspect rings true. When Kovar asked ChatGPT to produce a summary of himself, he received a mix of work published by him and by a colleague; the AI conflated his work and his colleague's throughout.

"Another problem is AI doesn't lie, but it infers. You have to be more specific and get it down to what you're actually asking," Kovar said. "Still, the only people who are going to lose are the ones who don't adapt to AI."

To curb these problems, Rohrer said there needs to be a focus on building tools to manage those unintended consequences, and problems need to be addressed with a systems-based approach. Currently, he believes there’s an overemphasis on the model itself, rather than the embedded system.

Since the onset of increased AI use, Rohrer has dealt with the legal and ethics teams more than ever.


“What I learned in those conversations is that a lot of the things we do and the way we think in security applies very well. Looking at risks versus harms. It’s the same things we want to do here,” Rohrer said.

Chowdhury similarly characterized generative AI as a merging of ethics and security. The two fields are alike, she said, but not the same.

How AI helps

Just as AI use presents problems, it’s also been implemented in various ways to improve enterprises’ defensive postures. New products incorporate AI to help gather and analyze threat intelligence, as well as remediate vulnerability risks. The panelists noted it’s also been helpful for red team training.

Bolina, who formerly worked at Mandiant, said it will help with incident response (IR) as well.

“As a former IR, I think if anything it will make our jobs more efficient and more creative. We’ll be able to do things much faster,” Bolina said during the panel.

Separately, Jen Miller-Osborn, director of threat intelligence at Palo Alto Networks' Unit 42, agreed that AI and LLMs are helpful for IR. She said she has observed their benefit to security operations teams as well as within Palo Alto Networks. The vendor has been using AI and machine learning tools for some time to automate aspects of IR, especially low-level tasks.

"We're able to save our people for the actual incidents where you need people," Miller-Osborn said. "We've taken our IR time down from 40 days to one minute because machines can do that. They can make those determinations much faster than people, especially when you have a platform where it can make ties from the firewall to the endpoint to the logs we're seeing."


Arielle Waldman is a Boston-based reporter covering enterprise security news.


