
Amazon CSO likens security to psychological chess matches


Technology like generative AI can address some key security challenges confronting organizations, but professionals who overemphasize those capabilities miss the fundamental need to put people and their unique talents first.

“Security is a people issue,” Amazon CSO Stephen Schmidt said Monday during a presentation at AWS re:Invent in Las Vegas. “Computers don’t attack each other. People are behind every single adversarial action that happens out there.”

For Schmidt, winning in security is akin to playing chess — focusing on the board, how the pieces move and interact — while practicing psychology. Security professionals need to understand the human elements at play, including their own tendencies and opponents’ motivations.

“You’re not playing just one chess match,” Schmidt said. “You are playing dozens or hundreds of games at the same time, because you have a variety of adversaries with different motivations who are going after you.”

This cybersecurity scrum can feel overwhelming, but many defenders view generative AI as an ally that can automate repetitive tasks. Cybersecurity vendors across the landscape have released security tools infused with the technology and more are in the pipeline.

Generative AI could also ease a persistent and critical skills shortage in cybersecurity.

Tools like generative AI are critical because they can automate and scale routine work, freeing people for the complex decisions that demand nuanced focus, according to Schmidt. “My goal, by the way, is to have people focused on the most ambiguous, dynamic problems, which can’t be solved by software.”

How Amazon weighs AI benefits, risks

AI is “already radically changing our business,” but like any tool, it has its limits, Schmidt said.

Amazon has grappled with the benefits and risks that AI can deliver as organizations apply it to bolster security controls. When the company identifies an area where AI can help it achieve a particular goal, it weighs three questions that help balance security needs against business operations.

Where is the data? 

Understanding how data is handled for LLM training is critical to data security, particularly as it relates to how generative AI models access and potentially expose corporate information. 

Organizations should encrypt that data in transit and at rest, and validate that the permissions used to access it are scoped to the minimum necessary, Schmidt said.
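Schmidt did not walk through an implementation, but a minimal sketch of those controls on AWS, written in Python with boto3, might look like the following. The bucket name, KMS key alias, and prefix are hypothetical placeholders; the snippet encrypts training data at rest on upload, rejects connections that are not made over TLS, and scopes read access to a single prefix.

```python
import json

import boto3  # AWS SDK for Python

# Hypothetical placeholders for illustration only.
BUCKET = "example-llm-training-data"
KMS_KEY_ID = "alias/example-training-data-key"

s3 = boto3.client("s3")

# Encryption at rest: require server-side encryption with a KMS key on upload.
s3.put_object(
    Bucket=BUCKET,
    Key="corpora/finetune/records.jsonl",
    Body=b'{"text": "..."}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY_ID,
)

# Encryption in transit: a bucket policy denying any request not sent over TLS.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)

# Least privilege: an IAM policy document granting read-only access to a
# single prefix, rather than blanket access to the bucket.
read_only_scope = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/corpora/finetune/*",
    }],
}
```

The narrow Resource ARN in the last policy document is the load-bearing part: the training pipeline can read one prefix and nothing else.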

What happens with generative AI queries and any associated data? 

Files or information submitted as part of a generative AI query can lead to better predictions and results, but organizations need assurances that services handling that corporate data will keep it protected, Schmidt said.

The iterative back-and-forth of AI chatbot queries should be viewed as a potential risk, one that could jeopardize the organization’s ability to meet regulatory and compliance requirements.
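The talk named the risk rather than a control, but one common mitigation is to scrub each conversational turn before it leaves the organization and keep an audit trail of what was actually sent. A minimal sketch, with illustrative regex patterns and a deliberately simple audit record:

```python
import re

# Illustrative only: these patterns and the audit format are minimal
# examples, not a complete data-loss-prevention control.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
US_SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(turn: str) -> str:
    """Replace common identifier patterns with placeholders before submission."""
    return US_SSN.sub("[SSN]", EMAIL.sub("[EMAIL]", turn))

audit_log: list[dict] = []

def submit_turn(history: list[str], new_turn: str) -> list[str]:
    """Append a scrubbed turn to the conversation and record what was sent."""
    cleaned = scrub(new_turn)
    audit_log.append({"raw_length": len(new_turn), "sent": cleaned})
    return history + [cleaned]

# Usage: every iteration of the back-and-forth passes through the same gate.
history: list[str] = []
history = submit_turn(history, "Summarize the incident for jane.doe@example.com")
assert "[EMAIL]" in history[-1]
```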

Is the output of the generative AI models accurate enough?

The quality of results from generative AI models is steadily improving, but overreliance on those outputs without human oversight can expose organizations to problems that negate the benefits.

Organizations that use LLMs to generate custom code, for example, need to validate that it is well written and follows best practices before deploying it to production, Schmidt said.
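Schmidt did not name specific tooling. As one possible shape for that validation step, the sketch below gates generated Python on a syntax check and a static-linter pass before any human review; it assumes the ruff linter is installed and on PATH.

```python
import subprocess
import tempfile
from pathlib import Path

def gate_generated_code(source: str) -> bool:
    """Reject LLM-generated Python that fails basic static checks.

    A syntax check plus a linter pass is a floor, not a ceiling: code
    review and tests should still happen before production deployment.
    """
    # 1. The code must at least parse as valid Python.
    try:
        compile(source, "<generated>", "exec")
    except SyntaxError:
        return False

    # 2. The code must pass a static linter (assumes `ruff` is on PATH).
    with tempfile.TemporaryDirectory() as tmp:
        candidate = Path(tmp) / "candidate.py"
        candidate.write_text(source)
        result = subprocess.run(
            ["ruff", "check", str(candidate)], capture_output=True
        )
    return result.returncode == 0
```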

These questions guide how Amazon’s security teams think about generative AI and LLM services for internal Amazon use, Schmidt said.

“You, the customer, should have control of your data and be able to use the model of your choice in a safe and secure manner,” Schmidt said. “When it comes to AI, our guiding tenet is simple: Your data is just that, yours.”


