security

AI for security, security for AI: 2 aspects of the intersection of 2 hot … – S&P Global


AI has been a trending topic in technology for many years, but nothing has fueled interest like the explosive emergence of generative AI over the past year. As with many nascent trends, security often rises to the top of opportunities as well as concerns, and this is no less true with AI — it was a central focus of this year’s RSA Conference. It was also the theme of the opening keynote at Black Hat, where the AI Cyber Challenge, a Defense Advanced Research Projects Agency (DARPA) initiative launched by the Biden-Harris administration, was announced. That same week, DEF CON hosted the largest public “red teaming” (penetration testing) exercise against AI models to date.

In this report, we introduce distinctions between the intersection of these topics evident at these events. These include the application of AI to security issues and opportunities, which we abbreviate here as “AI for security,” and the security of the implementation, deployment and use of AI, which we refer to as “security for AI.”

SNL Image

According to 451 Research’s Voice of the Enterprise: AI & Machine Learning, Infrastructure 2023 survey, both aspects of this intersection are prominent for respondents implementing AI/machine learning (ML) initiatives. In terms of AI for security, threat detection is the most frequently reported area of existing investment (47% of respondents), and another 37% say they plan future investment. In terms of security for AI, security is the most frequently reported concern about the infrastructure that hosts, or will host, AI/ML workloads (21% of respondents, ahead of cost at 19%). These two (security and cost) well outdistance the next concern, reliability (11%). Another 46% of respondents say security is a concern, if not a top concern, giving a total of 67% of respondents reporting some degree of concern about security — the largest percentage of any response. We believe that the distinctions between AI for security and security for AI help define the broad outlines of coverage we plan for both. Both have made a substantial mark already on the technology products and services markets.

SNL Image

AI for security

Machine learning has played a role for many years already in security efforts such as malware recognition and differentiation. The sheer number of malware types and variants have long demanded an approach to this aspect of threat recognition that is both scalable and responsive, given the volume and rapid pace at which new attacks emerge to stay ahead of defenses. The application of machine learning to identifying activity baselines and anomalies that stand out has spurred the rise of user and entity behavior analytics, which can often provide early recognition of malicious activity based on variations from observed norms in the behavior of people as well as technology assets.

Readers Also Like:  Genpact Recognized with the 2023 CSO50 Award by Foundry's CSO - PR Newswire

Supervised machine learning has often been called upon to refine approaches to security analytics previously characterized by rules-based event recognition. Unsupervised approaches, meanwhile, have arisen to provide greater autonomy to security data analysis, which can offer greater alleviation of a security operations team’s burden in recognizing significant events and artifacts in an often overwhelming volume of telemetry from a wide range of sources.

The emergence of generative AI has introduced further opportunities for the application of AI to security priorities. Security operations (SecOps) is particularly fertile ground for innovation. Since attackers seek to evade detection, security analysts must correlate evidence of suspicious activity across a staggering volume of inputs. They must prioritize identifiable threats in this data to respond quickly, given that threats can have an impact within minutes. Security analytics and SecOps tools are purpose-built to enable security teams to detect and respond to threats with greater agility, but the ability of generative AI to comb through such volumes of data, realize valuable insight and present it in easily consumable human terms should help alleviate this load.

Early applications of generative AI to this opportunity show promise for enabling analysts — often limited in number relative to the challenges they face — to spend less time on data collection, correlation and triage, and to focus instead where they can be most effective. Generative AI can also be useful in finding and presenting relevant insight to less experienced analysts, helping them to build expertise as they grow in the field — an option that could prove useful in helping organizations counter the enduring shortage of cybersecurity skills.

It is therefore noteworthy that some of security’s largest competitors are also among the largest investors in generative AI. Examples seen earlier this year include OpenAI LLC-partner Microsoft Corp.’s introduction of Microsoft Security Copilot, the new offerings powered by Google Cloud Security AI Workbench, and Amazon Web Services Inc.’s alignment of Amazon Bedrock with its Global Security Initiative in partnership with global systems integrators and managed security service providers. Supporters of the DARPA AI Cyber Challenge announced at Black Hat include Anthropic, Google LLC, Microsoft, OpenAI, The Linux Foundation and the Open Source Security Foundation, in addition to Black Hat USA and DEF CON. The AI Cyber Challenge is a two-year competition that will offer nearly $20 million in prizes for innovation in finding and remediating security vulnerabilities in software using AI. Companies significantly invested in AI (and AI for security) are also highly visible in efforts to promote security for AI as well.

Readers Also Like:  Digital baby footprints offers added security - FOX 13 Tampa

Many other vendors tout the application of AI to security challenges — as the prior examples of machine learning applications to security suggest — in a field that seems likely to grow in both innovation among new entrants and evolution among current competitors. The range of opportunities is broad, as suggested by the ways in which our survey respondents already employ machine learning for security, compliance and related use cases.

SNL Image

Security for AI

The other major aspect of the security-AI intersection is the realm of mitigation for the security exposures faced by AI. These range from the security vulnerabilities that can potentially be incorporated from the body of both open-source and proprietary software on which AI is built, to the exposure of AI/ML functionality to misuse or abuse, and the potential for adversaries to leverage AI to define and refine new types of exploits. (We explored the generative AI aspects of this frontier of the threat landscape earlier this year.)

This area has already begun to make its mark on the cybersecurity products and services markets, from startups to major vendors and systems integrators, including a significant presence at the 2023 RSA Conference’s Innovation Sandbox and the Black Hat Startup Spotlight. Practitioners are growing the body of research on threats to security and privacy targeting AI, and are identifying ways to detect and defend against malicious activity across a number of concerns.

Among the most prominent recent examples, the Generative Red Team Challenge hosted by the AI Village at DEF CON 2023 was, according to organizers, the largest “red teaming” exercise held so far for any group of AI models. Supported by the White House Office of Science, Technology and Policy; the National Science Foundation’s Computer and Information Science and Engineering Directorate; and the Congressional AI Caucus, models provided by Anthropic, Cohere, Google, Hugging Face Inc., Meta Platforms Inc., NVIDIA Corp., OpenAI and Stability, with participation from Microsoft, were subjected to testing on an evaluation platform provided by Scale AI. Other partners in the effort included Humane Intelligence, SeedAI and the AI Vulnerability Database.

Readers Also Like:  US Tightens Curbs on AI Chip Exports To China, Widening Rift With ... - Slashdot

Existing approaches that have demonstrated value are getting an uplift in this new arena. The MITRE Corp., for example, spearheaded an approach to threat characterization with its Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) knowledgebase, which characterizes threat attributes in ways consumable by detection and response technologies to improve performance and foster automation. Recently, MITRE introduced a similar initiative in ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), which seeks to bring the same systematic approach to threat characterization for AI demonstrated with ATT&CK. While ATT&CK focuses on threats, the AI Vulnerability Database, noted above as a participant in the Generative Red Team Challenge, is a separate effort to catalog exposures, described as “an open-source knowledgebase of failure modes for AI models, datasets and systems.”

In the realm of securing the AI codebase, techniques that have been brought to bear on a broader approach to securing the software supply chain are being focused on AI by those specializing in the factors specific to this domain. Another perspective being brought to bear on the challenge is that of safety, where security is included among the objectives for applying the practices of safety assurance to AI among those with experience in both AI and safety engineering.

Upcoming reports will focus more specifically on the growing landscape of security for AI, as security practitioners and providers explore opportunities for adversaries and risk mitigation, in order to defend one of the most disruptive trends to shape the technology landscape in a generation.

This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
451 Research is part of S&P Global Market Intelligence. For more about 451 Research, please contact 451ClientServices@spglobal.com.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.