
Microsoft, OpenAI partnership provides cybersecurity's generative AI … – S&P Global


Microsoft Corp. has captured the attention of the cybersecurity and AI communities with the introduction of Microsoft Security Copilot, which pairs partner OpenAI LLC’s GPT-4 large language model with a security-specific model of Microsoft’s own. Security Copilot is integrated with the company’s security products portfolio and draws on Microsoft’s vast threat intelligence and hyperscaler resources.


Nearly two-thirds of respondents (64%) to 451 Research’s Voice of the Enterprise: Information Security, Budgets & Outlook 2023 study say that responsive security measures are “very important,” but many teams are overwhelmed with data, and staffing remains a challenge. Large-model AI — in particular, large language models and a growing range of multimodal models — could be a decided asset in revolutionizing the approach to such challenges. With its relationship with OpenAI and the expansive footprint of its security products, services and initiatives, Microsoft has made its bid for center stage. The move is part of the company’s broader generative AI strategy — many of its new offerings are also branded “Copilot” — for harnessing large language models to overhaul how technology is applied to problems. Microsoft may have seized the moment, but its bet on the synergies between large-model AI and cybersecurity will not be lost on the company’s security competitors, including some of the largest vendors not just in cybersecurity, but also among those who see AI as the future.


Context

The cybersecurity industry has pushed for greater integration of automation, machine learning and AI into security operations (SecOps) — to the point where the concept of the “autonomous security operations center” has become very visible in the space. Among the reasons:

– Security teams are overwhelmed with data. According to figures quoted by Microsoft, security teams take in data from more than 100 different sources on average, and Microsoft says it analyzes 65 trillion signals a day. Correlating this data is vital to recognizing a threat, however, and adversaries are highly motivated to keep their signals obscured (see the sketch below).

– Organizations are also hampered by the challenge of sourcing and staffing the security expertise necessary to manage security proactively and to analyze and react to all this data. In 451 Research’s VotE: Information Security, Organizational Behavior 2022 survey, 70% of respondents reported some level of staffing inadequacy, continuing a long-running trend.

An organization must respond when malicious activity is detected. People who can recognize and mitigate a threat are typically the first line of defense, but the scale and detail required to respond effectively can also be overwhelming. If a response is not timely or effective, a serious incident may follow.
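To make the correlation challenge concrete, the sketch below groups alerts from different sources that share an indicator within a time window — a cluster corroborated by multiple sources is a stronger signal than any single alert. This is a minimal illustration in Python; the `Alert` fields and the one-hour window are assumptions for the example, not any vendor’s schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # e.g., "endpoint", "firewall", "identity"
    indicator: str   # shared key such as a host name or IP address
    timestamp: float # epoch seconds
    detail: str

def correlate(alerts: list[Alert], window: float = 3600.0) -> list[list[Alert]]:
    """Group alerts that share an indicator and land within `window` seconds
    of one another, keeping only clusters seen by more than one source."""
    by_indicator = defaultdict(list)
    for alert in alerts:
        by_indicator[alert.indicator].append(alert)

    clusters = []
    for related in by_indicator.values():
        related.sort(key=lambda a: a.timestamp)
        cluster = [related[0]]
        for alert in related[1:]:
            if alert.timestamp - cluster[-1].timestamp <= window:
                cluster.append(alert)
            else:
                clusters.append(cluster)
                cluster = [alert]
        clusters.append(cluster)

    return [c for c in clusters if len({a.source for a in c}) > 1]
```

Even this toy version hints at why the problem overwhelms teams: real environments multiply the sources, indicator types and signal volumes by orders of magnitude.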


This is a large part of the push toward security operations center (SOC) automation, but people will likely remain critical to SecOps regardless, for two main reasons. First, cybersecurity is a human endeavor that requires the ability to think like an adversary and to anticipate the actions of adversaries that are highly motivated to overcome defenses. Second, even with automation in play, unsupervised automation may have unexpected consequences.

Enter emerging AI

Even with advances in artificial intelligence, machines can often get it wrong when AI is applied to automation. AI-enabled automation may be good at a number of things, but those tasks are often well defined and narrowly scoped; beyond those constraints, outcomes may be less predictable. People must be able to monitor, control and optimize automation.

These factors have converged on what has already become a watershed moment for generative AI across technology in general, particularly given its emphasis on human communication. ChatGPT and similar initiatives are not just interactive; they learn from their human interactions. Even when they err, they learn from the prompts and feedback of the people for whom they perform tasks, such as information analysis and code generation — and they are learning quickly. In only a few months, the jump in capability from OpenAI’s earlier models to GPT-4 demonstrated how rapidly these systems are improving. It should be no surprise, then, that generative AI has found a place in security operations.

Until now, much of the emphasis on applying AI in cybersecurity has been on areas such as threat recognition. More recently, we have seen the advent of virtual assistants for security analysts, able to identify resources that may be useful in gathering the context of events or in helping determine a course of action. These initiatives — represented by vendors such as StrikeReady Inc.’s Cyber Awareness and Response Analyst, known as CARA, MixMode Inc.’s AI-based analytics or Expel Inc.’s bots — help analysts navigate the high and diverse volume of inputs and actions required for effective threat detection and response, and they have been bellwethers of the moment at hand.

Microsoft Security Copilot launches

The introduction of Microsoft Security Copilot is likely to be disruptive to security technology more broadly, and not only because of the company’s substantial market presence. In 2021, the company said its security business made $10 billion in revenue over the prior 12 months — more than double its closest competitors in cybersecurity technology at that time. That claim has since grown to more than $20 billion as of early 2023. This is the business to which the company now brings its well-known relationship with OpenAI. Microsoft is bringing generative AI into a number of its offerings, with Copilot the branding for many, and given the opportunity in security, its security portfolio was an obvious destination as well.


Microsoft Security Copilot is a large language AI model powered by OpenAI’s GPT-4, combined with a security-specific Microsoft model that incorporates what Microsoft describes as a growing set of security skills informed by its global threat intelligence and vast signals volume. Security Copilot integrates with the Microsoft Security products portfolio, so it offers the most value to those with a significant investment in Microsoft security, but the company notes that support will be expanded to third-party products.

Users can give Security Copilot a prompt, to which it responds in a manner that will be familiar to those who have already been exploring ChatGPT and similar functionality. While Security Copilot calls upon its existing security skills to respond, it also learns new skills thanks to the learning system with which the security-specific model has been equipped. Users can save prompts into a “Promptbook,” a set of steps or automations that users have developed. This creates a body of knowledge and automated functionality that both the organization and Security Copilot can build on over time.
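Microsoft has not published a Promptbook schema, but the concept — a named, replayable sequence of prompts — can be sketched as a simple data structure. Everything below (class names, fields, the example investigation) is hypothetical, offered only to make the idea concrete.

```python
from dataclasses import dataclass, field

@dataclass
class PromptStep:
    prompt: str   # natural-language instruction for the assistant
    expects: str  # the kind of output the step should yield

@dataclass
class Promptbook:
    """Hypothetical model of a saved, replayable prompt sequence."""
    name: str
    steps: list[PromptStep] = field(default_factory=list)

suspicious_signin = Promptbook(
    name="Investigate suspicious sign-in",
    steps=[
        PromptStep("Summarize sign-in activity for the flagged account "
                   "over the last 24 hours.", expects="summary"),
        PromptStep("List the devices and IP addresses associated with "
                   "those sign-ins.", expects="entity list"),
        PromptStep("Recommend containment actions if any sign-in looks "
                   "anomalous.", expects="recommended actions"),
    ],
)
```

Capturing investigations this way is what would let an organization turn one analyst’s workflow into a repeatable asset — the point Microsoft appears to be making with the feature.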

The impact

Part of the reason this introduction is likely to be so resonant and disruptive is the human aspect that remains — and will remain — so vital to security operations. Generative AI produces output specifically intended to be presented to people, in human-readable or -usable fashion. The ability of large language models to comb through vast amounts of information and present it conversationally addresses one of the primary use cases of automation in SecOps: gathering the context of incidents and events to help analysts triage and escalate those that pose a significant threat.
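Security Copilot’s internals are not public, but the general pattern — handing raw event context to a large language model and asking for an analyst-readable triage summary — can be sketched against OpenAI’s public API. The model choice, system prompt and helper function here are illustrative assumptions, not Microsoft’s implementation.

```python
from openai import OpenAI  # requires the openai package (v1+) and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_incident(events: list[str]) -> str:
    """Turn raw event lines into a conversational triage summary.

    Illustrative only: Security Copilot's actual pipeline, models and
    grounding in threat intelligence are not publicly documented.
    """
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarize these security "
                        "events, assess their severity, and suggest next "
                        "triage steps for an analyst."},
            {"role": "user", "content": "\n".join(events)},
        ],
    )
    return response.choices[0].message.content
```

The hard part — and presumably where Microsoft’s security-specific model earns its keep — is grounding such summaries in reliable signals so that conversational fluency is not mistaken for accuracy.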

Generative AI can produce other kinds of output as well, such as the results of reverse engineering an exploit. One of the examples Microsoft gave was a Security Copilot-generated visualization of an exploit’s sequence, showing how the attack moved through an incident, along with the individual accounts, resources and components of the environment affected. The accompanying discussion produced by Security Copilot elaborated on its findings in a way that is readable by a wide variety of people, not just technical security personnel. It is not a stretch to envision such capability going to the next step: deploying functionality in production to achieve an operational objective in response to such findings. It is well known that generative AI can produce code.

To project much further would be speculative at best, but more than a few are anticipating where these developments could lead. Even so, the constraints described above that will keep people involved in cybersecurity operations — the need to think about adversarial and defensive tactics in ways that only people can, and the need to supervise AI and automation — will likely continue to shape the adoption of this technology for the foreseeable future. The more realistic near-term hope is that it reduces the demands on scarce human expertise and eases security operations for the personnel still required.


Safeguarding innovation

There are other concerns with these developments. Aware of this, Microsoft has emphasized the steps it is taking to deliver security AI “in a safe, secure and responsible way.” User data will be the user’s to own and control; it will not be used to train or enrich foundational AI models used by others, and user data and AI models are protected by compliance and security controls. We expect the company to disclose further details on these controls as its AI offerings come to market.

Pacing the industry

Time will tell whether Microsoft’s introduction of generative AI into the security toolset becomes transformative for the industry. At a minimum, its partnership with OpenAI, along with Microsoft’s other AI investments, is bound to attract attention in the near term.

Competitively, those with a stake in generative/large-model AI and security, particularly among other hyperscalers, will feel the immediate impact. Google LLC, fresh from its $5.4 billion acquisition of Mandiant Inc. in 2022, had already answered Microsoft’s GPT challenge with its introduction of Bard. Amazon.com Inc., while not competing directly in SecOps much beyond its own estate so far, introduced Amazon Security Lake at re:Invent 2022 but has yet to elaborate significantly on its plans. Amazon’s relationships with AI companies such as Hugging Face Inc. should be watched for moves that could find their way into security.

More directly affected will be a host of contenders in a wide variety of SecOps tech, including security information and event management, extended detection and response and its contributing technologies, and security automation. Many partner with cloud providers to deliver their offerings, but not all have shown a high level of commitment to the integration of interactive AI into their offerings, which seems likely to change. The greater integration of large-model AI into cybersecurity was already poised to be a prominent factor at the upcoming RSA Conference in San Francisco. The buzz will certainly not end there.

This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.


