
Microsoft Brings Generative AI to Defenders with Security Copilot


Microsoft has infused generative AI built on OpenAI’s language models, including the technology behind ChatGPT, throughout its product portfolio, and the company is now bringing it to its security tools with the launch of Microsoft Security Copilot.

According to Microsoft, Security Copilot will help defenders use AI to work more efficiently by combining an advanced large language model based on GPT-4 with a security-specific model from Microsoft. That model in turn incorporates a growing set of security-specific skills informed by Microsoft’s global threat intelligence and more than 65 trillion daily signals.

The Redmond, Wash. tech giant says Security Copilot runs on Azure’s hyperscale infrastructure and delivers an enterprise-grade security and privacy-compliant experience.

The news is significant, as it is among the first applications of generative AI models from ChatGPT creator OpenAI to the cybersecurity space. Copilot tools leveraging GPT-4 have already been announced for the Microsoft 365 suite, so bringing the technology to Microsoft’s security tools was expected.

Microsoft says Security Copilot works by receiving a prompt from a security professional and leveraging the security-specific model to deploy skills and queries that maximize the value of the large language model’s capabilities.
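To make that described flow concrete, the following minimal Python sketch illustrates the general idea of a prompt being routed through a security-specific model that selects skills and queries before a large language model composes the answer. Every class and function name here is a hypothetical stand-in invented for illustration; Microsoft has not published an API for Security Copilot in this announcement, and this is not it.

```python
# Purely illustrative sketch of a prompt -> security model -> skills -> LLM flow.
# All names are hypothetical stand-ins, not Microsoft's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A security-specific capability, e.g. a query against a data source."""
    name: str
    run: Callable[[str], str]  # takes the analyst's prompt, returns findings

def lookup_signins(prompt: str) -> str:
    return "3 anomalous sign-ins found for the account named in the prompt."

def lookup_alerts(prompt: str) -> str:
    return "2 related endpoint alerts in the last 24 hours."

SKILLS = [
    Skill("sign-in-history", lookup_signins),
    Skill("related-alerts", lookup_alerts),
]

def select_skills(prompt: str) -> list[Skill]:
    """Stand-in for the security-specific model: pick the skills and queries
    relevant to the analyst's prompt (a real system would rank and filter)."""
    return SKILLS

def language_model(prompt: str, context: list[str]) -> str:
    """Stand-in for the large language model: compose an answer from the
    prompt plus whatever the skills returned."""
    return f"Summary for: {prompt}\n" + "\n".join(f"- {c}" for c in context)

def copilot(prompt: str) -> str:
    skills = select_skills(prompt)                     # security model chooses skills
    context = [skill.run(prompt) for skill in skills]  # deploy skills and queries
    return language_model(prompt, context)             # LLM composes the response

if __name__ == "__main__":
    print(copilot("Investigate suspicious sign-ins for user alice@example.com"))
```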

The cybersecurity-trained model adds a learning system to create and tune new skills, helping catch what other approaches might miss and augmenting a security professional’s work. As a result, Security Copilot is designed to help with incident response, threat detection and strengthening security postures, Microsoft says.

Like other applications of generative AI, Security Copilot will not always be 100% accurate, and AI-generated content can contain mistakes, writes Vasu Jakkal, Microsoft’s corporate vice president of security, compliance, identity and management, in a blog post.

However, Security Copilot, currently available in private preview, is a closed-loop learning system, meaning it continually learns from user interactions and lets users provide explicit feedback through a feature built directly into the tool.
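A closed-loop system of that kind can be pictured, in heavily simplified form, as responses that carry an identifier to which explicit user feedback is attached for later tuning. The names and structures below are invented for illustration and do not reflect Microsoft’s implementation.

```python
# Toy sketch of a closed-loop feedback mechanism: each response gets an ID,
# and explicit user feedback is stored against it for later tuning.
import uuid

FEEDBACK_LOG: dict[str, list[str]] = {}

def answer(prompt: str) -> tuple[str, str]:
    """Return a response plus an ID that later feedback can reference."""
    response_id = str(uuid.uuid4())
    FEEDBACK_LOG[response_id] = []
    return response_id, f"Draft answer for: {prompt}"

def record_feedback(response_id: str, comment: str) -> None:
    """Explicit, user-supplied feedback tied to a specific response, kept
    so future answers can be adjusted."""
    FEEDBACK_LOG[response_id].append(comment)

rid, text = answer("Summarize this incident for the CISO")
record_feedback(rid, "Too much low-level detail; focus on business impact.")
print(text, FEEDBACK_LOG[rid])
```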

“As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers,” Jakkal says.

Watch: Microsoft’s introduction of Security Copilot and announcement event



According to Microsoft, Security Copilot will integrate with Microsoft’s end-to-end security products and will expand over time to a growing ecosystem of third-party products. In the company’s official news release, the only Microsoft security products mentioned with native Security Copilot integrations were Microsoft Sentinel and Microsoft Defender.

Like other generative AI assistants built on the same models, Security Copilot is designed to simplify a user’s work. In this case, the tool will help defenders respond to security incidents within minutes instead of hours or days, Jakkal says. It will provide step-by-step guidance and context through a natural language-based investigation experience to speed up incident response, and will also summarize any process or event and tune its reporting for specific audiences.
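The idea of tuning reporting for specific audiences can be sketched, again purely illustratively, as the same incident data rendered through different templates depending on who will read it. The fields, templates and incident details below are invented for this example.

```python
# Illustrative only: one incident, two audience-specific report framings.
INCIDENT = {
    "id": "INC-0042",
    "impact": "Two finance workstations isolated; no data exfiltration observed.",
    "technical": "Phishing email delivered a loader; C2 traffic blocked at the proxy.",
}

TEMPLATES = {
    "executive": "Incident {id}: {impact}",
    "soc_analyst": "Incident {id}: {technical} Impact: {impact}",
}

def report(audience: str) -> str:
    """Render the same incident for a given audience."""
    return TEMPLATES[audience].format(**INCIDENT)

print(report("executive"))
print(report("soc_analyst"))
```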

Security Copilot is also designed to help detect malicious activity that would otherwise go unnoticed by surfacing prioritized threats in real time and anticipating a threat actor’s next move with reasoning based on Microsoft’s threat intelligence.
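Surfacing prioritized threats can be pictured, in toy form, as scoring alerts so that threat-intelligence corroboration outweighs raw severity and the riskiest items rise to the top. The fields and weights below are made up for illustration, not drawn from Microsoft’s product.

```python
# Toy prioritization: alerts corroborated by threat intelligence rank higher.
ALERTS = [
    {"title": "Impossible travel sign-in", "severity": 3, "intel_match": True},
    {"title": "Outdated TLS cipher in use", "severity": 1, "intel_match": False},
    {"title": "Known C2 domain contacted", "severity": 2, "intel_match": True},
]

def priority(alert: dict) -> int:
    # Threat-intel corroboration outweighs raw severity in this toy scoring.
    return alert["severity"] + (5 if alert["intel_match"] else 0)

for alert in sorted(ALERTS, key=priority, reverse=True):
    print(f"{priority(alert):>2}  {alert['title']}")
```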

Cybersecurity professionals are in short supply, but the demand for security skills continues to increase. Microsoft positions Security Copilot as an intelligent security assistant that can help organizations continue to do more with less and augment their security professionals’ skills. In addition, the tool can help professionals learn new skills and approaches, Jakkal says.

Microsoft also pledges that it is developing Security Copilot and other AI systems responsibly, including giving users control and ownership over their data and keeping an organization’s data from being used to train AI models used by other organizations.

Much has been made of autonomous security solutions, but security will remain a human-centric function, Jakkal writes.

“With Security Copilot, we are building a future where every defender is empowered with the technologies and expertise that enable them to reach their full potential,” Jakkal says. “Technology will play an essential role on this journey, but successful security is, and will continue to be, a human endeavor.”




