Illustration: Tiffany Herring/Axios
Microsoft is enlisting ChatGPT to join the front lines of the security world.
Driving the news: The tech giant unveiled its first generative AI cybersecurity tool today during the virtual Microsoft Secure event.
- Called Microsoft Security Copilot, the tool is aimed at helping network defenders streamline information about both new threats and ongoing attacks on their networks.
- Microsoft, which has invested billions of dollars into ChatGPT maker OpenAI, is now the first major cybersecurity provider to introduce a new product integrating ChatGPT into incident response.
How it works: Security Copilot looks nearly identical to the ChatGPT interface, but this version draws its answers from a small set of vetted security sources, according to a demo shared with Axios.
- Company security teams can use the tool in two ways: as a chatbot to help prioritize relevant threat research and vulnerability disclosures, and as an assistant to figure out the extent of ongoing cyber incidents on their networks.
Right now, the new tool only pulls information from Microsoft’s own threat intelligence and products, the Cybersecurity and Infrastructure Security Agency, and NIST’s National Vulnerability Database. Each response includes a citation to the source material (the sketch after this list shows what that kind of vetted lookup can look like).
- Users can provide feedback on the responses, which will help train the AI models running Security Copilot.
- Searches remain private to the company using the tool, and Microsoft doesn’t use searches to train the “foundational AI models,” Vasu Jakkal, Microsoft’s corporate vice president of security, said in a demo video shared with reporters.
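For a sense of what vetted-source grounding with citations looks like in practice, here is a minimal sketch in Python that queries NIST's public NVD API and attaches a source link to each result. The endpoint and its parameters are real; the function name, field selection, and output format are illustrative assumptions, not Security Copilot's actual implementation.

```python
# Minimal sketch: look up CVEs from a vetted source (NIST's NVD) and
# attach a citation to each result, mirroring the pattern described
# above. Everything beyond the public NVD endpoint is hypothetical.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_vulnerabilities(keyword: str, limit: int = 5) -> list[dict]:
    """Query the NVD for CVEs matching a keyword, returning each
    result with a citation back to the source record."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        # Pull the English-language description from the record.
        description = next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"),
            "",
        )
        results.append({
            "id": cve["id"],
            "summary": description,
            # Citation to the source material, as each response includes.
            "source": f"https://nvd.nist.gov/vuln/detail/{cve['id']}",
        })
    return results

if __name__ == "__main__":
    for vuln in lookup_vulnerabilities("Microsoft Exchange"):
        print(f"{vuln['id']}: {vuln['summary'][:80]}... ({vuln['source']})")
```

Grounding answers in a small, citable corpus like this is one common way to limit a generative model's tendency to invent facts: every claim can be traced back to a record a defender can verify.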
The big picture: Security teams are often inundated with alerts about new vulnerabilities, attack techniques and cybercriminal gangs, making it difficult for them to keep up with everything coming through their networks.
- Seven in 10 organizations say they struggle to keep up with the number of alerts that flood their systems every day, according to a 2022 report from cybersecurity firm Kaspersky.
- One purpose of Security Copilot is to sift through those alerts and flag the ones an organization needs to prioritize based on its unique needs, leaving defenders more time to protect their networks from attacks.
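To make the triage idea concrete, here is a minimal sketch of scoring a flood of alerts against an organization's own priorities so the most relevant ones surface first. The Alert fields, weights, and scoring rule are all hypothetical stand-ins for the organization-specific prioritization described above, not Microsoft's actual ranking logic.

```python
# Minimal sketch: rank incoming alerts by severity plus organizational
# context so defenders see the likeliest-to-matter alerts first.
# All fields and weights here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "endpoint", "email gateway"
    severity: int      # 1 (informational) .. 5 (critical)
    asset: str         # affected system
    crown_jewel: bool  # does it touch a business-critical asset?

def triage(alerts: list[Alert], top_n: int = 10) -> list[Alert]:
    """Return the top_n alerts, ranked by a simple priority score."""
    def score(a: Alert) -> int:
        s = a.severity * 10
        if a.crown_jewel:
            # Weight alerts on business-critical assets more heavily,
            # reflecting an organization's "unique needs."
            s += 25
        return s
    return sorted(alerts, key=score, reverse=True)[:top_n]

# Example: the alert on the critical payroll database outranks
# routine noise from less important systems.
alerts = [
    Alert("endpoint", 2, "dev-laptop-14", False),
    Alert("email gateway", 3, "mail-relay-01", False),
    Alert("endpoint", 4, "payroll-db-02", True),
]
for a in triage(alerts, top_n=3):
    print(a.asset, a.severity, a.crown_jewel)
```

A generative tool layers natural-language summarization on top of this kind of ranking, but the underlying value is the same: fewer alerts demanding human attention.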
What they’re saying: “This is really about simplifying the complex for defenders and helping them find things that others might miss,” Chang Kawaguchi, vice president and AI security architect at Microsoft, told Axios.
Between the lines: Microsoft sees this new product, focused on security investigations, as the first of many ways that generative AI can help security professionals in the near future.
- “We think there are opportunities across this entire space, and certainly we envision the Security Copilot will help many different roles and not just this one,” Kawaguchi told Axios.
Zoom out: Employers across the workforce are weighing how to bring generative AI into their workflows without risking a leak of corporate secrets and intellectual property.
- For now, the best safeguard against misinformation and data leaks is for employers to control what information the models are trained on and how employees can use the technology.
Yes, but: Generative AI is still in its early days, and it consistently outputs misinformation, half-formed ideas or inaccurate answers to technical questions.
- Even Microsoft’s Security Copilot is already making some mistakes: In the demo, the AI-generated outputs cited “Windows 9,” which doesn’t exist.
What’s next: Microsoft is testing the new product with a limited number of clients now to get feedback, but it plans to start offering the product to a broader set of Microsoft customers in the near future.