
Microsoft Security Copilot harnesses AI to give superpowers to cybersecurity fighters – ZDNet

The latest wave of advances in artificial intelligence has seen AI filter into most technology sectors, including workplace productivity tools, social media, and now cybersecurity.

Microsoft is leveraging the power of GPT-4 to launch a new, generative AI security product called Microsoft Security Copilot. 

Also: These experts are racing to protect AI from hackers 

The main idea is to use conversational AI to enhance the capabilities of security professionals, who are often overwhelmed by the sheer number and sophistication of today's attacks, in part because more than 3 million IT security jobs remain unfilled.

Microsoft Security Copilot combines the power of OpenAI's most advanced large language model (LLM), GPT-4, with a security-specific model from Microsoft. When a security professional enters a prompt, Security Copilot uses the LLM and the security-specific model to deploy skills and queries that help detect and respond to security threats more quickly and accurately. It's powered by Microsoft's global threat intelligence system, which the company claims parses more than 65 trillion signals every day.

Also: How to use the new Bing (and how it’s different from ChatGPT)

With Microsoft Security Copilot, defenders can respond to incidents within minutes, get critical step-by-step guidance through natural language-based investigations, catch what would otherwise go undetected, and get summaries of any process or event. 

Security professionals will be able to use the prompt bar to ask for summaries of vulnerabilities and of incidents in the enterprise, as well as more information about specific links and files. Using generative AI along with internal and external organizational information, Copilot generates a response that cites its sources.

Also: With GPT-4, OpenAI opts for secrecy versus disclosure

Like most AI models, it won't always perform perfectly and can make mistakes. However, Security Copilot operates in a closed-loop learning system that includes a built-in tool for users to provide feedback directly. And while it will incorporate only Microsoft's own security products at launch, the company claims it will "expand to a growing ecosystem of third-party products" over time.

Although the tool is a learning engine, Microsoft assures users that their data privacy is a priority: their data won't be used to train or enrich other AI models, and it will be protected by enterprise compliance controls that the company has not yet specified. The tech giant has yet to reveal a timeframe for the rollout of Security Copilot.


