Charlotte AI will be trained on a “continuous human feedback loop” from CrowdStrike security products
CrowdStrike has become the latest cyber security company to launch a generative AI support tool for frontline practitioners as competition in the space heats up.
The ‘Charlotte AI’ security assistant will operate across the company’s suite of security and threat intelligence platforms to support analysts in daily operations and identify emerging threats.
Charlotte AI works in a similar fashion to other generative AI tools and assistants launched in the last several months, giving security analysts real-time, prompt-based insights into security threats and providing recommended actions to mitigate risks.
“With Charlotte AI, everyone from the IT helpdesk to executives like CISOs and CIOs can quickly ask straightforward questions such as ‘what is our risk level against the latest Microsoft vulnerability?’ to directly gain real-time, actionable insights,” the company said.
CrowdStrike added that the tool will reduce response times for security analysts dealing with an incident and drive “better risk-based decision making”.
“Charlotte AI will empower less experienced IT and security professionals to make better decisions faster, closing the skills gap and reducing response time to critical incidents.”
CrowdStrike said the generative AI tool has been trained on up-to-date threat intelligence data and will be continuously fed by the “trillions of security events” captured and disclosed by customers using the firm’s security products.
“Charlotte AI will uniquely benefit from a continuous, human feedback loop from across CrowdStrike Falcon OverWatch managed threat hunting, CrowdStrike Falcon Complete managed detection and response, CrowdStrike Services, and CrowdStrike Intelligence,” the company said.
CrowdStrike said this massive data set of human intelligence is “wholly unique” to the generative AI platform, which relies heavily on human-validated content to provide insights for security practitioners.
Generative AI security tools
CrowdStrike’s announcement follows a host of similar launches by major tech players aimed at leveraging generative AI tools to support cyber practitioners and bolster operational security.
In March, Microsoft announced its Security Copilot tool, which uses GPT-4 generative AI to provide users with prompt-based security detection capabilities.
The tool analyses an organisation’s IT environment, comparing it against the 65 trillion signals received each day by Microsoft’s global threat intelligence team.
The launch of the security assistant drew significant attention, with Microsoft claiming it could enable security practitioners to respond to incidents within minutes rather than days.
Google has also waded into the generative AI security race, announcing its own suite of products in April.
Unveiled at the RSA conference, the Google Cloud Security AI Workbench tool is an “industry-first extensible platform” powered by a specialized, security-focused large language model (LLM) known as Sec-PaLM.
A key feature of this new suite of products was the Security Command Center AI tool – a premium version of Google Cloud’s existing Security Command Center service.
Google said the tool can provide users with “near-instant analysis of findings and possible attack paths” within cloud environments.
The tool provides analysts with easily digestible information in real time to reduce complexity and enhance their ability to react to ongoing security incidents.