AI is entering the enterprise application security tool stack – Cybersecurity Dive


Dive Brief:

  • AI-based technologies are making their way into the enterprise tool stack and are critical for application, cloud-native and data security, Rackspace said in a survey released Tuesday in association with Microsoft. The report is based on responses from 1,400 IT decision-makers. 
  • More than 3 in 5 IT decision-makers said AI has increased the need for cybersecurity, which has led to stricter data storage and access measures, the report found. Most organizations are also paying closer attention to sensitive data exposure. 
  • The AI wave has also corresponded with more cybersecurity investments. More than 3 in 5 respondents said their cyber budgets increased over the past year and, of those, one-third raised their budgets by more than 14%, Rackspace found.

Dive Insight:

More organizations are investing in cybersecurity as the cybercrime marketplace booms and the regulatory landscape changes. High-profile attacks, such as the recent incidents at Caesars and MGM Resorts and the slow-moving MOVEit disaster, are just a few examples of how bad a security incident can get. 

There’s also the rise of AI and its rapid influence on the enterprise IT ecosystem to consider. 

It hasn’t even been a year since OpenAI publicly released its ChatGPT model and sparked a race to adopt AI technologies. Initial insights point to AI’s outsized influence across the technology tool ecosystem, but it’s early days for adoption and many organizations are still experimenting. 

Organizations are moving quickly to implement AI application security tools, Gartner data show.

In early April, the analyst firm conducted a peer community survey of 150 IT and information security leaders at organizations using generative AI or foundation models and found that more than one-third are currently using or implementing AI application security tools. 

More than half of respondents are exploring the use of AI for application security, Gartner found. 

One area of conflict, however, is who owns responsibility for generative AI. While the security function can identify emerging risks, generative AI security ultimately falls under IT, more than 2 in 5 respondents said. Still, the vast majority of respondents, 93%, are at least somewhat involved in generative AI security and risk management efforts, Gartner found.
