
Where is the AI?


The recent mass media love affair with ChatGPT has led many to believe that AI is a “here and now” technology, expected to become pervasive in enterprise and consumer products in the blink of an eye. Indeed, Microsoft’s $10B investment in OpenAI, the company behind ChatGPT, has many expecting a thorough integration of AI across Microsoft’s product line, from Office 365 to Xbox.

The company has already integrated ChatGPT into its Bing search engine and GitHub Copilot, announced that ChatGPT is now available in its Azure OpenAI service, and is looking at further integration into its Word, PowerPoint, and Outlook apps.

But is AI becoming mainstream in security? We’ve seen AI advancements in the cybersecurity world for the better part of the past decade. Companies like Cylance (acquired by BlackBerry), Darktrace, and many others were marketing their AI-based security technology on billboards and signs at Black Hat and along the 101 near SFO in 2017 and 2018.

From my perspective in the venture world, AI penetration has barely scratched the surface of the cybersecurity market. But to do a sanity check, I recently spoke to over a dozen top CISOs, security executives, and practitioners. Their feedback confirmed my initial impression that AI is still in the early stages of this market. More interesting to me was that these experts disagreed about where AI plays a meaningful role today.

AI in the cybersecurity market

As all my experts pointed out, AI is excellent today at helping a human sort through large quantities of data, reducing “background noise,” and finding patterns or anomalies that would otherwise be very difficult and time-consuming to discover.

AI is also good at creating new threat variants and patterns based on its modeling of the past. However, AI is not adept at predicting the future, despite what some marketing materials may lead you to believe. It may help demonstrate what a future attack could look like, but it cannot say with certainty whether a specific exploit will be unleashed.

Another broad belief among the experts was that the AI hype is ahead of reality. While every vendor talks about AI, the executives believe there is little (to no) AI integration in most of the products they use today.


One prominent F500 security executive stated, “While many vendors claim the use of AI, it is not transparent to me that it is there. For example, AI might be the secret sauce within SIEM technologies or complement threat detection and threat hunting activities. But my skepticism is due to the lack of transparency.” If this skilled and experienced executive doesn’t know “where the beef is,” where is the reality today?


The perceived reality

Perception is reality, they say, so what do these industry experts perceive? Or conversely, where is today’s AI reality?

The common belief among those I spoke with is that AI is and will be valuable when large datasets are available, both for training and within the actual use case. The experts view SIEM, email phishing detection, and endpoint protection as three of the most likely segments where AI plays a somewhat more significant role today and will likely continue to provide value.

In the SIEM/SOAR category, AI plays a role today, sorting through large quantities of security event data to help humans more quickly detect and respond to threats and exploits. Splunk, in particular, was mentioned as a leading AI-enabled provider in this segment. Again, this view was not universally shared by the experts, but most thought AI penetration was more likely here than in other categories.
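To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of unsupervised anomaly scoring a SIEM-style pipeline might apply to event data. It uses scikit-learn’s IsolationForest; the event features and values are hypothetical, not drawn from any vendor’s product.

```python
# Illustrative only: unsupervised anomaly scoring over toy "event" features.
# Feature choices and values are hypothetical, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per event: [events_per_minute, distinct_destinations, failed_logins]
normal_events = rng.normal(loc=[20, 5, 1], scale=[5, 2, 1], size=(1000, 3))
suspicious_events = np.array([
    [400, 80, 30],  # burst of traffic fanning out to many hosts
    [15, 3, 60],    # normal volume, but a spike in failed logins
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# Lower (more negative) scores are more anomalous; in practice these would be
# surfaced to an analyst rather than acted on automatically.
for event, score in zip(suspicious_events, model.decision_function(suspicious_events)):
    print(f"event={event.tolist()}  anomaly_score={score:.3f}")
```

The point of the exercise is the one the experts made: the model does the sorting and surfaces the outliers, but a human still decides what they mean.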

In the email filtering and anti-phishing category, large amounts of email data can be used to train systems from companies like Proofpoint and Mimecast, which effectively find many phishing attacks that arrive in an inbox. Several executives I spoke to believed that some AI was powering these products, while a few questioned whether AI was really the driving force behind the categorization and detection.
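As a rough illustration of the underlying approach (and not a claim about how any particular vendor’s product works), a supervised text classifier can be trained on labeled email data. The sketch below uses a TF-IDF plus logistic regression pipeline on a made-up toy corpus.

```python
# Toy sketch of supervised phishing classification: TF-IDF features plus
# logistic regression. The corpus and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: wire transfer needed today, reply with banking details",
    "Team lunch moved to Thursday, same place as last time",
    "Attached is the Q3 budget spreadsheet for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

new_email = ["Please confirm your password to avoid account suspension"]
# Second column of predict_proba is the estimated probability of the phishing class.
print(clf.predict_proba(new_email)[0, 1])
```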

Endpoint companies have leveraged data collected from millions of machines for years to help train their systems. Formerly, these systems produced signatures for pattern-matching across their installed base. Today these products can use AI to detect more dynamic exploits.
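To illustrate that shift in approach (again, not any specific product), the sketch below contrasts a static hash-based signature check with a crude feature-weighted behavior score; the signature database, feature names, and weights are all invented for this example.

```python
# Contrast sketch: static signature lookup vs. a stand-in behavioral score.
# The signature database, behavior features, and weights are all invented.
import hashlib

KNOWN_BAD_HASHES = {"5f4dcc3b5aa765d61d8327deb882cf99"}  # hypothetical signature DB

def signature_match(payload: bytes) -> bool:
    """Classic approach: flag only exact, previously catalogued payloads."""
    return hashlib.md5(payload).hexdigest() in KNOWN_BAD_HASHES

def behavior_score(features: dict) -> float:
    """Stand-in for a trained model: sums weights of observed behaviors."""
    weights = {"spawns_shell": 0.5, "writes_startup_key": 0.3, "encrypts_user_files": 0.6}
    return sum(w for name, w in weights.items() if features.get(name))

variant = b"slightly mutated malware body"  # hash no longer matches any signature
print(signature_match(variant))             # False: the signature check misses it
print(behavior_score({"spawns_shell": True, "writes_startup_key": True}))
# High behavior score despite no signature hit: the variant still looks malicious.
```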

While no AI-based system can detect every zero-day attack (as mentioned earlier, AI can’t predict the future), these newer products from companies like CrowdStrike are perceived to close the gap more effectively.



One of the F500 executives I spoke to thought with 100% certainty that CrowdStrike was the best example of a company that demonstrated AI-delivered value. On the other hand, two of the CISOs mentioned that they had no proof that AI was really inside this vendor’s endpoint product, even though they were paying customers.

From just these three segments mentioned above, and the discrepancies in opinion, it is clear that the cybersecurity industry has a problem. When some of the top executives and practitioners in the industry don’t know whether AI is deployed and driving value, despite the marketing claims, how do the rest of us understand what drives our critical defenses? Or do we care?

Perhaps we just abstract away the underlying technology and look at the results. If a system prevents 99.9% of all attacks, does it even matter whether it is AI-based or not? Is that even relevant? I think it is, as more of the attacks we will see will be AI-driven, and standard defenses will not hold up.

AI as problem solver

Looking to the future and other security segments, AI will play a significant role in identity and access management, helping discover anomalous system access. One CISO hoped AI would finally help solve the insider threat problem, one of today’s thornier areas. In addition, there is a belief that AI will help partially automate some of the Red Team’s responsibilities and perhaps automate all of the Blue Team’s activities.


One topic of discussion was the threat that adversaries will use ChatGPT and other AI-based tools to create malicious applications or malware. Another expert suggested that these same tools could be used to build better defenses: generating examples of malicious code before bad actors actually use them, examples that could then help inoculate defensive systems.

Another concern is that AI-generated code, without proper curation, will be as buggy or buggier than the human-authored code it was trained on. This creates vulnerable code at a wider scale than was previously possible and will create new issues for AI-based vulnerability scanners to address.

A final key point was the belief that Microsoft, Google, Amazon, and others will provide the underlying AI algorithms. The smaller cybersecurity players will own the data and the front-end product that customers interact with, but the back-end brain will leverage tech from one of the bigger players. So, in theory, an AI-based security company won’t technically own the AI.


AI in the future

We are in the early days of AI’s penetration into our security defenses. While AI has been in the research community for decades, the technologies and platforms that make it practical and deployable have just been launched in the past few years. But where will things be in the next 5-10 years?

I have a clear investment thesis on AI-enabled cybersecurity solutions and believe we will see much broader and deeper enterprise penetration within the next decade. From my experts’ point of view, the general belief is that AI will become a reality in multiple segments, including the three mentioned above.

While the experts believe AI will play an increasingly important role in every segment of security, the chances are higher in areas like:

  • Fraud detection
  • Network anomaly detection
  • Discovery of deepfake content, including on corporate websites and social media assets
  • Risk analysis
  • Compliance management and reporting (in fact, AI will likely create a new compliance headache for organizations, as more AI-focused regulations will create the need for new processes and policies)

There is so much uncertainty about where AI resides today in cybersecurity solutions and what it does or doesn’t do. But I believe this uncertainty will drive entrepreneurs to create a new wave of products to help navigate this new frontier. This will likely go well beyond cybersecurity, covering all the software products used in an organization.

AI applications over the next 5-10 years will be fascinating, to be sure. Today’s hype may be more than the reality, but plenty of surprises lie ahead as this market evolves.



