SAN FRANCISCO – Much has changed at Secureworks, and in the cybersecurity industry overall, during Wendy Thomas' extended tenure at the company.
Thomas became CEO of the vendor more than a decade after joining in 2008 as vice president of finance. She first left Secureworks in 2011, after completing integration work tied to its acquisition by Dell. When she returned in 2015 to assist with the security vendor's IPO, she found the security landscape had completely changed.
“That’s when the changes in the industry were so clear and so stark,” Thomas told TechTarget Editorial. “Many companies that we see here today started in that 2011 to 2015 space. But yet the breaches were getting worse, the spend was still growing fast in trying to stem those breaches, and it just wasn’t really working. And when we finished that IPO process, the leadership team stepped back and said, ‘We need a different answer.'”
Thomas stayed at the company, remaining in financial executive roles until 2018, when she was appointed chief product officer. As product chief, she oversaw the two-year effort to build and launch the company's XDR platform, Taegis. XDR, or extended detection and response, represents an evolution of traditional enterprise endpoint security.
Taegis, launched in 2019 — a year after the term was coined — was an early player in the space. Then in 2021, Thomas became president and CEO.
Thomas sat down with TechTarget Editorial at RSA Conference 2023 to discuss the XDR landscape, operational technology (OT) security and the rise of AI across the industry.
Editor’s note: This interview was edited for clarity and length.
How has the detection and response space evolved, and has Secureworks faced any challenges in adjusting to this evolution?
Wendy Thomas: I think there are a couple of interesting trends in the space. Clearly the market has come to XDR. When we started this journey several years ago, no one knew exactly what we were talking about with the concept of an XDR platform. Part of the way you see people coming to that is to say, ‘Well, my [managed detection and response] or my [endpoint detection and response] is XDR.’ The reality is that it’s not. For us, about 40% of our detections are from the endpoint ecosystem. There’s a large gap if you’re not looking across the whole estate and enterprise.
The other trend is that our approach relative to some of those other players is what we call “open without compromise.” That means you can have an EDR of choice, whether it’s Microsoft or CrowdStrike or Carbon Black or SentinelOne. Many customers have mixed environments, especially if they’ve done any type of acquisition or want to do one in the future. That’s true of public cloud environments or firewalls. We take an inclusive approach because we know customers don’t have the luxury of pristine single-stack environments.
That’s where we see the advantage of our position against more closed-stack type providers — that ability to give customers choice and independence from a single provider. We see customers consolidating vendors; we don’t see many of them going single stack.
What has Secureworks been working on lately?
Thomas: We have been working in the manufacturing OT space. About a quarter of our customer base is in that space. We’ve been bringing some new capabilities to bear there, as well as some detection and response capabilities. We’ve been working in the space for a couple of reasons. One, obviously, we have a meaningful customer base in that area that we want to continue to serve. But the threat escalation against those organizations has been elevated, and there’s so much revenue and operational capability at stake, which affects all of us. That’s an area we’ve been focused on.
Organizations that use OT famously have a hard time securing budgets for security. Have you found it challenging, compared to your non-OT customers, getting OT-centric organizations to engage with you?
Thomas: It’s less about the ability to engage. I think there have been, unfortunately, enough headlines that those organizations are looking around and saying, ‘Alright, what are we going to do?’ Or their C-suite or board is asking them, ‘What is the posture?’ That’s the real question: What does good look like? How do we think about the right level of investment relative to the assets that we’re protecting? The length of the conversation is on that piece. How do they think about enterprise risk and return on investment for their security assets versus not? But as for these organizations not being aware that they are a potential target, I think we’re past that part of the equation.
At RSA, one of the things that surprised me was the extent to which vendors were pitching AI this year. How are you looking at this emerging trend of AI becoming such a big focus?
Thomas: [AI is] not new — just the discussion is new. It’s a funny thing, because clearly, we’ve been in the cybersecurity space for years and years. But it wasn’t until about five years ago that people in my family said, ‘Okay, now I know what you’re doing.’ That’s good. I think a similar thing is going on with AI. When we built the Taegis platform, for example, it was always built with AI. We never used the term AI because I think that term can be so misused. We talked about machine learning and statistical learning and different data science tech, so that is not new for us.
I think the attention on it is a lot larger than the step function change in the technology. Now there is potential here. It’s going to continue to evolve and be impactful. It’s just not as new as the conversation may make it seem. I think the conversation is as much about the possibilities of automation, speed to protection and scaling scarce resources, which are noble things, as it is about the technology itself. We always talk about not having enough experts and how they shouldn’t spend time doing things that aren’t value-added.
But part of the reason I think there’s a lot of conversation about it is that it makes people a little bit nervous too. What if it does the wrong thing? What if it gives you the wrong report? If you’ve played with ChatGPT, you know it can, even with good training data, spit out bad, anomalous answers. I think part of the intensity of the conversation is about how we protect ourselves when the machine gets it wrong, and how we even know when the machine gets it wrong. That is part of why people are discussing it so much.
Alexander Culafi is a writer, journalist and podcaster based in Boston.