GitLab, like its competitor GitHub, was built on the open-source Git project and is still an open-core company (i.e., a company that commercializes open-source software that anyone can contribute to). Since its 2011 launch as an open-source code-sharing platform, its DevOps platform has grown to more than 30 million users. In May 2023, the company launched new AI capabilities in its DevSecOps platform with GitLab 16, which included nearly 60 new features and enhancements, according to the company.
At the 2023 Black Hat conference this month, Josh Lemos, chief information security officer at GitLab, spoke with TechRepublic about DevSecOps, how the company infuses security features into its platform and how AI is accelerating continuous integration and making it easier to shift security left. Lemos explains that GitLab has its roots in source code management and continuous integration pipelines: a foundry, if you will, for building software.
Securing the build chain, at scale
Karl Greenberg: Can you talk about your role at GitLab?
Josh Lemos: First, when security was incorporated into DevOps and the entire lifecycle of code, it gave us an opportunity to insert security earlier in the build chain. As a CISO, I basically have a meta role in helping companies secure their build pipelines. So not only am I helping GitLab, doing what I would do as CISO for any company in terms of securing our own product software, I am also doing that at scale for thousands of companies.
SEE: What are the implications of Generative AI for Cybersecurity? At Black Hat, Experts Discuss (TechRepublic)
Karl Greenberg: In this ecosystem of repositories, how does GitLab differentiate itself from, say, GitHub?
Josh Lemos: This ecosystem is basically a duopoly. GitHub is oriented more toward source code management and the build phases; GitLab has focused on DevSecOps, or the entire build chain: infrastructure as code and continuous integration, the entire cycle all the way through to production.
Supply chain attacks: Less about ransom, more about persistence
Karl Greenberg: When you look at threat actors’ kill chains within that cycle, the attacks DevSecOps aims to thwart (supply chain attacks exploiting Log4j, for example), this isn’t about some financially motivated actor seeking ransom, is it?
Josh Lemos: That would be one outcome, sure, but ransomware is a pretty finite end game. I think what’s more interesting from an attacker’s perspective is figuring out how to maintain silence, going undetected for a long period of time. Ultimately the goal [for attackers] is to either compromise data or get insights into a company, government or any organization for various reasons; it could be financially motivated, politically motivated or motivated by compromising intellectual property.
Karl Greenberg: Or, when I think of a threat actor maintaining a persistent presence in a network, I suppose that’s what access brokers do.
Josh Lemos: Generally, attackers don’t want to burn their access, so, yeah, they want to keep that persistent access as long as possible. So, going back to the first question, my goal in all of this is to create the environment in which companies can secure their build pipelines effectively, limit access to their secrets and utilize cloud security and CI/CD security controls at scale.
SEE: GitLab CI/CD Tool Review (TechRepublic)
Karl Greenberg: GitHub has been very successful with Copilot adoption. What are GitLab’s generative AI innovations?
Josh Lemos: We have over a dozen AI features, some designed to do things like code generation, an obvious use case; our version of Copilot, for example, is GitLab Duo. Other AI features are very useful for suggesting changes and reviewers for projects: We can look at who has contributed to the project and who might want to review a given change, then make those recommendations using AI. So all of these tools automate the infusion of security into development without developers having to slow down and look for mistakes.
SEE: GitLab Report on DevSecOps: How AI is Reshaping Developer Roles (TechRepublic)
Karl Greenberg: But obviously, you want to do that early because, by the time it’s out in the wild, it’s expensive, and you are dealing with an exposure issue — a live vulnerability.
Josh Lemos: Yes, it’s shift left in terms of tightening the feedback loop early in the process, when the developer goes to commit the code, while they’re still thinking about that piece of code. They get feedback that identifies an issue so they can fix it within their process, and on our platform, so they don’t have to go to an external tool. Also, because of this tight feedback loop, they don’t have to wait for the software to go into production to have a problem identified; it’s caught at the time of the build.
Shift left: Just-in-time, actionable feedback to developers
Karl Greenberg: What key security challenges in the software development process need some sort of security solution beyond the tools you’ve talked about?
Josh Lemos: Generally, I think a lot of the shift-left terminology is really about making sure we can secure the software pipeline regardless of the number of developers involved. We can do that by providing good, actionable and meaningful feedback to developers working in the build and development process. We want this part to be automated as much as possible so that our security teams can do the more insightful work of design and architecture earlier in the process, before it even gets to the part where they’re building and committing code.
Karl Greenberg: Are we talking purely about ML- and AI-driven tools?
Josh Lemos: There’s a mix of tools and capabilities. Some of them are traditional static code analysis tools; some of them are container scanners that look for known CVEs (common vulnerabilities and exposures) in packages. So there’s a mix of AI and non-AI. But there’s a massive opportunity for automation, and whether that’s AI automation or traditional software and CI/CD security automation, it can reduce the level of manual work and effort, which allows you to shift your team to focus on other problems that can’t be automated away yet. And I think that’s the big movement in security teams: How can we go automation first so we can scale and meet the velocity we’re required to meet as a company, and the velocity we need to meet with our engineering teams?
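For readers who want a concrete picture of that mix, the non-AI scanners Lemos describes (static analysis, secret detection, dependency and container scanning for known CVEs) are typically wired into a GitLab pipeline by including the platform’s maintained CI templates in .gitlab-ci.yml. The sketch below is a minimal, illustrative example rather than a prescribed configuration; the build job, stage layout and image tag are assumptions added for demonstration.

```yaml
# Minimal .gitlab-ci.yml sketch. The included security templates are GitLab's
# maintained CI templates; the build job and image tag are illustrative
# assumptions, not a prescribed setup.
stages:
  - build
  - test   # the security templates attach their scan jobs to this stage

include:
  - template: Security/SAST.gitlab-ci.yml                 # static code analysis
  - template: Security/Secret-Detection.gitlab-ci.yml     # committed credentials
  - template: Security/Dependency-Scanning.gitlab-ci.yml  # vulnerable packages
  - template: Security/Container-Scanning.gitlab-ci.yml   # known CVEs in images

# Hypothetical job that builds and pushes the image the container scan inspects.
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

container_scanning:
  variables:
    # Point the scanner at the image built above.
    CS_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

When merge request pipelines are enabled, findings from these scan jobs can surface directly in the merge request view (depending on GitLab tier), which is the tight, in-platform feedback loop Lemos describes above.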