
Q&A: How one CSO secured his environment from generative AI risks


In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for myriad business and customer-assistance uses.

The Palo Alto, Calif., company turned to OpenAI’s ChatGPT and GitHub’s Copilot coding assistant to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for the company’s client virtual assistant, Ava. Ava, a travel and expense chatbot, offers customers answers to questions and a conversational booking experience. It can also surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.
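
Navan has not published Ava’s internals, but an assistant of this kind typically wraps a chat-completion API behind a domain-specific system prompt. Below is a minimal sketch of that pattern using OpenAI’s Python client; the system prompt, model name, and helper function are illustrative assumptions, not Navan’s actual code.

```python
# Minimal sketch of a travel-and-expense assistant wrapping OpenAI's
# chat-completions API. The system prompt, model name, and helper are
# illustrative assumptions, not Navan's actual implementation of Ava.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a travel and expense assistant. Answer questions about "
    "bookings, travel policy, company spend, and carbon emissions. "
    "If a request needs account data you do not have, say so."
)

def ask_assistant(user_message: str, history: list[dict] | None = None) -> str:
    """Send one user turn to the model and return the assistant's reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat-capable model works
        messages=messages,
    )
    return response.choices[0].message.content

print(ask_assistant("How much has my team spent on flights this quarter?"))
```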

Through genAI, many of Navan’s 2,500 employees have been able to eliminate redundant tasks and create code far faster than if they’d generated it from scratch. However, genAI tools are not without security and regulatory risks. For example, 11% of the data employees paste into ChatGPT is confidential, according to a report from cybersecurity provider Cyberhaven.

[Photo: Navan CSO Prabhath Karanth]

Navan CSO Prabhath Karanth has had to deal with the security risks posed by genAI, including data leaks, malware, and potential regulatory violations.

Navan has a license for ChatGPT, but the company has allowed employees to use their own public instances of the technology — potentially leaking data outside company walls. That led the company to curb leaks and other threats through the use of monitoring tools in conjunction with a clear set of corporate guidelines.

One SaaS tool, for example, flags an employee when they’re about to violate company policy, which has led to greater awareness about security among workers, according to Karanth.

Computerworld spoke to Karanth about how he secured his organization from misuse and intentional or unintentional threats related to genAI. The following are excerpts from that interview.

For what purposes does your company use ChatGPT? “AI has been around a long time, but the adoption of AI in business to solve specific problems — this year it has gone to a whole different level. Navan was one of the early adopters. We were one of the first companies in the travel and expense space that realized this tech is going to be disruptive. We adopted it very early on in our product workflows…and also in our internal operations.”

Product workflows and internal operations. Is that chatbots to help employees answer questions and help customers do the same? “There are a few applications on [the] product side. We do have a workflow assistant called Ava, which is a chatbot powered by this technology. There are a ton of features on our product. For example, there’s a dashboard where an admin can look up information around travel and expenses related to their company. And internally, to power our operations, we’ve looked at how we can expedite software development from a development organization perspective. Even from a security perspective, I’m very closely looking at all my tooling where I want to leverage this technology.

“This applies across the business.”

I’ve read of some developers who used genAI technology and think it’s terrible. They say the code it generates is sometimes nonsensical. What are your developers telling you about the use of AI for writing code? “That’s not been the experience here. We’ve had very good adoption in the developer community here, especially in two areas. One is operational efficiency; developers don’t have to write code from scratch anymore, at least for standard libraries and development stuff. We’re seeing some very good results. Our developers are able to get to a certain percentage of what they need and then build on top of that.


“In some cases, we do use open-source libraries — every developer does — and so getting that open-source library to the point where we can build on top of it is another avenue where this technology helps.

“I think there are certain ways to adopt it. You can’t just blindly adopt it. You can’t adopt it in every context. The context is key.”

[Navan has a group it calls “a start-up within a start-up” where new technologies are carefully integrated into existing operations under close oversight.]

Do you use tools other than ChatGPT? “Not really in the business context. On the developer’s side of the house, we also use GitHub Copilot to a certain extent. But in a non-developer context, it’s mostly OpenAI.”

How would you rank AI in terms of a potential security threat to your organization? “I wouldn’t characterize it as lowest to highest, but I would categorize it as a net new threat vector that you need an overall strategy to mitigate. It’s about risk management.

“Mitigation is not just from a technology perspective. Technology and tooling is one aspect, but there also must be governance and policies in terms of how you use this technology internally and productize it. You need a people, process, and technology risk assessment, and then you mitigate that risk. Once you have that mitigation policy in place, you’ve reduced the risk.

“If you don’t do all of that, then yes, AI is the highest-risk vector.”

What kinds of problems did you run into with employees using ChatGPT? Did you catch them copying and pasting sensitive corporate information into prompt windows? “We always try to stay ahead of things at Navan; it’s just the nature of our business. When the company decided to adopt this technology, as a security team we had to do a holistic risk assessment…. So I sat down with my leadership team to do that. The way my leadership team is structured is, I have a leader who runs product platform security, which is on the engineering side; then we have SecOps, which is a combination of enterprise security and DLP [data loss prevention] detection and response; then there’s a governance, risk, compliance, and trust function, which is responsible for risk management, compliance, and all of that.

“So, we sat down and did a risk assessment for every avenue of the application of this technology. We did put in place some controls, such as data loss prevention to make sure even unintentionally there is no exploitation of this technology to pull out data — both IP and customer [personally identifiable information].

“So, I’d say we stayed ahead of this.”

Did you still catch employees intentionally trying to paste sensitive data into ChatGPT? “The way we do DLP here is it’s based on context. We don’t do blanket blocking. We always catch things, and we run it like an incident. It could be insider risk or external; then we involve legal and HR counterparts. This is part and parcel of running a security team. We’re here to identify threats and build protections against them.”
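
Karanth doesn’t detail the detection logic, but context-based DLP of the kind he describes generally scores an event on what the data is, where it is going, and who is moving it, rather than blocking a destination outright. Here is a simplified sketch of that decision flow; the event fields, rules, and classifications are hypothetical illustrations, not Navan’s or any vendor’s actual logic.

```python
# Simplified sketch of context-based DLP decisioning as described above.
# Event fields, rules, and classifications are hypothetical illustrations,
# not Cyberhaven's or Navan's actual logic.
from dataclasses import dataclass

@dataclass
class DataMovementEvent:
    user: str
    data_classification: str      # e.g. "public", "internal", "pii", "source_code"
    destination: str              # e.g. "chat.openai.com", "internal-wiki"
    destination_sanctioned: bool  # a company-approved instance or not

def decide(event: DataMovementEvent) -> str:
    """Return 'allow', 'warn', or 'block' based on the event's context."""
    sensitive = event.data_classification in {"pii", "source_code"}
    if sensitive and not event.destination_sanctioned:
        return "block"  # hard stop; triage as an incident
    if sensitive:
        return "warn"   # sanctioned destination: nudge the user, don't block
    return "allow"      # no blanket blocking of the destination itself

event = DataMovementEvent(
    user="dev@example.com",
    data_classification="source_code",
    destination="chat.openai.com",
    destination_sanctioned=False,
)
print(decide(event))  # -> block
```

The "warn" branch corresponds to the behavior Karanth credits with raising security awareness: the employee is flagged at the moment of the risky action rather than silently blocked.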

Were you surprised at the number of employees pasting corporate data into ChatGPT prompts? “Not really. We were expecting it with this technology. There’s a huge push across the company overall to generate awareness around this technology for developers and others. So, we weren’t surprised. We expected it.”


Are you concerned about genAI running afoul of copyright infringement as you use it for content creation? “It’s an area of risk that needs to be addressed. You need some legal expertise there. Our in-house counsel and legal team have dug fully into this, and there is guidance, and we have all of our legal programs in place. We’ve tried to manage the risk there.”

[Navan has focused on communication between its privacy, security, and legal teams and its product and content teams on new guidelines and restrictions as they arise, and there has been additional training for employees around those issues.]

Are you aware of the issue around ChatGPT creating malware, intentionally or unintentionally? And have you had to address that? “I’m a career security guy, so I keep a very close watch on everything going on in the offensive side of the house. There’s all kinds of applications there. There’s malware, there’s social engineering that’s happening through generative AI. I think the defense has to constantly catch up and keep up. I’m definitely aware of this.”

How do you monitor for malware if an employee is using ChatGPT to create code, and how do you stop something like that from slipping through? Do you have software tools, or do you require a second set of eyes on all newly created code? “There are two avenues. One [is] around making sure whatever code we ship to production is secure. And then the other is the insider risk — making sure any code that is generated doesn’t leave Navan’s corporate environment. For the first piece, we have a continuous integration, continuous deployment [CI/CD] automated code-deployment pipeline, which is completely secured. Any code that gets shipped to production, we have static code analysis running on that at the integration point, before developers merge it to a branch. We also have software composition analysis for any third-party code that’s injected into the environment. In addition to that, we also harden the CI/CD pipeline itself; everything from merge to branch to deployment is hardened.

“In addition to all of this, we also have runtime API testing and build-time API testing. We also have a product security team that [does] threat modeling and design review for all the critical features that get shipped to production.

“The second part — the insider risk piece — goes back to our DLP strategy, which is data detection and response. We don’t do blanket blocking, but we do block based on context — based on a lot of context areas…. Our detections have been highly accurate, and we’ve been able to protect Navan’s IT environment.”
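
The pipeline Karanth outlines above (static analysis at the merge point, software composition analysis for third-party code, and a hardened path from merge to deployment) can be pictured as a series of gates, any one of which can fail the build. A schematic sketch follows; the specific tools shown (Semgrep for SAST, pip-audit for SCA) are common examples, not Navan’s disclosed stack.

```python
# Schematic sketch of CI/CD security gates: static analysis (SAST) before
# merge and software composition analysis (SCA) on third-party dependencies.
# The tools and commands are common examples, not Navan's disclosed stack.
import subprocess
import sys

GATES = [
    # (gate name, command); each command exits nonzero on findings
    ("sast", ["semgrep", "scan", "--config", "auto", "--error", "."]),
    ("sca", ["pip-audit", "--requirement", "requirements.txt"]),
]

def run_gate(name: str, cmd: list[str]) -> bool:
    """Run one security gate; a nonzero exit code fails that gate."""
    result = subprocess.run(cmd)
    passed = result.returncode == 0
    print(f"gate {name}: {'pass' if passed else 'FAIL'}")
    return passed

def main() -> None:
    # Run every gate so developers see all findings, then fail the build
    # if any gate failed; the merge/deploy step never runs on failure.
    results = [run_gate(name, cmd) for name, cmd in GATES]
    if not all(results):
        sys.exit("security gates failed; blocking merge and deployment")
    print("all gates passed; proceeding to deployment")

if __name__ == "__main__":
    main()
```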

Can you talk about any particular tools you’ve been using to bolster your security profile against AI threats? “Cyberhaven, definitely. I’ve used traditional DLP technologies in the past, and sometimes the noise-to-signal ratio can be high. What Cyberhaven allows us to do is put a lot of context around the monitoring of data movement across the company — anything leaving an endpoint. That includes endpoint to SaaS, endpoint to storage, so much context. This has significantly improved our protection and our monitoring of data movement and insider risk.


“[It’s] also hugely important in the context of OpenAI…. This technology has helped us tremendously.”

Speaking of Cyberhaven, a recent report from the company showed about one in 20 employees has pasted confidential company data into ChatGPT alone, never mind other in-house AI tools. When you’ve caught employees doing it, what kinds of data were they typically copying and pasting that would be considered sensitive? “To be honest, in the context of OpenAI, I haven’t really identified anything significant. When I say significant, I’m referring to customer [personally identifiable information] or product-related information. Of course, there have been several other insider-risk instances where we had to triage, get legal involved, and do all the investigations. Specifically with OpenAI, I’ve seen it here and there where we blocked it based on context, but I cannot remember any massive data leak there.”

Do you think general-purpose genAI tools will eventually be overtaken by smaller, domain-specific, internal tools that are better suited to specific uses and more easily secured? “There’s a lot of that going on right now — smaller models. But I don’t think OpenAI will be overtaken. If you look at how OpenAI is positioning their technology, they want it to be a platform on which these smaller or larger models can be built.

“So, I feel like there will be a lot of these smaller models created because of the compute resources larger models consume. Compute will become a challenge, but I don’t think OpenAI will be overtaken. They’re a platform that offers you flexibility over how you want to develop and what size platform you want to use. That’s how I see this continuing.”

Why should organizations trust that OpenAI or other SaaS providers of AI won’t be using the data for purposes unknown to you, such as training their own large language models? “We have an enterprise agreement with them, and we’ve opted out of [having our data used for training]. We got ahead of that from a legal perspective. That’s very standard with any cloud provider.”

What steps would you advise other CSOs to take in securing their organizations against the potential risks posed by generative AI technology? “Start with the people, process, technology approach. Do a risk analysis assessment from a people, process, technology perspective. Start with an overall, holistic risk assessment. And what I mean by that is look at your overall adoption: Are you going to use it in your product workflows? If you are, then you have to have your CTO and engineering organization as key stakeholders in this risk assessment.

“You, of course, need to have legal involved. You need to have your security and privacy counterparts involved.

“There are also several frameworks already offered to do these risk assessments. NIST published a framework [the AI Risk Management Framework] to do a risk assessment around adoption of this, which addresses just about every risk you need to be considering. Then you can figure out which one is applicable to your environment.

“Then have a process to monitor these controls on an ongoing basis, so you’re covering this end-to-end.”


