
How to get a handle on shadow AI


CIOs and CISOs have long grappled with the challenge of shadow IT: technology that is used within an enterprise without the official sanction of the IT or security department. According to Gartner research, 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022, and that number was expected to climb to 75% by 2027. Shadow IT can introduce a whole host of security risks, for one primary reason: You can’t protect what you don’t know about.

Not surprisingly, we are seeing a similar phenomenon with AI tools. Employees are increasingly experimenting with the likes of ChatGPT and Google Bard to do their jobs. And while that experimentation and creativity can be a good thing, the problem is that these tools are being used without IT or security’s knowledge.

This leads to the challenge CISOs and other leaders face: How do you enable employees to use their preferred AI tools while also mitigating potential risks to the organization and ensuring they don’t create cybersecurity nightmares?

The rise of shadow AI

It’s little wonder that employees want to use generative AI, machine learning, and large language models. These technologies bring multiple benefits, including the potential to improve process efficiency, personal productivity, and even customer engagement relatively quickly.

There are many areas where applying AI makes a great deal of sense, including within security processes themselves: assisting SOC operations, reducing engineers’ workload and monotony, and more. It’s really about process efficiency, and similar improvements are coming for other areas, industries, functions, and organizations across the board.

It’s easy to understand the benefits.

However, what often happens is that employees begin using these tools without going through the proper channels. They simply pick the tools they think will work, or the ones they’ve heard of, and put them to use. Because there’s no organizational buy-in around the use case, IT never gets the chance to identify the appropriate tools, or to determine when it is and isn’t appropriate to use them. This can lead to a lot of risk.


Understanding the risks of unsanctioned tools

When it comes to shadow AI, the risks come in a few different forms. Some of these risks apply to AI tools overall, sanctioned or not.

Information integrity is a risk. There are currently no regulations or standards for AI-based tools, so you may end up with a “garbage in, garbage out” problem; you can’t trust the results you get. Bias is another risk. Depending on how an AI model is trained, it can pick up historical biases, leading to skewed or inaccurate results.

Information leakage is also a concern. Proprietary information often gets fed into these tools, and there’s no way to get that data back. This can run afoul of regulations like GDPR, which demands strict data privacy and transparency for EU citizens. And the EU’s new AI Act aims to regulate AI systems closely. Failure to comply with such laws opens your company up to further risk, whether corporate leadership knows about employees’ AI use or not.

Future compliance requirements are one of the “unknown unknowns” organizations must contend with today. Society has known for some time that this risk was coming, but it is now evolving much more quickly, and few organizations are truly prepared for it.

These challenges are further complicated when employees are using AI tools without IT and security leaders being fully in the loop. It becomes impossible to prevent or even mitigate the risk of information integrity and information leakage issues if IT or security isn’t aware of what tools are being used or how.


An AI compromise to stop rogue behavior

AI technology and adoption are evolving at breakneck speed. From an IT and security leadership perspective, you have two options. One is to ban AI use entirely and find a way to enforce that restriction. The other is to embrace AI and find ways to manage its risks.

The knee-jerk reaction of security teams is to block AI use across the organization. This approach is almost surely destined to fail. Employees will find ways around restrictions, and a blanket ban also leaves them frustrated that they can’t use the tools they believe best help them get the job done.
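To make the first option concrete: network-level blocking usually means denying traffic to known AI endpoints at a corporate proxy or firewall. Below is a minimal sketch using a mitmproxy addon; the domain list is illustrative rather than a vetted inventory, and the whole approach assumes employee traffic actually flows through a proxy you control.

```python
# block_ai.py -- minimal mitmproxy addon sketch that blocks requests to
# known generative AI endpoints. Run with: mitmproxy -s block_ai.py
# The domain list is illustrative, not exhaustive.
from mitmproxy import http

BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
}

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host.lower()  # request host, minus the port
    if any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS):
        flow.response = http.Response.make(
            403,
            b"Access to generative AI services is restricted by policy.",
            {"Content-Type": "text/plain"},
        )
```

Note what the sketch cannot do: it sees nothing from personal phones, home networks, or any device that bypasses the proxy, which is exactly why blanket bans tend to fail.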

So, the second option is better for most organizations, but it must be pursued carefully and conscientiously. For one thing, as noted earlier, there is currently little external regulation and no clear standard to look to for guidance. Even the regulations that do exist will always lag a bit behind; technology simply evolves faster than compliance can keep up.

Best practices for enabling safer use of AI tools

A good place to start is to understand which AI tools would be worthwhile to deploy for your organization’s use cases. Look for vendors already in the space and get demos. When you’ve found the tools you need, create guiding principles for their use.

Getting insight into the use of these tools may be a challenge, depending on the maturity of your organization’s IT and security posture. Perhaps you have robust endpoint controls that prevent individuals from being administrators on their own laptops, or you have solutions like data loss prevention (DLP) in place to monitor what data leaves the organization.
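Even without a commercial DLP product, a lightweight rule-based check on outbound prompts can catch the most obvious leaks. The sketch below assumes prompts pass through a gateway or helper you control before they reach an AI service; the patterns and function names are illustrative, and real DLP tooling is considerably more thorough.

```python
# A minimal, rule-based sketch of a DLP-style check on outbound AI prompts.
# Patterns are illustrative; real DLP products are far more sophisticated.
import re

SENSITIVE_PATTERNS = {
    "email_address":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked prompt; matched patterns: {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    assert is_safe_to_send("Summarize our Q3 roadmap themes.")
    assert not is_safe_to_send("Customer jane.doe@example.com reported a bug.")
```

A gateway like this also gives you the visibility the rest of this article calls for: every prompt that is blocked, or allowed, is something IT and security now know about.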

Having a set of guidelines and rules in place can be very helpful. These should address factors like privacy, accountability, transparency, fairness, safety, and security. This step is key to ensuring your organization uses AI responsibly.


Educating your employees is another crucial best practice. As with cybersecurity, an informed staff forms the first line of defense against improper AI use and its risks. Make sure employees are thoroughly versed in the AI usage guidelines you have created.

Bring AI out of the shadows

It’s likely that at least some of your employees are currently using non-sanctioned AI tools to assist with their jobs. This can create a major headache from a security and risk perspective. At the same time, you want to ensure your employees can use the tools they need to perform at their peak. The best bet for getting ahead of shadow AI’s risks is to allow the use of tools proven to be safe, and to require employees to use them within the guidelines you’ve created. This alleviates employee frustration while reducing organizational risk.

Kayla Williams is CISO at Devo.

Generative AI Insights provides a venue for technology leaders to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.



