With the marketplace awash in new artificial intelligence (AI) tools and sparkling new AI features being added to existing tools, organizations are finding themselves lacking visibility into what AI tools are in use, how they are used, who has access, and what data is being shared. Data from Nudge Security shows organizations have an average of six AI tools in use, with ChatGPT and Jasper.ai leading the way in adoption.
As businesses try, adopt, and abandon new generative AI tools, it falls on enterprise IT, risk, and security leaders to govern and secure their use without hindering innovation. While developing security policies to govern AI use is important, those policies cannot be written or enforced without first knowing what tools are actually in use.
The chart, above, shows how widespread ChatGPT (OpenAI.com) adoption is among enterprises, and plenty of other contenders are scrambling for mindshare. Some of these AI tools are not as well known as ChatGPT, such as rytr.me and wordtune.com, but security teams still have to be aware of them and create policies governing their use. Huggingface.co is another fairly well-known AI tool that sits solidly in the middle of the pack.
Enterprise security teams have to consider how to handle discovery – learning which generative AI tools have been introduced into the environment and by whom – as well as risk assessment. Business users frequently set up experimental accounts to try out these services and then abandon them, so it is important that security teams track those accounts and make sure they are properly deactivated.
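As a rough illustration of the discovery step, the sketch below checks sign-up or identity-provider events against a watchlist of known generative AI domains (the ones named in this article). The event schema, field names, and watchlist here are hypothetical simplifications, not the output of any real product or API:

```python
# Hypothetical discovery sketch: flag sign-ups to known generative AI
# SaaS domains in an export of identity-provider or email sign-up events.
# The watchlist and event format are illustrative assumptions.

AI_TOOL_DOMAINS = {
    "openai.com": "ChatGPT",
    "jasper.ai": "Jasper",
    "rytr.me": "Rytr",
    "wordtune.com": "Wordtune",
    "huggingface.co": "Hugging Face",
}

def discover_ai_signups(events):
    """Return {tool_name: [user_emails]} for events matching the watchlist.

    Each event is assumed to be a dict like:
        {"user": "alice@example.com", "domain": "chat.openai.com"}
    """
    findings = {}
    for event in events:
        domain = event.get("domain", "").lower()
        # Match the exact domain or any subdomain (e.g. chat.openai.com).
        for watched, tool in AI_TOOL_DOMAINS.items():
            if domain == watched or domain.endswith("." + watched):
                findings.setdefault(tool, []).append(event["user"])
    return findings

if __name__ == "__main__":
    sample = [
        {"user": "alice@example.com", "domain": "chat.openai.com"},
        {"user": "bob@example.com", "domain": "jasper.ai"},
        {"user": "carol@example.com", "domain": "example.org"},
    ]
    print(discover_ai_signups(sample))
```

In practice, the same matching idea would run against whatever telemetry an organization already has – OAuth grant logs, email metadata, or network logs – and the output would feed the risk-assessment and account-offboarding steps described above.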