Despite the security concerns surrounding generative AI, those worries haven't stopped organizations from embracing it.
According to a new report from password manager Bitwarden, 78% of developers see such AI tools as a risk to data security, but 83% also said that their organization has already invested in the technology to manage or analyze data.
What’s more, respondents admitted to entering sensitive data into AI platforms, with 30% of this data being developer secrets, followed by customer information (28%), intellectual property (26%), and social security numbers (25%).
AI and security
Close behind these were privileged credentials, legal documentation, and sensitive health data, all at 24%.
Over a fifth of developers also admitted to engaging in risky cyber behavior more generally, such as using public computers to access data related to their work. This is despite 91% of them saying they receive cybersecurity training annually.
But over a third of developers (38%) believe that AI will be the biggest threat to security in five years' time, ahead of ransomware, poor security hygiene, phishing, and social engineering.
Phishing scams and ransomware have been the most devastating cyberattacks this year (in part thanks to AI), affecting numerous industries via attacks on the supply chain, the most infamous being the breach of file transfer service MOVEit, the ripples of which are still being felt.
When it came to priorities for introducing new security measures, protecting customer data came out on top with 24%, followed by integration with existing systems (17%) and meeting compliance standards (15%). Cost implications ranked the lowest, at 9%.
Most developers (94%) also believe that secure-by-design principles are very important when developing software, but over a quarter (26%) think that implementing them takes too much time, and 18% said they do not have enough staff to accomplish it.