A new Salesforce survey of more than 14,000 workers worldwide has uncovered some serious cybersecurity blunders stemming from the use of generative AI at work.
The study found that the majority (84%) of UK workers have yet to receive formal AI training from their employers, despite the technology already being widely deployed.
Salesforce also revealed that more than half (55%) of all workers have used unapproved generative AI tools at work, highlighting the need both for suitable tools and for clearer policies.
Generative AI is a cybersecurity risk
Despite this widespread unapproved use, when workers were asked what safe use means, the most common answer was ‘only using company-approved generative AI tools/programs’. Workers also cited keeping confidential company data and personally identifiable customer data out of their GenAI prompts.
Fewer, however, said they would check with their IT department before using such tools.
On the ethics front, most workers said they are concerned about the accuracy of AI outputs, suggesting that an additional layer of fact-checking should be applied to the results. Even so, nearly two-thirds (64%) admitted to passing off work produced by generative AI as their own.
Salesforce says the blame doesn’t lie entirely with workers, though: only around one in five (21%) of the companies surveyed globally had a clearly defined policy on generative AI use, while 37% had no policy at all and a further 15% had issued only loosely defined guidelines.
Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, said: “To realize AI’s full potential, it’s critical that we invest in the employees using the technology as much as the technology itself. With clear guidelines, employees will be able to understand and address AI’s risks while also harnessing its innovations to supercharge their careers.”
Looking ahead, with generative AI use expected to rise and workers beginning to feel its benefits, companies are being urged to put clear policies in place and train their workers before it’s too late.