
Company policies lag behind workforce adoption of generative AI – CIO Dive



Dive Brief:

  • More than half of employees are using generative AI tools at work, according to a survey of nearly 1,100 U.S. workers published earlier this month by the Conference Board. 
  • However, only 26% of organizations currently have a policy governing generative AI use in place, and roughly one-quarter of surveyed organizations are working toward establishing one.
  • Employees mostly use generative AI at work to draft written content, brainstorm ideas and conduct background research. Less frequent applications include technical tasks, such as analyzing data, generating code or creating images, according to the report.

Dive Insight:

The benefits of generative AI draw in employees, executives and enterprises, but adopting the technology carries risks, especially without clear guidelines.

Feeding generative AI sensitive data presents privacy and security risks. Inaccuracies can create bigger headaches. Faulty code generated by an AI tool can compromise enterprise applications. Incorrect information in reports and memos can also cause problems, even in a first draft.

Experts told CIO Dive earlier this year that policies should give employees guidance on what can be entered into a model, how outputs should be used, the ethical implications of use, security risks and model limitations.

Catching false information can be even harder if an employee doesn’t expect the model to generate it. Nearly half of respondents say the quality of generated output equates to that of an experienced worker, according to the survey.

As generative AI enthusiasm grows among employees, companies need to step in with clear policies. Federal and state officials are already scrutinizing the ethical implications of the technology, and California is expected to set guidelines for public sector generative AI by January 2024.



