Executives fear accidental sharing of corporate data with ChatGPT: Report


Writer, a generative AI platform for enterprises, has released a report revealing that almost half (46%) of senior executives (directors and above) suspect their colleagues have unintentionally shared corporate data with ChatGPT. This troubling statistic highlights the necessity for generative AI tools to safeguard companies’ data, brand, and reputation.

The State of Generative AI in the Enterprise report found that ChatGPT is the most popular chatbot in use amongst enterprises, with CopyAI (35%) and Anyword (26%) following closely behind as the second and third most commonly used. However, many companies have banned the use of generative AI tools in the workplace, with ChatGPT being the most frequently banned (32%), followed by CopyAI (28%) and Jasper (23%).

“There is so much hype around generative AI today that we wanted to get to the actuals — who’s using it, what tools they’re using, what they’re doing with it, and what limitations and restrictions enterprises have in place,” Waseem Alshikh, Writer cofounder and CTO told VentureBeat. “The findings were eye-opening for sure. Virtually every industry is, at the least, experimenting with generative AI, and it’s not just siloed within one function in an organization. Usage of generative AI spans IT, operations, marketing, HR, legal, L&D … you name it.”

Most common generative AI uses

According to the survey, the most common applications of generative AI are producing concise text for advertising and headings (31%), repurposing pre-existing content for various media and channels (27%), and creating extensive pieces of content such as blogs and knowledge base articles (25%).

“AI saves writers like marketers, UX designers, editors, customer service professionals and others tons of time generating new content from scratch,” said Alshikh. “But the real value comes in the other parts — the tedious parts — of the content development process: repurposing, analyzing, researching, transforming and even distributing content. That stuff kills you when you’re busy and trying to move fast, and generative AI can take care of it automatically.”

Writer conducted the survey with more than 450 enterprise executives working in organizations with more than 1,000 employees. The survey was carried out via survey platform Pollfish between April 13 and April 15, 2023.

Use of generative AI in the workplace: Boon or bane? 

The survey yielded significant findings, with one key discovery being that almost all organizations are employing generative AI in various functions, with information technology (30%), operations (23%), customer success (20%), marketing (18%), support (16%), sales (15%) and human resources (15%) being the most common areas of implementation.

According to the report, 59% of the respondents said their company has either already purchased or plans to buy a generative AI tool this year. In addition, nearly one-fifth (19%) of respondents indicated that their company currently uses five or more generative AI tools. Moreover, 56% of respondents said generative AI increases productivity by at least 50%, while 26% reported that it boosts productivity by 75% or more.

“It was surprising that construction and IT (16%) were among the top industries using generative AI,” Alshikh told VentureBeat. “They were followed by finance and insurance (8%), scientific and technical service (8%) and manufacturing (5%). At Writer specifically, we’re seeing much usage in finance and insurance.”

Alshikh believes ChatGPT is valuable for most people, as it is free, easy to use, and suitable for general purposes. However, the tool’s limitations, such as its limited dataset, inaccuracies, hallucinations, bias and data privacy concerns, are widely acknowledged.

“ChatGPT itself recognizes that it isn’t particularly accurate,” said Alshikh. “Enterprises need more than the ability to generate creative stories and sonnets — they must protect their brand and reputation. Unfortunately, ChatGPT and others like it are leading to a rise in incorrect information, a major issue for enterprises that must rely on the accuracy and brand consistency above anything.”

New Writer product features

The company recently announced new product features aimed at providing its enterprise customers with the highest levels of accuracy, security, privacy and compliance at every stage, from data sources to the surfaces where people work. These features include a self-hosted large language model (LLM), allowing customers to host, operate, and customize their LLM on-premises or in their own cloud.
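In practical terms, a self-hosted deployment means the model weights and the inference process run entirely on infrastructure the customer controls, so prompts and documents never transit a third-party API. The following is a minimal sketch of that general pattern, using an open-weight model and the Hugging Face transformers library purely as illustrative assumptions; it is not Writer's own stack or API.

```python
# Minimal sketch of the self-hosting pattern: an open-weight model runs on
# hardware the company controls, so prompts and internal documents stay local.
# The library, model name, and prompt here are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any locally hosted open-weight model
)

prompt = "Summarize this quarter's support tickets in three bullet points."
output = generator(prompt, max_new_tokens=200, do_sample=False)
print(output[0]["generated_text"])
```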

Additionally, the company has introduced Knowledge Graph on the Writer platform, which allows customers to index and access any data source, from Slack to a wiki to a knowledge base to a cloud storage instance.
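Conceptually, a connector-plus-index layer like this follows a familiar retrieval pattern: content from each source is ingested into a searchable index, and the most relevant passages are pulled back out to ground the model's answers. The toy sketch below illustrates that general pattern with hypothetical class names and a naive keyword score; it is not Writer's Knowledge Graph API.

```python
# Toy illustration of indexing documents from multiple internal sources and
# retrieving the most relevant ones to ground an LLM's answer. All names and
# the scoring method are hypothetical, shown only to convey the pattern.
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "slack", "wiki", "cloud-storage"
    text: str

class SimpleIndex:
    def __init__(self) -> None:
        self.docs: list[Document] = []

    def add(self, doc: Document) -> None:
        self.docs.append(doc)

    def search(self, query: str, top_k: int = 3) -> list[Document]:
        # Rank documents by naive keyword overlap with the query.
        terms = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(terms & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

index = SimpleIndex()
index.add(Document("wiki", "Our refund policy allows returns within 30 days of purchase."))
index.add(Document("slack", "Support agreed to extend the refund window for enterprise accounts."))

context = index.search("what is our refund policy")
# The retrieved passages would then be passed to the LLM as grounding context.
print([d.source for d in context])
```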

“We offer enterprises complete control – from what data LLMs can access to where that data and LLM is hosted,” May Habib, Writer CEO and cofounder said in a written statement. “If you don’t control your generative AI rollout, you certainly can’t control the quality of output or the brand and security risks.”

Key considerations to mitigate generative AI’s risks 

Alshikh stated that commercial models like ChatGPT typically gather intelligence from various public sources, which can be beneficial for creativity but detrimental to brand consistency.

He added that enterprise leaders have become aware of the benefits of implementing generative AI for a competitive edge throughout their businesses. However, they also recognize the risks of utilizing free chatbots like ChatGPT, including the possibility of generating inaccurate content and exposing confidential data.

“That’s why our goal at Writer is to move past the novelty use cases and deliver real impact to businesses,” he explained. “We’re already solving problems related to accuracy and privacy, and our technology is being deployed across highly-regulated industries, including technology, healthcare and financial services for customers like Intuit and UnitedHealthcare.”

Given its popularity, he suggests that companies consider whether ChatGPT or any tool built on an OpenAI foundation fits with their data privacy, brand and regulatory policies. Additionally, he advises companies to collect functional use cases and requirements to evaluate alternatives.

“If an organization has already developed a policy on using ChatGPT, they should consider implementing an ongoing communication and training plan so everyone knows which tools are safe to use and how to use them without exposing sensitive company data,” said Alshikh. “Enterprise executives need to ask themselves important questions like: Is it secure? Does it protect our company data? Does it let us customize output based on our brand, style, messages and company facts? And can it be integrated into our business workflows?”
