The rapid emergence of consumer-available generative AI tools like ChatGPT raises numerous concerns for corporate counsel, including how to protect company trade secrets and safeguard confidential or privileged information.
Some companies are trying to stop employee use of these tools entirely. Earlier this year, JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc. imposed restrictions or bans on employee use of tools like ChatGPT. And in early May, Samsung banned use of generative AI tools outright after its engineers inadvertently leaked internal source code by uploading it to ChatGPT. Other companies, including Amazon, Verizon, and Accenture, have reportedly imposed similar restrictions.
With AI’s rapid entry into the market, a narrow focus on banning employee use of ChatGPT misses the point. More than 2,000 AI applications were released in March and April. Blocking access to them all, and the avalanche that will follow, would essentially require blocking access to the internet.
Moreover, the technology is being built into ubiquitous applications from Microsoft, Google, Salesforce, and Bloomberg. Indeed, large language models are being integrated directly into security and privacy software itself. There is no putting this genie back in the bottle.
The primary governance challenge is more about education than prohibition. With this in mind, the Ad Idem network of corporate counsel created an AI taskforce to support companies as they manage risks and seize AI's potential.
The challenge is akin to phishing—a genuine data security priority that can never be wholly eradicated because email is a fact of commercial life. While the rise of generative AI may be an excellent opportunity to refresh data governance policies, it’s even more of an impetus to update trainings to include how to use the tools safely.
In addition to managing risks presented by the rapid emergence of AI, companies like Ford Motor are seeking to leverage AI to improve how they conduct business internally and through outside counsel. These cutting-edge companies, as well as law firms and bar associations, have been rapidly developing AI taskforces to research the explosion of technology and find the appropriate tools to use for their business.
So where should in-house counsel start? There is no need to reinvent the wheel. Corporate counsel should begin by reviewing and updating existing policies governing use of technology and company information, keeping in mind that these generative AI applications are, ultimately, just tools.
A corporate counsel AI taskforce should proactively recommend or adopt new tools geared to a systems-level approach for improving the efficiency of its work product. This should happen before major consulting firms and C-suite executives forge a company-wide modernization without participation or buy-in from the legal department. Those who resist change initially risk having ill-suited solutions thrust upon the in-house attorney's doorstep. A proactive AI taskforce creates a seat at the table for these high-level discussions. It's better to be on the train than run over by it.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Canby Wood is the global director of litigation solutions at LexFusion, a legal tech accelerator and advisory firm. In 2021, Canby co-founded The Ad Idem Network, a 501(c)(4) legal networking association serving 1,400 corporate counsel around the globe.
Richard Coel is senior counsel for the Washington Metropolitan Area Transit Authority, where he is primarily responsible for litigating complex and employment cases. He co-founded and assists in running the Ad Idem Network.