Are we moving too fast with AI? This is a central question both inside and outside the tech industry, given the recent tsunami of attention paid to ChatGPT and other generative AI tools.
Nearly all tech companies are moving to incorporate AI into their offerings, and industry luminaries are weighing in.
Elon Musk, who is never shy about advancing personal opinions, thrust himself into the conversation by signing an open letter suggesting that all advanced AI development should halt for six months. This letter asked, among other things:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Straw men aside, I think we all agree the answer should be no. (Although I wonder how Musk reconciles that stance with his other position that we are almost certainly living in a digital simulation and are all instances of artificial intelligence ourselves.)
Tech leaders have been imagining worst-case scenarios stemming from runaway AI for decades. Even Bill Gates, who opposes a pause and did not sign the open letter, remarked in 2015 that sufficiently intelligent machines could drive humans to extinction.
But aren’t we getting a little ahead of ourselves?
After being invited to try Google’s new AI, Bard, I decided to test it with questions designed to push it a little outside its comfort zone.
“Which three common English words start with the letters D and W?” I asked. Bard told me there were no common English words that start with D and W.
I then asked, in three more questions, if dwarf, dwindle, and dwell were common English words. Bard agreed in three consecutive answers that they were. At no time did it recognize its error. Woosh.
None of the signatories of the open letter (I assume) believe AI is ready to “control our civilization” in its present state. But I would argue that even these nascent AI engines are already introducing some very real risks: risks to individual data privacy and to organizations’ compliance posture.
Data privacy is today’s concern
We may one day have to grapple with AI-induced consequences like ubiquitous disinformation or mass unemployment due to automation. We may even witness the worst AI doomsday thought experiments come to fruition soon. For now, though, there’s a decidedly less sexy (if just as potentially ruinous for companies) concern on our doorstep.
Consider Samsung Semiconductor, which recently decided to allow engineers to use ChatGPT for business purposes. Once the engineers came to see ChatGPT as helpful and accurate, they began to trust it with more internal information than they should have.
Among other info, they forked over: (1) the code for a different AI under development by a Samsung business partner, (2) test sequences intended to identify defective chips, and (3) an internal meeting recording.
One can understand why engineers might have seen all three examples as areas where ChatGPT could be of assistance. But ChatGPT is not owned or controlled by Samsung, and such information shouldn’t have come anywhere near it.
The AI functionality, in other words, was sufficiently compelling that Samsung engineers simply forgot it was hosted externally and that they were violating Samsung policy by giving it inside information. Artificial intelligence and human error combined to create a major infosec breach (no matter the quality of ChatGPT’s answers).
One wonders how many petabytes of sensitive or proprietary information have similarly been donated to ChatGPT’s creator, OpenAI, in the months since its release. One also wonders how much bigger that problem will become as AI becomes more pervasive, capable, familiar, and widely used for personal and business purposes.
No rest for the regulators (pause or no pause)
Oversharing by employees eager to reap the assistive benefits of generative AI tools is one concern. Regulators may be a larger one.
Italy has already banned ChatGPT over fears it could compromise the data privacy of its citizens. Germany is reportedly considering following suit. If the EU adopts a bloc-wide ban, OpenAI could save itself a fortune in fines: since 2021, tech giants including Amazon, Google, and Meta have paid hundreds of millions of euros in GDPR penalties. Clearview AI, a company that uses artificial intelligence to develop facial recognition software, has already been fined by regulators in France, Italy, and Greece.
The same regulatory retribution could come for companies using (or allowing the use of) the large language model (LLM)-based conversational AI platforms taking the internet by storm today. Large-scale data leaks enabled by tools like ChatGPT or Google’s Bard won’t require the planning of criminal masterminds akin to today’s hacking groups. They will happen organically as well-meaning employees try to harness the enormous productivity potential of these technologies for their own applications.
Balancing performance with risk
From a security perspective, it’s both appealing and daunting to imagine an ultra-smart, cloud-hosted, security-specific AI beyond anything available today.
In particular, the sheer speed offered by an AI-powered response to security events is appealing. And the potential for catastrophic mistakes and their business consequences is daunting.
As an industry observer, I often see this stark dichotomy reflected in marketing, like that of the recently launched Microsoft Security Copilot. One notices Microsoft’s velocity-driven pitch: “triage signals at machine speed” and “respond to incidents in minutes, instead of hours or days.”
But one also notices the cautious conservatism of the product name: it’s not a pilot, merely a copilot. Microsoft doesn’t want people getting the idea that this tech can, all by itself, handle the complex job of creating and executing a company’s cybersecurity strategy.
That, it seems to me, is the approach we should all take to these tools, while carefully considering what types of data can and should be fed to them. Just as regulators in the US, UK, and EU will devise standards for AI use, organizations must establish and communicate acceptable use cases for these technologies. CISOs and CIOs will want to begin considering the data privacy implications of generative AI, and formalizing policies around its use, immediately.
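To make that concrete, here is a minimal sketch, in Python, of the kind of guardrail such a policy might back: a pre-submission check that blocks prompts containing obvious markers of sensitive data before they ever leave the network. The pattern names and the sample prompt are hypothetical illustrations, not drawn from any vendor’s product, and a real deployment would rely on a vetted data loss prevention service rather than a handful of regexes.

```python
import re

# Hypothetical, illustrative patterns only; a real deployment would use a
# vetted data loss prevention (DLP) or data classification service.
BLOCK_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_marker": re.compile(r"\b(?:confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block the prompt if any sensitive pattern matches."""
    reasons = [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(prompt)]
    return (not reasons, reasons)

if __name__ == "__main__":
    prompt = "Please review this test sequence. INTERNAL ONLY: chip defect thresholds ..."
    allowed, reasons = check_prompt(prompt)
    if allowed:
        print("Prompt cleared for the external AI service.")  # the LLM API call would go here
    else:
        print("Blocked before leaving the network: " + ", ".join(reasons))
```

A check like this won’t catch everything, but it turns an abstract acceptable-use policy into something that intercepts the most obvious mistakes at the moment an employee is about to make them.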
The promise of conversational AI to assist with tedious tasks or augment human workforces is astronomical. But first, we’ll have to navigate a series of unknown dangers. And I’m not talking about a rogue paperclip machine bent on global domination.
Yet.