This month, we saw top government officials meet with leading tech executives, including the Alphabet and Microsoft CEOs, to discuss advancements in AI and Washington’s involvement. But as quickly as ChatGPT, Bard and other well-known generative AI models are advancing, American businesses must understand that malicious actors representing the world’s most successful hacking groups and most aggressive nation-states are building generative AI tools of their own, and they won’t stop for anything.
There’s ample reason for experts to be concerned about the overwhelming speed with which generative AI could transform technology, medicine, education, agriculture and nearly every other industry, not only in America but around the world. Movies like The Terminator provide plenty of (fictional) precedent for fearing a runaway AI, and those fears sit alongside more realistic concerns like AI-induced mass layoffs.
But it’s exactly because AI has the power to revolutionize society as we know it that America cannot afford a private or government-ordered pause on its development; such a pause would cripple our ability to defend individuals and businesses from our enemies. Because AI development moves so quickly, any delay regulators impose would set us back dramatically against adversaries who continue developing their own AI.
AI advances quickly, government regulates slowly
Regulators aren’t used to moving at the speed AI demands, and even if they were, there’s no guarantee it would change how effectively we can use AI to defend ourselves from adversaries. For example, legislators have tried for decades to regulate and penalize the recreational drug trade in America, but criminals pushing dangerous, illicit substances don’t follow those rules; they’re criminals, so they don’t care. Our geopolitical rivals will behave the same way, disregarding any attempt America makes to place guardrails around AI development.
In the past eight months, hackers have claimed to be developing or investing heavily in artificial intelligence, and researchers have already confirmed that attackers can use OpenAI’s tools to aid their hacking. How effective these methods currently are, and how advanced other nations’ AI tools are, matters less than the fact that they are being developed and will certainly be used for malicious purposes. Because these attackers and nations won’t adhere to any moratorium we place on AI development in America, our country cannot afford to pause its research, or we risk falling behind our adversaries in multiple ways.
In cybersecurity, we’ve always described the contest between attackers’ exploits and scams and the tools built to thwart them as an arms race. But with AI as advanced as GPT-4 in the picture, the arms race has gone nuclear. Malicious actors can use artificial intelligence to find vulnerabilities and entry points, and to generate phishing messages that draw on public company emails, LinkedIn profiles and organizational charts, rendering them nearly indistinguishable from real emails or text messages.
On the other hand, cybersecurity companies looking to bolster their defensive prowess can use AI to identify patterns and anomalies in system access records, to generate test code, or as a natural language interface that lets analysts gather information quickly without having to write a program.
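To make the first of those defensive use cases concrete, here is a minimal, hypothetical sketch of training an off-the-shelf anomaly detector on baseline login activity so that unusual access patterns stand out. The feature set, sample values and contamination rate are invented for illustration and are not drawn from any real product.

```python
# A minimal sketch of AI-assisted anomaly detection over access logs.
# All feature names and values below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour of day, failed attempts in the past hour, MB downloaded in session]
baseline_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 1, 11], [10, 0, 14], [12, 0, 16],
])

# Train on normal business-hours activity; `contamination` is the
# assumed fraction of outliers in the training data.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_logins)

# Score new events: a 3 a.m. login with repeated failures and a bulk
# download should stand out sharply from the daytime baseline.
new_events = np.array([
    [10, 0, 13],  # looks like ordinary daytime activity
    [3, 7, 900],  # off-hours login, many failures, large download
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()}: {status}")
```

A production deployment would train on far larger volumes of real access telemetry and route flagged events to analysts for triage, but the core pattern, learning a baseline and surfacing deviations, is the same.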
What’s important to remember, though, is that both sides are developing their arsenal of AI-based tools as fast as possible — and pausing that development would only sideline the good guys.
The need for speed
That isn’t to say we should let private companies develop AI as a fully unregulated technology. When genetic engineering became a reality in the healthcare industry, the federal government regulated it within America to enable more effective medicine, while recognizing that other countries and independent adversaries might use it unethically or to cause harm (creating viruses, for example).
I believe we can do the same for AI: recognize that we have to create protections and standards for ethical use while grasping that our enemies will not follow those regulations. To do that, our government and technology CEOs need to operate swiftly. We have to move at the pace of AI’s current development, or in other words, the speed of data.
Dan Schiappa is chief product officer at Arctic Wolf.