Executives, researchers and engineers working on artificial intelligence at big tech companies and startups alike face a growing threat from criminal and nation-state hackers looking to pilfer intellectual property or the data underlying powerful chatbots, the FBI warned on Friday.
The growing risk coincides with the increasing availability of AI tools and services to the general public through products such as OpenAI’s ChatGPT and Google’s Bard, as well as the growing ease with which many companies can develop AI language models.
The warning comes two days after FBI Director Christopher Wray and Bryan Vorndran, assistant director of the agency’s cyber division, warned about the distinct AI-related threats posed by China, which political leaders in the U.S. and Europe have long warned seeks to dominate all aspects of AI research and implementation.
Officials on Friday warned of the likely increase in “targeting and collecting against US companies, universities and government research facilities for AI advancements,” including the transfer of “AI information including algorithms, data expertise and computing infrastructure through a multitude of technology acquisition methods,” both illegal and legal, such as through foreign commercial investments.
“In the field of AI it is clear that US talent is one of the most desirable aspects in the AI supply chain that our adversaries need,” an official said. “The US sets the gold standard globally for the quality of research development, and nation states are actively using lucrative as well as diverse means to recruit such talent and transfer cutting edge AI research and development to aid their military and civilian programs.”
The FBI has a “productive and ongoing relationship” with AI-related companies, the official said.
The Biden administration has sought to address aspects of the AI race with China by banning the export to China of certain high-end GPUs, along with some of the chipmaking equipment used to produce them, CyberScoop’s Elias Groll reported recently.
Anne Neuberger, a top White House official on cybersecurity and emerging technology, told Groll that the U.S. government has also given defensive cybersecurity briefings to leading AI firms regarding their data models, particularly as those firms move away from open-source models and seek to close off access.
Also on Friday’s call, FBI officials warned about the threat facing the public as cybercriminals begin to use AI to supercharge more traditional crimes such as fraud and extortion.
“Tools from AI are readily and easily applied to our traditional criminal schemes, whether ransom requests from family members, or abilities to generate synthetic content or identities online, attempts to bypass banking or other financial institutions’ security measures, attempts to defraud the elderly, that’s where we’ve seen the largest part of the activity,” a senior FBI official said in the call with reporters.
In June, for instance, an FBI alert warned of sexually related “synthetic content” — known more commonly as “deepfakes” — created for extortion purposes. “The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content,” the alert read. “The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes.”
Other threats include hackers using AI to develop and sharpen convincing phishing emails or malware, or even trying to refine recipes and instructions for explosives, the officials said.