The remarks came a day after a similar warning from tech leaders.
A top U.S. official for cybersecurity said Wednesday that humanity could be at risk of an “extinction event” if tech companies fail to self-regulate and work with the government to rein in the power of artificial intelligence.
The remarks came a day after hundreds of tech leaders and public figures backed a similar statement that compared the existential threat of AI to a pandemic or nuclear war.
Among the 350 signatories of the statement were Sam Altman, the chief executive of OpenAI, the company behind the popular conversation bot ChatGPT, and Demis Hassabis, the CEO of Google DeepMind, the tech giant’s AI division.
Responding to questions about the joint statement, Cybersecurity and Infrastructure Security Agency Director Jen Easterly urged the signatories to self-regulate and work with the government.
“I would ask these 350 people and the makers of AI — while we’re trying to put a regulatory framework in place — think about self-regulation, think about what you can do to slow this down so we don’t cause an extinction event for humanity,” Easterly said.
“If you actually think that these capabilities can lead to [the] extinction of humanity, well, let’s come together and do something about it,” Easterly added.
Industry leaders on Tuesday sounded a sobering alarm. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the one-sentence statement released by the San Francisco-based nonprofit Center for AI Safety.
The statement’s signatories also included a range of public figures, such as musician Grimes, environmental activist Bill McKibben and neuroscience author Sam Harris.
Altman, a top executive within the AI industry, said in Senate testimony roughly two weeks ago that he supports government regulation as a means of averting the harmful effects of AI.
“If this technology goes wrong, it can go quite wrong,” Altman said.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he added, suggesting the adoption of licenses or safety requirements necessary for the operation of AI models.
Like other AI-enabled chatbots, ChatGPT can immediately respond to prompts from users on a wide range of subjects, generating an essay on Shakespeare or a set of travel tips for a given destination.
Microsoft launched a version of its Bing search engine in March that offers responses powered by GPT-4, OpenAI’s latest model, which also underlies ChatGPT. Rival search company Google in February announced an AI model called Bard.
The rise of vast quantities of AI-generated content has raised fears over the potential spread of misinformation, hate speech and manipulative responses.
In her remarks Wednesday, Easterly described Chinese-backed hackers and artificial intelligence as “the defining challenges of our time.”
Easterly walked a familiar fine line between touting the possibilities of AI and warning against its harms.
“At the end of the day, these capabilities will do amazing things. They’ll make our lives easier and better,” she said. “They’ll make lives easier and better for our adversaries, who will flood the space with disinformation, who will be able to create cyberattacks and all kinds of weapons.”