
ChatGPT creators and others plead to reduce risk of global extinction from their tech


Hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that the evolution of artificial intelligence (AI) could lead to an extinction event, and saying that controlling the technology should be a top global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the San Francisco-based Center for AI Safety.

The brief statement in the letter reads almost like a mea culpa for the technology about which its creators are now joining together to warn the world.

Ironically, the most prominent signatories at the top of the letter included Sam Altman, CEO of OpenAI, the company that created the wildly popular generative AI chatbot ChatGPT, as well as Kevin Scott, CTO of Microsoft, OpenAI’s biggest investor. A number of OpenAI founders and executives also signed, joined by executives, engineers, and scientists from Google’s AI research lab, DeepMind.

Geoffrey Hinton, widely considered a godfather of AI for his contributions to the technology over the past 40 years or so, also signed today’s letter. During a Q&A at MIT earlier this month, Hinton went so far as to say humans are nothing more than a passing phase in the development of AI. He also said it was perfectly reasonable back in the ’70s and ’80s to research how to build artificial neural networks, but that today’s technology is as if genetic engineers had decided to improve grizzly bears, allowing them to speak English and raising their “IQ to 210.”


Hinton, however, said he felt no regrets over being instrumental in creating AI. “It wasn’t really foreseeable — this stage of it wasn’t foreseeable. Until very recently, I thought this existential crisis was a long way off. So, I don’t really have any regrets over what I did,” Hinton said.

Earlier this month, leaders of the Group of Seven (G7) nations called for the creation of technical standards to keep AI in check, saying the technology has outpaced oversight for safety and security. US Senate hearings earlier this month, which included testimony from OpenAI’s Altman, also illustrated many of the clear and present dangers emerging from AI’s evolution.

“The statement signed by the Center for AI Safety is indeed ominous and without precedent in the tech industry. When have you ever heard of tech entrepreneurs telling the public that the technology they are working on can wipe out the human race if left unchecked?” said Avivah Litan, a vice president and distinguished analyst at Gartner. “Yet they continue to work on it because of competitive pressures.”

Though less dire than extinction, Litan also pointed out that businesses face “short-term and imminent” risks from the use of AI. “They involve risks in misinformation and disinformation and the potential of cyberattacks or societal manipulations that scale much more quickly than what we saw in the past decade with social media and online commerce,” she said. “These short-term risks can easily spin out of control if left unchecked.”

The shorter-term risks posed by AI can be addressed and mitigated with guardrails and technical solutions. The longer-term existential risks can be addressed through international government cooperation and regulation, she noted. 


“Governments are moving very slowly, but technical innovation and solutions — where possible — are moving at lightning speed, as you would expect,” Litan said. “So, it’s anyone’s guess what lies ahead.”

Today’s letter follows a similar one released in March by the Future of Life Institute. That letter, signed by Apple co-founder Steve Wozniak, SpaceX CEO Elon Musk, and nearly 32,000 others, called for a six-month pause in the training of AI systems more powerful than GPT-4 to allow better controls to be put in place.

The March letter called for oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Dan Hendrycks, director of the Center for AI Safety, wrote in a follow-on tweet thread today that there are “many ways AI development could go wrong, just as pandemics can come from mismanagement, poor public health systems, wildlife, etc. Consider sharing your initial thoughts on AI risk with a tweet thread or post to help start the conversation and so that we can collectively explore these risk sources.”

Hendrycks also quoted Robert Oppenheimer, theoretical physicist and father of the atomic bomb: “We knew the world would not be the same.” Hendrycks, however, didn’t mention that the atomic bomb was created to stop the tyranny the world was facing from dominance by the Axis powers of World War II.


The Center for AI Safety is a San Francisco-based nonprofit research organization whose stated mission is “to ensure the safe development and deployment of AI.”

“We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely,” the group’s web page states.





