Just Because AI Can Doesn’t Mean AI Should, Say Letter Signatories
A slew of top tech executives and artificial intelligence researchers called for a pause of at least six months on the development of advanced artificial intelligence systems, saying the time has come to establish safety protocols before resuming work.
Coordinated by the Future of Life Institute, more than 1,000 signatories urged AI labs to adopt guardrails to ensure the technology remains a positive development. Letter participants include Turing Award recipient Yoshua Bengio, University of California-Berkeley computer scientist Stuart Russell, Conjecture CEO Connor Leahy and Stability AI CEO Emad Mostaque. Other signatories include Twitter CEO Elon Musk and failed U.S. presidential and New York mayoral candidate Andrew Yang.
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter states.
The moratorium it calls for would apply to systems more powerful than GPT-4, the latest natural language model developed by ChatGPT creator OpenAI. GPT-4 can analyze images for their content and is capable of longer conversations and long-form content creation. OpenAI says it’s also safer than ChatGPT – less likely to respond to requests for “disallowed content” and less prone to making factual mistakes.
Tech giants are already racing to be the quickest to incorporate ChatGPT-like capabilities into their products. Microsoft incorporated OpenAI language models into its Bing search engine and into a new product announced Tuesday dubbed Security Copilot. Adobe this month introduced an AI model for images, and Salesforce debuted an artificial intelligence tool for sales and marketing professionals.
Security researchers and cybercriminals were quick to probe the limits of OpenAI natural language models, quickly finding that they could enlist ChatGPT to generate malware scripts and the GPT-3 model to devise phishing email text.
Microsoft-backed OpenAI on March 23 unveiled plug-ins for ChatGPT, which will allow it to access real-time information from the internet via third-party applications. Plug-ins are currently being tested with a limited group of users and will roll out gradually; OpenAI will maintain a waitlist.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter says. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
In an interview with The Wall Street Journal, OpenAI CEO Sam Altman said development on GPT-5 has not yet begun.
“In some sense, this is preaching to the choir,” Altman said of the letter signatories. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”