Turning talk into action on AI safety
The alarm bells about AI began ringing last spring, when more than 1,000 tech leaders, including Elon Musk, signed an open letter warning that AI tools present “profound risks to society and humanity,” and urging a pause in companies’ development of the most powerful, advanced tech. “Should we let machines flood our information channels with propaganda and untruth?” the letter suggested we ask ourselves. “Should we risk loss of control of our civilization?”
A few months later, the White House secured voluntary commitments from major tech companies, including Amazon, Google, Meta, Microsoft and OpenAI, to double down on security efforts, publicly report any safety issues, and work to prevent AI-generated fraud and deception.
This week’s announcement takes a firmer step toward regulating the technology — displeasing anti-regulation groups such as NetChoice, which issued a statement calling the order “Biden’s AI red tape wishlist” that “will result in stifling new companies and competitors from entering the marketplace.”
But a spokesperson for the Federal Trade Commission, which is focused on preventing a handful of leading AI companies from controlling the marketplace, praised the executive order in a statement to AARP as “a major step forward,” adding, “We’re encouraged to see it recognize the importance of a whole-of-government approach to promoting competition in AI.”
In general, tech insiders and advocates sound cautiously optimistic about the White House announcement. Karen Gullo, an analyst at the Electronic Frontier Foundation, which advocates for freedom and justice in the digital world, says she’s glad to see proposals for discrimination protections and “strengthening privacy-preserving technologies and cryptographic tools.” But, she adds, the executive order is “full of ‘guidance’ and ‘best practices,’ so only time will tell how it’s implemented.”