Microsoft, Google, Meta, Amazon Among Companies Making ‘Voluntary Commitments’
With both excitement and fear swirling around the opportunities and risks posed by emerging AI, seven technology companies – including Microsoft, Amazon, Google and Meta – have promised the White House they will ensure the development of AI products that are safe, secure and trustworthy.
The Biden administration on Friday said it had gotten “voluntary commitments” from the companies, which also include OpenAI, Anthropic and Inflection, to ensure that their AI products will be built upon three fundamental principles critical to the future of AI – safety, security and transparency.
In ensuring that their products are safe, the White House said, the companies have committed to performing internal and external security testing of their AI systems before they are publicly released. This includes testing to guard against “some of the most significant sources of AI risks,” including threats involving cybersecurity, biosecurity and national security, the White House said.
In committing that their products are secure, the companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, the White House said.
“These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.”
The big tech firms also pledged to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
“Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly,” the White House said.
Will the Government Regulate AI?
Some lawmakers on Capitol Hill applauded the voluntary commitments to security but said the administration also needs to impose “some degree of regulation” on AI technologies.
“While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse,” said Sen. Mark Warner, D-Va., chairman of the Senate Select Committee on Intelligence, in a statement Friday.
Senate Majority Leader Chuck Schumer, D-N.Y., in June introduced a bill for the government to adopt a framework for regulating AI, which he described as a new “regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.”
In terms of ensuring the trustworthiness of their AI technologies, the companies have committed to developing “robust technical mechanisms,” such as watermarking systems, so that users know when content is generated by AI. The White House said this will help creativity flourish while fighting fraud and deception.
Questions About AI Trustworthiness
Brad Smith, Microsoft vice chair and president, said in a blog post that his company has committed to broad-scale implementation of the NIST AI Risk Management Framework and is adopting AI cybersecurity practices. “We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society,” Smith said.
In addition to the companies’ commitments, the White House said the Biden administration is also developing an executive order and will pursue bipartisan legislation “to help America lead the way in responsible innovation.”
The White House also said that the Biden administration will work with other countries to establish an international framework to govern the development and use of AI. So far the administration has obtained voluntary commitments for that effort from Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the United Kingdom, the White House said.
Microsoft’s Smith said the industry as a whole needs to be committed to ethical development of AI technologies.
“Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security and trustworthiness, it will also allow us to better unlock AI’s positive impact for communities across the U.S. and around the world,” Smith said.