Key Takeaways
- President Joe Biden issued an executive order on Monday with guidelines for artificial intelligence (AI) regulation that could affect the investments of tech giants like Google, Amazon, and Microsoft.
- The order sets standards meant to protect security, safety, privacy, and equity while promoting innovation and American competitiveness.
- The Biden administration has been focused on addressing the risks of AI, including securing voluntary disclosure and safety commitments from 15 leading tech companies.
President Joe Biden issued an executive order Monday that establishes artificial intelligence (AI) guidelines meant to mitigate the risks of the emerging technology.
The wide-reaching executive order is intended to protect safety, national security, consumer privacy, and equity while promoting American competitiveness.
The order’s AI safety and security standards require that all companies “developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” perform safety testing and report the results to the federal government.
These requirements would likely impact companies investing in the development of AI, such as Google parent Alphabet (GOOGL), Amazon (AMZN), and Microsoft (MSFT).
In addition, the federal Departments of Homeland Security and Energy are directed to address threats that AI could pose to critical infrastructure, in accordance with the Defense Production Act.
In the executive order, President Biden called on “Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids,” from the potential dangers of AI, and prioritized federal support for “privacy-preserving techniques.”
The order builds on earlier Biden administration efforts to advance equity in AI development, including the “Blueprint for an AI Bill of Rights,” which the White House released in October 2022.
The Biden administration previously announced that 15 companies leading AI development agreed to voluntary disclosure, safety, and security requirements for AI tools and services. These companies include Google, Amazon, Microsoft, Meta Platforms (META), ChatGPT parent company OpenAI, and Nvidia (NVDA), among others.