
White House AI executive order adds safety requirements for next … – CIO Dive



President Joe Biden issued an executive order Monday aimed at improving the safety, security and trustworthiness of AI in the public and private sectors.

The EO requires developers of “any foundation model that poses a serious risk to national security, national economic security or national public health and safety” to share safety results with the federal government. It also orders the development of programs and working documents addressing AI cybersecurity, safety and risk.

“It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” Bruce Reed, White House deputy chief of staff, said in a statement.

The mandate to disclose safety results applies only if a model surpasses a specific computing threshold, set at 10 to the power of 26 floating-point operations, or FLOPs, of total training compute.

“My understanding is that [the threshold] will not catch any system currently on the market, so this is a primarily forward-looking action for the next generation of models,” a senior administration official said.

The threshold applies to all the safety provisions outlined in the executive order, including the provision tasking the National Institute of Standards and Technology with setting standards for extensive red teaming before models are publicly released.
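For a rough sense of scale, the threshold can be compared against a common back-of-the-envelope estimate of training compute: roughly six floating-point operations per model parameter per training token. The sketch below is illustrative only; the 6 × parameters × tokens heuristic and the example model sizes are assumptions for demonstration, not anything specified in the executive order, which does not prescribe how compute is to be measured.

```python
# Illustrative sketch: comparing an estimated training-compute budget against
# the executive order's reporting threshold of 1e26 floating-point operations.
# The 6 * parameters * tokens heuristic is a common rough estimate and an
# assumption here; the order itself does not define an estimation method.

EO_THRESHOLD_FLOPS = 1e26  # total training operations named in the order


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_tokens


def exceeds_reporting_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training run would cross the 1e26-FLOP threshold."""
    return estimated_training_flops(num_parameters, num_tokens) >= EO_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical model sizes, chosen only to show the comparison.
    for params, tokens in [(70e9, 2e12), (1.8e12, 10e12)]:
        flops = estimated_training_flops(params, tokens)
        print(f"{params:.1e} params, {tokens:.1e} tokens -> {flops:.2e} FLOPs, "
              f"above threshold: {exceeds_reporting_threshold(params, tokens)}")
```

Under this rough heuristic, today's publicly available models fall well short of the threshold, consistent with the administration official's statement that the rule is forward-looking.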

The White House did not consider restrictions, such as removal from marketplaces, for systems that were already public, according to a senior administration official.

Technology leaders and their business partners in legal, privacy and risk departments have closely tracked the evolving regulatory landscape. The actions laid out in the document set guardrails around the technology’s risks, but stop short of imposing consequences on businesses that fail to comply, something analysts and industry watchers have called for.

“For this executive order to have teeth, requirements must be clear, and actions must be mandated when it comes to ensuring safe and compliant AI practices,” Alla Valente, Forrester senior analyst, said via email Friday. “Simply put, we don’t need more ‘voluntary’ frameworks for regulating AI — we need clear direction and mandated requirements.”

While the executive order says the administration will encourage the Federal Trade Commission, Department of Justice and other federal agencies to exercise their authority, the White House cannot direct the FTC or DOJ to carry out specific actions. 

“We are going, we think, as far as is appropriate in the executive order … but we are not giving them very detailed step-by-step directions for how to carry out their enforcement missions for entirely appropriate legal reasons,” a senior administration official said. 

The most immediate deliverable to follow the executive order will likely be the completion of the Office of Management and Budget memorandum on AI governance, according to the senior administration official. 

The White House plans to have recurring principal-level meetings with agency heads, chaired by Reed, to ensure they are following the timeline. 

“I would push back on any notion that we are behind anyone,” a senior administration official said in reference to the speed of regulatory action in the U.S. compared to other nations. 

Vice President Kamala Harris will attend an AI summit in the United Kingdom Tuesday and deliver a speech on the administration’s vision for the future of AI. 


