
Biden administration plans to monitor open-weight models




The Biden administration called for the monitoring of open-weight models as it looks to collect more information for future regulations. However, the U.S. government did not specify how it plans to regulate such models.

In a report, the National Telecommunications and Information Administration (NTIA) said it is important to examine the dangers of publicly available AI models capable of disrupting current systems in order to understand how to prevent disasters.

However, the NTIA admitted the U.S. “does not currently have the capacity to monitor and effectively respond to many of the risks arising from foundation models.” Knowing this, the NTIA suggested three main areas of focus: collecting evidence on model capabilities to monitor specific risks, evaluating and comparing risk indicators, and adopting policies that target those risks.

The NTIA defines open-weight models as foundation models whose weights, or parameters, are publicly released so that users can download them. These differ from open-source models, which are released under an open license and can be replicated, despite what AI model developers may want you to think.

“The consideration of marginal risk is useful to avoid targeting dual-use foundation models with widely available weights with restrictions that are unduly stricter than alternative systems that pose a similar balance of benefits and risks,” the NTIA said. 

The agency further added that it understands both open and closed models have risks that need managing, but open models “may pose unique opportunities and challenges to reduce risks.” 


That the Biden administration is looking into the risks posed by open models could point to a regulatory approach similar to the one the European Union took in outlining its AI Act.

The EU’s AI Act, formally adopted by its parliament in March, regulates AI models based on how risky their use cases are, rather than regulating the models themselves. For example, the EU set hefty fines for companies that use AI for facial recognition. The EU had initially considered regulating the models directly, so the U.S., by carefully weighing the potential dangers of public AI models, could follow in the EU’s footsteps.

Kevin Bankston, senior advisor on AI governance at the Center for Democracy and Technology, applauded the NTIA for taking its time in deciding how to police AI models. “The NTIA correctly concluded that there is not yet enough evidence of novel risks from open foundation models to warrant new restrictions on their distribution,” Bankston said in an email.

Model developers still have to wait

The Biden administration has proactively created guidelines around AI use with the AI Executive Order, but the order does not yet carry regulatory force. Some lawmakers and states have proposed potential policies. In one example, state senators in California, home to many AI developers including OpenAI, introduced the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which many believe could stifle AI development by punishing smaller companies working with foundation models.

Developers of AI models don’t have much to worry about just yet because the NTIA is still very much on a fact-finding mission. 


Assaf Melochna, founder of AI company Aquant, said in an email to VentureBeat that the NTIA’s observations don’t change much for model developers.

“Developers can still release their model weights at their own discretion, but they will be under more scrutiny,” Melochna said. “The sector changes every day, so federal agencies need to stay flexible and adapt based on what they find.” 


