White House gets AI firms to agree to voluntary safeguards, but not new regulations


Today, the Biden-Harris Administration announced that it has secured voluntary commitments from seven leading AI companies to manage the short- and long-term risks of AI models. Representatives from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft are set to sign the commitments at the White House this afternoon.

The commitments include ensuring products are safe before they are introduced to the public, with internal and external security testing of AI systems before release, as well as information-sharing on managing AI risks.

In addition, the companies commit to investing in cybersecurity and safeguards to “protect proprietary and unreleased model weights,” and to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.

Finally, the commitments also include developing systems such as watermarking to ensure users know when content is AI-generated; publicly reporting AI system capabilities, limitations and appropriate/inappropriate use; and prioritizing research on societal AI risks, including bias and protecting privacy.

Notably, the companies also commit to “develop and deploy advanced AI systems to help address society’s greatest challenges,” from cancer prevention to mitigating climate change.

Mustafa Suleyman, CEO and cofounder of Inflection AI, which recently raised an eye-popping $1.3 billion in funding, said on Twitter that the announcement is a “small but positive first step,” adding that making truly safe and trustworthy AI “is still only in its earliest phase … we see this announcement as simply a springboard and catalyst for doing more.”

Meanwhile, OpenAI published a blog post in response to the voluntary safeguards. In a tweet, the company called them “an important step in advancing meaningful and effective AI governance around the world.”

AI commitments are not enforceable

These voluntary commitments, of course, are not enforceable and do not constitute any new regulation.

Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, called the voluntary industry commitments “an important first step,” highlighting the commitment to thorough testing before releasing new AI models, “rather than assuming that it’s acceptable to wait for safety issues to arise ‘in the wild,’ meaning once the models are available to the public.”

Still, since the commitments are unenforceable, he added that “it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections and stepped-up research on the wide range of risks posed by generative AI.”

For its part, the White House did call today’s announcement “part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.” It said the Administration is “currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.”

Voluntary commitments precede Senate policy efforts this fall

The industry commitments announced today come ahead of significant Senate efforts this fall to tackle complex issues of AI policy and move toward consensus on legislation.

According to Senate Majority Leader Chuck Schumer (D-NY), U.S. senators will be going back to school — with a crash course in AI that will include at least nine forums with top experts on copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, and transparency and explainability, as well as elections and democracy.

The series of AI “Insight Forums,” which will take place in September and October, will help “lay down the foundation for AI policy,” Schumer said this week. He announced the forums, led by a bipartisan group of four senators, last month, along with his SAFE Innovation Framework for AI Policy.

Former White House advisor says voluntary efforts ‘have a place’

Suresh Venkatasubramanian, a White House AI policy advisor to the Biden Administration from 2021 to 2022 (where he helped develop the Blueprint for an AI Bill of Rights) and professor of computer science at Brown University, said on Twitter that these kinds of voluntary efforts have a place amidst legislation, executive orders and regulations. “It helps show that adding guardrails in the development of public-facing systems isn’t the end of the world or even the end of innovation. Even voluntary efforts help organizations understand how they need to organize structurally to incorporate AI governance.”

He added that a possible upcoming executive order is “intriguing,” calling it “the most concrete unilateral power the [White House has].”
