security

Leading Tech Firms Agree to White House's AI Safeguards – WilmerHale


On Friday, July 21, 2023, the White House announced that seven US technology companies at the forefront of generative artificial intelligence (AI) agreed to eight voluntary commitments to “promote the safe, secure, and transparent development use of AI technology” (the “White House commitments” or the “commitment(s)”). 

“We must be clear eyed and vigilant about the threats emerging technologies can pose,” President Biden said when announcing the commitments, adding that the seven companies—Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI—have a “fundamental obligation” to ensure their products are safe. 

The White House commitments build on prior Administration initiatives to promote “responsible” AI innovation and likely herald further action by the Executive Branch to monitor and regulate the AI industry.  White House Chief of Staff Jeff Zients said on Friday that the Administration will push Congress for more authority to have “capacity,” “experts,” and “regulatory authority to hold the private sector accountable—and to hardwire these actions so they’re enduring.”  He drew parallels to government’s failure to anticipate the downsides of social media, as President Biden had in his announcement, saying “[o]ne of the lessons learned is that we’ve got to move fast—we cannot chase this technology.”

The White House Commitments Focus on Principles of “Safety,” “Security,” and “Trust”

The White House outlines three principles that must be “fundamental to the future of AI” and underlay the voluntary commitments: 

  • Safety: Companies have a duty to ensure their products are safe before introducing them for broad use, including by subjecting them to external testing;
  • Security: Companies have a duty to build systems that are secure from cyberattacks and insider threats; and
  • Trust: Companies have “a duty to do right by the public and earn the people’s trust,” by making it easy to identify AI-manipulated media, strengthening protections for privacy and children, and ensuring AI models do not promote discrimination. 

The below summarizes each White House commitment

Safety

1) Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.

Red-teaming is a term of art that refers to testing for potential security attacks or misuse by simulating the same in order to develop prevention strategies, a practice in which many AI companies already engage.  A White House fact-sheet that accompanied the release of the commitments said that “independent experts” will carry out the testing in part and singled out that such testing will guard against “the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.”  This commitment is consistent with prior White House statements; in May 2023, the President urged in a meeting with AI executives that companies had a responsibility to ensure their products are validated for safety and security before they are deployed.  The commitment does not specify a common series of tests or benchmarks against which the AI systems will be tested.

Readers Also Like:  Skill gap plagues cyber security industry as jobs go unfilled - TechCircle

2) Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards

Companies agree to share standards and best practices for AI safety, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, issued in January 2023.  “In this work, companies will engage closely with governments, including the U.S. government, civil society, and academia, as appropriate,” the commitment states.

Security

3) Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights

A “model weight” refers to the mathematical instructions that allow AI models to function.  Model weights constitute valuable information that adversaries—whether rival companies or nation-states—may try to pilfer from AI firms.  Hence, under this commitment, companies “will treat unreleased AI model weights for models in scope as core intellectual property for their business, especially with regards to cybersecurity and insider threat risks.”  

4) Incentivize third-party discovery and reporting of issues and vulnerabilities

Because “AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming,” the companies commit to establishing for relevant AI systems, “bounty systems, contests, or prizes” to incentivize third party disclosure of weaknesses and unsafe behaviors.  The foregoing may include the companies’ existing bug bounty programs, which in the traditional cybersecurity context have proven valuable for detecting and remediating cybersecurity vulnerabilities.  

Trust

5) Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content 

The rapid proliferation of AI-produced content, whether text, audio, imagery, or video (often called “deepfakes”), threatens to undermine society’s shared sense of reality.  Accordingly, under this commitment, companies “recognize that it is important for people to be able to understand when audio or visual content is AI-generated” and “agree to develop robust mechanisms, including provenance and/or watermarking systems for audio or visual content created by any of their publicly available systems within scope introduced after the watermarking system is developed.”  The commitment does not set a precise standard for this watermarking, saying only that companies commit to work with standard-setting bodies, as appropriate, to develop an appropriate framework. The commitment also excepts content that is easily recognizable as AI-created.

Readers Also Like:  Fitzgerald delivers presentation on information technology security ... - Boise State University

6) Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias

Companies acknowledge that users should understand the capabilities and limitations of the AI systems they use and with which they interact.  These companies commit to publishing reports on AI models they release that address the models’ limitations, effects on societal risks like fairness and bias, and the results of adversarial testing.  The commitment does not stipulate hoe often the companies will need to release their reports or how detailed they will need to be.

7) Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy

 Under this commitment, companies recognize the importance of avoiding harmful biases from being propagated by, and discrimination enacted by, AI systems.  Therefore, the companies commit to empowering trust and safety teams, advancing AI safety research and privacy, protecting children, and managing AI risks.

8) Develop and deploy frontier AI systems to help address society’s greatest challenges

Pursuant to this commitment, companies agree to support the development of AI systems that meet “society’s greatest challenges,” including climate change, cancer, and cyber dangers.  They also agree to support initiatives that foster education and training of students and workers and to raise public awareness of the nature, limitations, and likely impact of AI technology. 

Context 

These commitments are part of a broader efforts by the Biden Administration to push for safe and responsible AI development that minimizes AI-related harms.  These other efforts include:

  • An announcement in June 2023 of a new NIST Public Working Group on AI;
    A convening in May 2023 of Google, Anthropic, Microsoft, and Open AI coupled with initiatives to promote “responsible AI innovation;
  • The Administration’s Blueprint for an AI Bill of Rights to safeguard Americans’ rights and safety;
  • An Executive Order signed by the President in February 2023 to remove bias from the design of new technologies and protect the public from algorithm-enabled discrimination;
  • An investment of $140 million to establish seven new National AI Research Institutes; 
  • A National AI R&D Strategic Plan to advance responsible AI; and
  • Draft policy guidance expected this summer from the Office of Management and Budget for federal departments and agencies to follow to ensure their development, procurement, and use of AI systems centers on safeguarding individual rights and safety.
Readers Also Like:  Data breach at healthcare tech firm impacts 4.5 million patients - BleepingComputer

Takeaways 

While the White House Commitments are often broadly worded and lack any enforcement mechanism, they nonetheless constitute the most serious step yet by the Administration to establish the beginnings of a regulatory structure around AI.

The Administration expects further efforts to follow.  The White House openly advocates for legislation to provide it with greater regulatory powers, because “[r]ealizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement.”  Congress has held a number of hearings in just the past two months on AI, touching on issues related to AI and intellectual property, human rights, oversight, and the principles for regulating AI.  Senate Majority Leader Chuck Schumer (D-NY) is urging swift action to codify AI regulations, and the White House says it “will work with allies and partners on a strong international code of conduct to govern the development and use of AI worldwide.”



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.