Eight more tech companies join White House commitment to … – FedScoop


An additional eight companies on Tuesday announced their voluntary commitment to the White House to support safe, secure, and trustworthy development of artificial intelligence.

The companies — Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability — join an initial seven that convened at the White House in July to sign on to the set of voluntary commitments overseeing how the emerging technology is developed and used. Representatives from the cohort met with Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and other senior administration officials at the White House on Tuesday.

The first companies to accept the commitments were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

The Biden administration says these commitments are an “immediate step and an important bridge to government action,” according to a fact sheet, as the White House develops an upcoming executive order and lawmakers consider legislation focused on AI. The fact sheet acknowledges the in-the-works executive order, saying the Office of Management and Budget will “soon release draft policy guidance for federal agencies to ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.”

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI,” the White House said in the fact sheet. “As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to take decisive action to keep Americans safe and protect their rights.”


In total, the companies agreed to eight commitments, which range from opening their algorithms to security testing and sharing information about them across the industry prior to launch, to a variety of measures that put security, transparency, and responsibility at the forefront of their products.

“We applaud the Administration for making this a priority — open discussions between industry and policymakers like today are foundational to enacting safeguards without stopping AI development,” Akash Jain, president of Palantir U.S. government and an attendee at the meeting, said in a statement. “Today, Palantir, along with other leading AI companies, made a set of voluntary commitments to advance effective and meaningful AI governance, which is essential for open competition and maintaining US leadership in innovation and technology.”

These commitments complement the efforts of U.S. allies, including Japan’s leadership of the G-7 Hiroshima Process, the United Kingdom’s Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI.

Ahead of forthcoming policy or legislation, the Biden administration in the past year issued its foundational Blueprint for an AI Bill of Rights, which is meant to work in tandem with the AI Risk Management Framework published by the National Institute of Standards and Technology. However, some policy and tech experts say those leading frameworks are inherently contradictory and provide confusing guidance for tech companies working to develop innovative products and the necessary safeguards around them.


