What They Are Saying: Tech Companies Applaud Federal Artificial … – Senator Jerry Moran


WASHINGTON – U.S. Senators Jerry Moran (R-Kan.) and Mark Warner (D-Va.) recently introduced the Federal Artificial Intelligence Risk Management Act to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. This legislation received praise and support from businesses, universities and foundations. Statements in support of the Federal Artificial Intelligence Risk Management Act can be found below.

“As a long-standing champion and early adopter of the NIST AI Risk Management Framework, Workday welcomes today’s introduction of the Federal AI Risk Management Framework Act,” said Chandler C. Morse, Vice President of Public Policy, Workday. “This bipartisan proposal would advance responsible AI by directing both federal agencies and companies selling AI in the federal marketplace to adopt the NIST Framework. Leveraging the buying power of the federal government will also send an important message to the private sector and go a long way towards building trust in AI. We congratulate Senators Moran and Warner for their leadership and encourage Congress to act in support of the bill’s adoption.”

“Implementing a widely recognized risk management framework by the U.S. Government can harness the power of AI and advance this technology safely,” said Fred Humphries, Corporate Vice President, U.S. Government Affairs, Microsoft. “We look forward to working with Senators Moran and Warner as they advance this framework.”

“IBM thanks Senators Warner and Moran for their bipartisan leadership on the Federal Artificial Intelligence Risk Management Act of 2023 – a critical step in ensuring that AI used by U.S. government agencies is trusted and responsible,” said Christopher Padilla, Vice President, Government and Regulatory Affairs, IBM. “By leveraging the proven NIST Risk Management Framework, this bill provides consistent standards and a valuable roadmap for agencies to follow as they put AI to work for the public good.”

“The NIST Artificial Intelligence Risk Management Framework (RMF) is a comprehensive tool available to help the government and private sector organizations identify and counter AI-related risks,” said BSA | The Software Alliance. “The Federal Artificial Intelligence Risk Management Act demonstrates the growing bipartisan consensus in favor of further cementing the NIST AI RMF as the best available tool for managing AI risks. The legislation will ensure that private sector organizations are required to use established best practices to reduce risk when managing AI systems within the federal government.”

“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”

“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”

“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”

“Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology’s development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively.”

“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition’s commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States’ leadership in the responsible use and development of artificial intelligence on the global stage.”

A one-page explanation of the legislation can be found HERE.

# # #
