Sens. Moran, Warner Introduce Legislation to Establish AI … – Senator Jerry Moran


WASHINGTON – U.S. Senators Jerry Moran (R-Kan.) and Mark Warner (D-Va.) today introduced legislation to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. U.S. Representative Ted W. Lieu (D-Los Angeles County) plans to introduce companion legislation in the U.S. House of Representatives.

Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released earlier this year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use this framework to manage their use of AI systems.

The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.

“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”

“As a long-standing champion and early adopter of the NIST AI Risk Management Framework, Workday welcomes today’s introduction of the Federal AI Risk Management Framework Act,” said Chandler C. Morse, Vice President of Public Policy, Workday. “This bipartisan proposal would advance responsible AI by directing both federal agencies and companies selling AI in the federal marketplace to adopt the NIST Framework. Leveraging the buying power of the federal government will also send an important message to the private sector and go a long way towards building trust in AI. We congratulate Senators Moran and Warner for their leadership and encourage Congress to act in support of the bill’s adoption.”

“Implementing a widely recognized risk management framework by the U.S. Government can harness the power of AI and advance this technology safely,” said Fred Humphries, Corporate Vice President, U.S. Government Affairs, Microsoft. “We look forward to working with Senators Moran and Warner as they advance this framework.”

“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”

“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”

“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government.” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”

“Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology’s development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively.”

“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition’s commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States’ leadership in the responsible use and development of artificial intelligence on the global stage.”

A one-page explanation of the legislation can be found HERE.

# # #
