
Top Tech Firms Make Pledge to White House Over Development of AI – U.S. News & World Report


Leaders of the country’s biggest tech firms are set to join President Joe Biden at the White House on Friday to formalize a voluntary commitment that they will prioritize safety, security and transparency as their teams develop artificial intelligence software – a controversial, rapidly growing industry that the federal government has yet to regulate.

“To make the most of AI’s potential the Biden-Harris administration is holding this industry to the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” a White House official told reporters ahead of the event. “The companies developing these emerging technologies have an obligation to behave responsibly and ensure their products are safe.”

The companies, including Microsoft, Google, Meta, Amazon, OpenAI, Inflection and Anthropic, agreed to ensure their products are safe before introducing them to the public by testing the safety of their AI systems, subjecting them to external testing, assessing potential biosecurity and cybersecurity risks and making the results of those assessments public.

They also volunteered to safeguard their products against cyber and insider threats and to share best practices to prevent misuse, reduce risks and protect national security. In addition, they promised to make it easy to tell whether audio and digital content is in its original form or has been altered or generated by AI, as well as to prevent bias and discrimination in AI-generated content and to shield children from potential harm.


The announcement comes with the 2024 presidential election fast approaching and zero federal regulations in place to combat deceptive AI-generated political content. Consumer advocacy groups and campaigns alike are holding their breath as they wait for what many describe as a nightmare scenario, in which it becomes even more difficult for Americans to identify misinformation.

Notably, the commitments from the companies are voluntary, meaning there’s little accountability baked into the announcement. White House officials underscored that the companies plan to use external verification components, including red-teaming of their software, in which a group poses as an adversary and attempts to hack into or take control of the software – though many of them already incorporate such safety measures.

“This is pushing the envelope on what companies are doing and raising the standards for safety, security and trust of AI,” one official told reporters.

More than driving any meaningful change in the way AI is being developed, the voluntary agreements represent a placeholder of sorts until the administration is ready to unveil an executive order that it’s been crafting to regulate the industry. White House officials said there is no timeline for that executive action, but that it’s a top priority and will cut across several departments.

Senate Majority Leader Chuck Schumer of New York has been hosting mandatory learning sessions for Democrats and Republicans about the way AI works – its promise and pitfalls – and how Congress might seek to regulate it.

“Legislation is going to be critical to establish a legal and regulatory regime to make sure these technologies are safe,” the White House official said. “These commitments do not change the need for legislation and further executive action by the president.”

The White House is also working with more than a dozen other countries to develop best practices for use of AI around the world.


