
Biden warns tech execs who sell A.I. products that they must ‘address the risks to society, the economy, and national security’ – Fortune



U.S. President Joe Biden holds a meeting with his science and technology advisors at the White House. (Kevin Dietsch/Getty Images)

President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.

Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.

“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.

Artificial intelligence burst to the forefront of the national and global conversation in recent months after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools while raising ethical and societal concerns about technology that can generate convincing prose or imagery that looks like the work of humans.

While tech companies should always be responsible for the safety of their products, Biden’s reminder reflects something new — the emergence of easy-to-use AI tools that can generate manipulative content and realistic-looking synthetic media known as deepfakes, said Rebecca Finlay, CEO of the industry-backed Partnership on AI.

The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.


Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.

By contrast, “the U.S. has had a more laissez-faire approach to the commercial development of AI,” said Russell Wald, managing director of policy and society at the Stanford Institute for Human-Centered Artificial Intelligence.

Biden’s remarks Tuesday aren’t likely to change that, but the president “is setting the stage for a national dialogue on the topic by elevating attention to AI, which is desperately needed,” Wald said.

The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.

Biden’s council, the President’s Council of Advisors on Science and Technology (PCAST), is composed of science, engineering, technology and medical experts and is co-chaired by Arati Prabhakar, the Cabinet-ranked director of the White House Office of Science and Technology Policy.

Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”


