
White House, tech CEOs meet over AI risks and responsibilities


FILE – In this Nov. 29, 2019, file photo, a metal head made of motor parts symbolizes artificial intelligence, or AI, at the Essen Motor Show for tuning and motorsports in Essen, Germany. (AP Photo/Martin Meissner, File)

The White House announced new steps, including $140 million in additional funding, as it continues laying the groundwork for taming a technological gorilla: artificial intelligence.

“AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges,” Vice President Kamala Harris said in a White House release. “At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.”


There are few corners of life AI hasn’t touched already or could touch in the coming years.

A senior administration official told reporters this week that the risks are “quite diverse, as you might imagine, for a technology that has so many different applications.”

There are concerns over security, both in the physical world and online. AI-powered systems, including autonomous vehicles and defense systems, could fail or conceivably be hacked.

There are worries AI could be used to infringe on civil rights or privacy.

AI-powered deepfake images and videos, along with written misinformation that can be generated and spread quickly, threaten to erode public trust.

And the administration official said there’s potential for “job displacement from automation now coming into fields that we previously thought were immune.”


“So, it’s a very broad set of risks that need to be grappled with,” said the official, who spoke on background with the reporters before Harris met with the leaders of companies at the forefront of AI innovation to discuss the risks and opportunities of this fast-evolving technology.

FILE – In this April 7, 2021 file photo, a Waymo minivan moves along a city street as an empty driver’s seat and a moving steering wheel drive passengers during an autonomous vehicle ride in Chandler, Ariz. (AP Photo/Ross D. Franklin)

The Biden administration previously released the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, intended to be guideposts “to promote responsible innovation.”

In that spirit, the White House announced new efforts Thursday.

The administration is investing $140 million to stand up seven new National AI Research Institutes, bringing the total to 25 such sites operating on roughly half a billion dollars in funding.

The White House also announced that guidance for government use of AI is in the works. The draft policy should be released this summer for public comment.

Finally, the White House says leading AI developers have committed to opening up their models to independent, public evaluation. Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI have pledged to take part in this community testing.

Tony Dahbura, co-director of the Johns Hopkins Institute for Assured Autonomy, said Friday that he was “fairly bullish” on these White House efforts.

He commended what he said was a balanced approach of trying to foster innovation while trying to mitigate the risks.


And he doesn’t see the risks going away anytime soon, he said.

AI is only going to be used in more places and on more challenging problems, he said.

“People shouldn’t be lulled into a false sense of security because of all these efforts,” Dahbura said. “I mean, they’re attacking the right problems, but these are really long-term problems, and there are inherent uncertainties and risks in AI.”


The CEOs of Google, Microsoft, OpenAI and Anthropic met with Harris and other administration officials Thursday for what the White House called a “frank and constructive discussion” around their responsibilities to ensure their AI-powered products are safe before they’re given to the public.

Microsoft and Google have been in an AI arms race since ChatGPT was released and caught everyone’s attention late last year, though the companies have been working on AI for years.

Google’s CEO has called AI “the most profound technology we are working on today.”

The speed at which AI is advancing has made some folks, including some technology experts, uncomfortable.

An open letter signed by Elon Musk and Apple co-founder Steve Wozniak called on labs to pause work on advanced AI systems and to develop shared safety protocols.

Biden administration officials discussed with the tech CEOs the importance of transparency and safety validation, according to the White House.

FILE – Microsoft CEO Satya Nadella speaks during the introduction of the integration of Microsoft Bing search engine and Edge browser with OpenAI on Tuesday, Feb. 7, 2023, in Redmond, Wash. (AP Photo/Stephen Brashear)


And administration officials stressed these systems must be secured from “malicious actors and attacks.”

Can AI systems effectively be secured?

“We really don’t know,” Dahbura said. “I mean, it’s heartening that the administration is including security as a concern. That’s wise. But it’s a work in progress. And as we know from cybersecurity itself over the past 25 years or so, it really is a moving target. And the attackers work as quickly and stealthily as the researchers trying to mitigate the attacks.”

Dahbura said it’s good that the administration is working with these companies, but he said AI will grow bigger than “a room with four people.”

And he thinks it’s time for Congress to step up with wider regulations and policies.

“I think this is a unique moment, and the window is pretty tight for them to really establish a sensible framework,” he said.

