
Experts: Feds Need Tech Talent to Get AI Regs Right – MeriTalk


As AI technologies continue to advance at a rapid pace, AI experts told members of Congress on May 16 that having the right technical talent in place within the Federal government is crucial to ensuring the United States is prepared to address the potential risks and harms that AI systems can present.

At a Senate Homeland Security and Governmental Affairs Committee hearing, AI experts offered ways that Federal agencies can better leverage and govern the responsible use of AI – with a digital-ready workforce topping that list.

“Government agencies are of course critical for effective regulation of the risks of AI and striking the right balance between innovation and safeguards requires expertise in government,” said Daniel E. Ho, a professor at Stanford Law School and associate director of the Stanford Institute for Human-Centered Artificial Intelligence.

“Getting technical talent into the Federal workforce is the single biggest obstacle for effective regulation,” he added. “Government cannot govern AI if it doesn’t understand AI.”

Ho explained that progress in this area is already being made, such as with recent legislation from HSGAC Chairman Gary Peters, D-Mich., to create an AI training program for Federal supervisors and management officials. However, Ho emphasized, “we still have a long way to go.”

“Congress should establish new pathways and trajectories for technical talent in government,” Ho recommended. “We need better models building on the U.S. Digital Service, public-private partnerships, and academic-agency partnerships to attract AI talent to public service, build cross-functional teams, and provide pathways for career advancement.”


Lynne E. Parker, associate vice chancellor at the University of Tennessee, Knoxville, and director of the AI Tennessee Initiative, explained that while AI can often increase individuals' productivity, there is still a dire need to bring technical talent into the Federal government.

“We desperately need to work on getting more expertise into government,” she told the committee. “One quick way of doing that, I think, is to leverage these programs like the Intergovernmental Personnel Act and the Presidential Innovation Fellows program to get people from industry and from academia into government.”

Parker, who was founding director of the White House’s National Artificial Intelligence Initiative Office (NAIIO) and Deputy U.S. Chief Technology Officer (CTO) before leaving Federal government service last year, went on to offer a number of other recommendations to Congress to promote the use of responsible AI.

She recommended that Congress direct each Federal agency to hire and resource a chief AI officer, direct the creation of an interagency Chief AI Officers Council, and direct the proposed council to “review the agency AI use case inventories and identify dozens of key agency processes that could be transformed with AI in a manner consistent with privacy, civil rights, and civil liberties.”

“The Federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills – the very skills needed to design, develop, deploy, and monitor AI systems,” added Taka Ariga, the chief data scientist at the Government Accountability Office (GAO).

“Ultimately, having a robust cadre of a digital-ready Federal workforce ensures humans can successfully remain in and never out of the AI loop,” he added.


Ariga said that his “number one priority” is disclosure in cases where AI has impacted discretionary decision-making. However, he echoed Ho’s recommendation of developing a digital-ready Federal workforce because, “fundamentally, it is the digital-ready workforce that will make such disclosure effective.”

“We have to have humans in charge to understand the limitations of these kinds of systems, who know when humans should override them in order to really work effectively and safely with these kinds of systems,” Ho said.
