Experts call for AI regulation during Senate hearing


As businesses, consumers and government agencies look for ways to take advantage of artificial intelligence tools, experts this week called on Congress to craft AI regulations addressing challenges facing the technology.  

AI concerns run the gamut from bias in algorithms that could affect decisions such as who is selected for housing and employment opportunities, to deepfakes, AI-generated images and audio that imitate real people’s appearances and voices.

Yet AI has also led to the development of lifesaving drugs, advanced manufacturing and self-driving cars. Indeed, the increased adoption of artificial intelligence has led to the rapid growth of advanced technology in “virtually every sector,” said Sen. Gary Peters (D-Mich.), chairman of the U.S. Senate Committee on Homeland Security and Governmental Affairs. Peters spoke during a committee hearing on AI risks and opportunities Wednesday.

The European Union is considering AI regulations, but policymakers in the U.S. have yet to advance AI legislation. The White House released a Blueprint for an AI Bill of Rights last year, a nonbinding guide to designing and deploying ethical AI systems. During the hearing, expert witnesses cited the need for AI regulation both to protect consumers from the technology’s risks and to address ever-growing national security concerns.

The U.S. Chamber of Commerce this week also called for AI regulation in its AI report. According to the report, policymakers “must debate and resolve the questions emanating from these opportunities and concerns to ensure that AI is used responsibly and ethically.”


“Artificial intelligence certainly holds great promise,” Peters said during the hearing. But, he added, it also “presents potential risks that could impact our safety, our privacy and our economic and national security.”

Establishing AI guardrails

One of the greatest challenges presented by AI is the lack of transparency and accountability in how algorithms reach their results, which means the technology needs safeguards to ensure it is used appropriately, Peters said.

“This lack of visibility into how AI systems make decisions creates challenges for building public trust in their use,” Peters said.

During the hearing, Alexandra Reeve Givens, president and CEO of the nonprofit Center for Democracy and Technology, said government use of AI could lead to denials of housing, rejections from job opportunities and wrongful allegations of fraud, as happened in Michigan.


Givens said the state’s unemployment insurance system wrongly classified more than 34,000 individuals’ unemployment applications as fraudulent from 2013 to 2015.

“When AI systems are used in these high-risk settings without responsible design and accountability, it can devastate people’s lives,” she said.

Givens said there needs to be a government-led effort to provide “robust, use-specific guidance” for navigating such issues, citing the White House’s Blueprint for an AI Bill of Rights as a good start. She called for increased transparency around the use, design and testing of AI systems, which she said also need ongoing testing in their deployed environments to ensure they work as intended.


Suresh Venkatasubramanian, professor of computer science and data science at Brown University and former assistant director of the White House Office of Science and Technology Policy, echoed the need for safeguards around AI systems. He said such precautions could include the following:

  • testing to make sure an AI system works as described;
  • ensuring algorithms don’t demonstrate discriminatory behavior;
  • limiting algorithmic use of personal data; and
  • requiring transparency and human supervision.

Venkatasubramanian, who helped develop the Blueprint for an AI Bill of Rights, told lawmakers to “enshrine these ideas in legislation, not just for government use of AI, but for private-sector use of AI.”

“AI needs guardrails so we can be protected from the worst failures while still benefiting from the progress AI offers,” he said.

AI national security, competitiveness implications

AI applications pose significant national security challenges, including the development of novel cyber weapons and advanced biological weapons, and the deployment of large-scale disinformation attacks, said Jason Matheny, president and CEO of the nonprofit research organization RAND Corp., during the hearing.

“By most measures, the United States is currently the global leader in AI,” Matheny said. “However, this may change as the People’s Republic of China seeks to become the world’s primary AI innovation center by 2030, an explicit goal of China’s AI national strategy.” China and Russia are also pursuing militarized AI technologies, which is intensifying the challenges, he said.

Indeed, China will further accelerate AI development to compete with the U.S. military, according to Koichiro Takagi, a visiting fellow with the Japan Chair at the Hudson Institute, a research organization. Takagi said researchers and Chinese People’s Liberation Army officers have stated that China’s new AI-focused military strategy will “make it possible to overtake the U.S. military.”


“Future military confrontations between the U.S. and China are likely to focus on artificial intelligence and data for machine learning,” he said. “From these perspectives, it is crucial for the United States to promote investments in artificial intelligence.”

In conjunction with the National Science Foundation, the White House released a detailed report last month on establishing a national AI research infrastructure that could help set the U.S. on a path to compete with China.

Matheny said national security organizations, including the Department of Homeland Security, should track developments in AI that could affect cyber defense; participate in crafting international standards for AI that prioritize safety, security and privacy; and create a regulatory framework for AI “informed by an evaluation of risks and benefits of AI to U.S. national security, civil liberties and competitiveness.”

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.


