Congress's week of AI


Artificial intelligence policy dominated Capitol Hill this week — even amid the looming threat of a government shutdown — with a high-profile closed-door roundtable of tech leaders held in the Senate and multiple hearings focused on AI risks and opportunities.

Billionaire tech tycoons like Elon Musk and Mark Zuckerberg descended on Capitol Hill, along with computer scientists, data researchers and executives from major software companies, for a wide array of meetings, hearings and summits all exploring artificial intelligence and its potential impact on government, industry and national security. 

The Senate hosted its first-ever “AI Insight Forum” on Wednesday, a rare, closed-door meeting with the nation’s leading tech executives to find common ground on safeguards that Congress can put in place to promote responsible AI research and development. The House held a variety of hearings focusing on how the government procures AI tools and technologies, and on what regulations could gain bipartisan support while maintaining U.S. competitiveness in the rapidly evolving field.

Civic hackers could even be found throughout the halls of the Capitol building for the fifth congressional hackathon, which assessed how emerging technology, AI and automation can help streamline government operations and enhance public-facing digital services across federal agencies.

While it remains unclear what set of safeguards Congress may establish following its week of AI, technologists said they were encouraged by the focus on developing crucial regulations amid other major controversies and contentious debates, including the looming budget crisis. 

“I think it is a very positive sign that Congress is taking an interest in learning more about this important technology,” Mike Daniels, senior vice president of the public sector for the software firm UiPath, told Nextgov/FCW. “The willingness to dive in and learn and collaborate with industry to provide appropriate guardrails while encouraging this engine of economic innovation is important for our society as a whole going forward.”

“Collaboration between the tech industry and the government is key as AI takes hold,” said Chris Wysopal, chief technology officer and founder of the software firm Veracode, which recently participated in a cybersecurity forum hosted by the Office of the National Cyber Director. “As the use of AI continues to grow and unfold, cooperation by both parties is key for instilling a secure space for innovation and safety to thrive.”

The importance of enhanced regulations and security measures around AI development was discussed in virtually every meeting between lawmakers and tech executives this week, with Musk calling for a “referee” to “ensure that companies take actions that are safe and in the interest of the general public,” and other leaders encouraging Congress to enact strict safeguards. 

Democrats, Republicans and technologists alike seemed to support implementing additional legislative measures to guard against the adverse effects of unregulated AI. However, where to start, and which regulations could garner widespread backing, remain uncertain.

“While wariness and caution are necessary, we cannot stagnate innovation to create perfect rules and regulations either,” Rob Lee, technical advisor to the FBI and chief curriculum director for the information security firm SANS, told Nextgov/FCW. “U.S. competitors will be gloves-off to gain the upper hand within the world order using AI.”

Industry leaders have expressed support for pro-innovation regulatory frameworks developed with input from the private sector, pointing to the AI Risk Management Framework published by the National Institute of Standards and Technology.

Rob Strayer, executive vice president of policy for the Information Technology Industry Council, testified Wednesday before the Senate Committee on Commerce, Science and Transportation that the framework “provides companies with a comprehensive way to think about risk management practices,” including with AI and emerging technologies. 

Policy experts told Nextgov/FCW that the collaborative method NIST leveraged to develop its framework is crucial for building support around new AI regulations within the private sector. 

“The process to develop the framework was open and transparent, and stakeholders across the ecosystem had the opportunity to provide feedback,” said Courtney Lang, ITIC vice president of policy, trust and technology. “This process ultimately yielded a comprehensive document that organizations can use to help them operationalize AI risk management.”

Lang added that upcoming conferences like the G7 meeting in Japan and the AI Safety Summit in the United Kingdom will offer more opportunities to collaborate on shared international commitments for AI development and deployment.

Earlier this year, thousands of software developers — as well as industry leaders like Musk and others — signed an open letter demanding a six-month pause in AI research and development to allow for the establishment of new safeguards. 

Officials have since pushed back on those calls, warning that foreign adversaries would continue to develop their own AI capabilities while the U.S. lost its competitive edge in the field.

“If we stop, guess who’s not going to stop: potential adversaries overseas,” Defense Department Chief Information Officer John Sherman said at a cyber conference earlier this year. “We’ve got to keep moving.” 

Industry experts are instead calling for mandates like impact assessments of high-risk AI systems, which can help provide real-time insights on emerging threats and vulnerabilities as organizations continue to develop AI tools. 

“If there is one truism in Washington, it’s that technology will always outpace public policy,” Bill Wright, head of global government affairs for the software company Elastic, told Nextgov/FCW. “Without an inclusive approach, public policy could run the risk of being not just ineffective but potentially damaging.”

In the absence of clear, comprehensive guidance regulating the use of AI across the federal government, agencies have resorted to ad hoc measures to collaborate with the private sector and research organizations on AI implementation and ethics.

The CIA has indicated that it’s exploring ways to utilize chatbots and generative AI, which may be able to assist agency officers in their daily functions and mission planning, according to Rosa Smothers, an executive at the security firm KnowBe4 and former CIA cyber threat analyst.

The CIA is “using a deliberative approach to AI” and “reaching out to academia and industry to evaluate the many AI-related tools and commercial products out there that could potentially support mission goals,” Smothers told Nextgov/FCW.

On Tuesday, the White House announced that it had secured commitments from eight major companies, including Adobe, IBM, Nvidia and Palantir, to advance responsible AI research and development.

The commitments — which follow similar pledges from seven other companies earlier this summer — include enhancing information sharing between the public and private sectors, investing in cybersecurity and insider threat safeguards, incentivizing third-party discovery and vulnerability reporting and adopting public reporting models for AI limitations and risks, among others. 

As Congress continues to explore new regulatory measures around AI, experts say there’s a fine line between implementing effective security rules and creating red tape that hinders innovation.

“It is essential to strike a balance between promoting innovation and ensuring societal safety of AI technologies,” said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists. “An overly stringent regulatory environment might stifle progress, while a lax one may lead to misuse or unforeseen negative consequences.”

The ideal set of regulations would ensure greater transparency in AI algorithmic development to further clarify how an AI system learns and produces output, Kaushik said. He added that new laws should mandate routine impact assessments prior to deploying AI systems in high-risk environments to avoid “catastrophic” consequences in the digital networks of critical infrastructure, national security and space-based infrastructure.
