Senate Majority Leader Charles Schumer, D-NY, on Monday reiterated his support for a new framework for the regulation of AI that focuses on making innovation a “North Star” for the United States’ approach to the technology.
“Even when companies are good and want to have some of the protections that we need, if their competitors aren’t doing it, they’re going to be under huge pressure not to do it themselves,” said Schumer. “That’s why Congress must join the AI revolution. The federal government — we have no choice.”
Speaking alongside IBM CEO Arvind Krishna at an event in downtown New York, the senator repeated his support for his SAFE Innovation Framework. The framework, introduced last month, addresses regulatory questions raised by AI related to competition, open-source technology, and federal financial incentives.
Schumer said he was particularly concerned with AI explainability — the idea that the technology must be able to articulate why it makes one decision and not another — which he called one of the most “difficult” technical issues in AI. “You want the system to spit back some kind of satisfying answer,” he remarked.
Stalled immigration reform has also exacerbated technology workforce challenges, Schumer added.
The Senate Majority Leader is now planning nine forums, to be held later this fall, that will focus on potential avenues for regulating the technology. The idea is to include members of private industry, but also skeptics and critics of the technology. These panels, called “Insight Forums,” will focus on issues including national security, privacy, high-risk applications and bias, and the implications of AI for the workforce.
Schumer played a critical role in passing the Chips and Science package last year — Krishna cited that legislation as a critical milestone for US tech competitiveness. Notably, IBM’s semiconductor business, along with several upstate New York fabs, could be among the major beneficiaries of that package.
Schumer’s comments also come as federal officials, along with Congress, weigh myriad approaches to regulating AI. There’s growing pressure on the US to catch up to the European Union, which recently approved a draft of its AI Act. At the same time, federal officials are also searching for ways to push US companies to the forefront of global AI technology development — particularly as China continues to invest in the technology, too.
As the quest to regulate the tech ramps up, AI experts, activists, and civil rights groups have continued to highlight the myriad harms that artificial intelligence can create or exacerbate, including misinformation, bias and discrimination, intellectual property issues, and data privacy and cybersecurity risks.
Amid calls to both accelerate and rein in AI development, tech companies have — unsurprisingly — advocated for their own preferred regulatory paradigms. IBM has extensively promoted a “precision regulation” approach to artificial intelligence, which would focus rules on particular tools and applications. The company has supported frameworks developed by agencies like NIST — and has opposed the notion of creating a new federal agency to focus on the technology.
“We also believe one must not try to regulate the actual algorithms — or what we call the underlying computer science — all that is going to make it go to a place where the regulations are not there,” said Krishna. “But you must regulate use cases because those are what drive the benefit and the harm that is there.”