
States Should Take a Light-Touch, Free-Market Approach to AI …



On Wednesday, November 15, 2023, I was invited to testify at the third hearing of the Wisconsin Assembly Speaker’s Task Force on Artificial Intelligence on the campus of the University of Wisconsin–Stout.

In my testimony, I encouraged legislators to pursue free market policies that support private sector innovation in AI, expand economic opportunity for Wisconsinites and all Americans, and avoid the hysteria over AI “existential threats to humanity” that is being used to justify draconian new regulations. Instead of following the Biden Administration’s example and wrapping AI tools in more red tape, states should turn to existing regulations and statutory authority to protect consumers and mitigate harms.

You can read my full remarks below:

————

Chairman Gustafson, Vice Chair Doyle, and Members of the Task Force:

My name is Jake Morabito, and I serve as the Director of the Communications and Technology Task Force at the American Legislative Exchange Council. Thank you for the opportunity to speak before the panel today, and I hope you find this testimony valuable as you consider how to proceed on potential artificial intelligence legislation in the State of Wisconsin.

ALEC is a Virginia-based, nonprofit, nonpartisan membership organization of state legislators from across the country dedicated to discussing and developing model public policies rooted in the Jeffersonian traditions of limited government, free markets, and federalism.

Members of ALEC’s Communications and Technology Task Force include legislative leaders and policy experts who believe in the proven light-touch regulatory approach to emerging technology that solidified the United States as the clear global leader in these sectors. ALEC legislators lead the charge in their home states to address today’s pressing technology issues with pragmatic policies that grow the digital economy and expand economic opportunity for all, extend broadband internet access in rural areas through free market solutions, and encourage the next wave of tech innovation to happen on American shores. ALEC members have played a critical role in the governance of new artificial intelligence technology and will continue to do so in the years ahead.

In the short time since ChatGPT and generative AI tools first took the world by storm almost exactly one year ago, the AI policy conversation has taken several twists and turns. We now seem to be at a critical juncture as Congress, the states, and international governments determine what—if anything—the regulatory response should look like as AI technology continues to develop.

First and foremost, policymakers should keep in mind that artificial intelligence is fundamentally a tool, one that computer scientists continue to develop and change every day. Our nation’s world-class technology entrepreneurs and academic researchers have been working for decades on the thoughtful implementation of AI, algorithms, and process automation. In the early 2010s, voice-activated virtual assistants like Amazon’s Alexa and Apple’s Siri served as a precursor to today’s generative AI chatbots. But depending on how you define it, artificial intelligence and algorithms have underpinned virtually all modern software applications for many years, from navigation apps and Zoom video calls to online marketplaces and even the filtering tools that protect our email inboxes and mobile devices from spam and cyberattacks.

Consumers have benefited greatly from this incredible innovation in the software and technology ecosystem, which has produced a vibrant American startup culture that encourages innovators and risk-takers to try new ideas in the free market. It’s not difficult to imagine how, if properly channeled, generative AI products will ultimately put supercharged digital assistants at everyone’s disposal.


A new wave of tech entrepreneurs—many choosing to set up shop here in the United States—are now developing a suite of highly accessible generative AI products that anyone can use and that require no coding experience. These powerful new tools will help workers complete tasks, support small business owners, and empower individuals across sectors in ways we could only have dreamed of a decade ago. Some of the leading AI companies are familiar household names like Google and Microsoft, while newcomers like Midjourney, Anthropic, and of course OpenAI have emerged as leaders in this space.

Good actors are already hard at work using these groundbreaking innovations in AI and data science to benefit society and local communities around the world, accomplishing previously unthinkable breakthroughs in health care, diagnostics, and medical research. The advent of AI-guided precision agriculture will allow farmers to increase their crop yields while using less water and fewer pesticides. And first responders are adopting the latest developments in AI and emerging tech to save lives, fight forest fires, and respond to natural disasters.

I am personally excited about the opportunity generative AI brings to fundamentally transform customer service and constituent service, allowing businesses and government to respond to customers more quickly and effectively. One example I like to cite is the dread of rebooking delayed or canceled flights. Instead of leaving customers on hold for hours after a severe weather event disrupts travel plans across the country, an intelligent chatbot trained on internal company data, previous successful call logs, and predictive analytics could rebook the easiest ticket requests in seconds, while automatically identifying more difficult cases and escalating them to the proper customer service agent the first time, rather than bouncing customers between departments.
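As a minimal sketch of that triage-and-escalate pattern, consider the short Python example below. It is purely illustrative: every name, field, and threshold is hypothetical, and the simple scoring function merely stands in for the predictive model a real carrier would train on its own call logs and rebooking data.

from dataclasses import dataclass

# Illustrative sketch only: a real system would replace score_complexity()
# with a model trained on internal company data and past call logs.

@dataclass
class RebookingRequest:
    passenger: str
    party_size: int
    has_connections: bool
    needs_special_assistance: bool
    seats_on_later_flights: int

def score_complexity(req: RebookingRequest) -> float:
    """Crude stand-in for predictive analytics: higher means harder."""
    score = 0.2 * (req.party_size - 1)
    score += 0.3 if req.has_connections else 0.0
    score += 0.4 if req.needs_special_assistance else 0.0
    score += 0.5 if req.seats_on_later_flights == 0 else 0.0
    return score

def triage(req: RebookingRequest, threshold: float = 0.5) -> str:
    """Auto-rebook easy requests; send hard ones straight to the right agent."""
    if req.seats_on_later_flights > 0 and score_complexity(req) < threshold:
        return f"Rebooked {req.passenger} on the next available flight."
    # Escalate once, to the proper queue, instead of bouncing the customer
    # between departments.
    return f"Escalated {req.passenger} to a specialist rebooking agent."

print(triage(RebookingRequest("A. Traveler", 1, False, False, 3)))
print(triage(RebookingRequest("B. Family", 4, True, True, 0)))

The point of the pattern is the single, correct handoff: simple requests resolve automatically, while complicated ones reach a qualified human on the first try.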

Similarly, AI could dramatically improve government services, allowing agencies to better serve Wisconsinites. Many state agencies still depend on outdated IT systems, a problem exposed during the COVID-19 pandemic when many legacy networks were either crushed by demand or failed entirely. AI could help government agencies do more work, more accurately, with fewer resources, reducing the footprint of government while actually serving the public better. Strategically modernizing government IT in this way will improve employee efficiency, save taxpayer dollars on maintenance costs for legacy systems, and reduce processing times for constituent services.

Naturally, in addition to these positive use cases, malicious actors will also attempt to leverage advancements in AI to commit crimes, launch increasingly sophisticated cyberattacks, distribute phishing scams, and perpetrate fraud. Many stakeholders across government and industry have expressed a wide range of concerns about the use of AI, from how U.S. copyright and patent law applies to AI-generated content, to combating a growing epidemic of deepfakes, to more general concerns about how AI will transform education and the workforce.

In response, some are already advocating the swift adoption of strict controls and restrictions on the use of AI technology in the name of AI safety. Proposals include a new regime of mandatory licensing for developers and operators of AI large language models; a tiered system of “high-,” “medium-,” and “low-risk” AI use cases with mandatory government risk assessments; and, at the extreme end of the spectrum, outright bans on advanced AI in certain sectors, with some even calling for nationalizing the compute resources, advanced chips, and hardware components necessary to build high-powered AI models. Others have called for a moratorium or ban on the development of AI tools more capable than OpenAI’s GPT-4 large language model.

Striking the proper balance between AI doomerism and AI utopianism is the critical task for lawmakers in 2024 and beyond. As Thomas Sowell famously said: “There are no solutions, only trade-offs.” Overly broad regulations and restrictions on AI will only tie up America’s software industry in regulatory red tape in the name of safety and security, raising the barrier to entry for new companies hoping to enter the AI marketplace or integrate AI tools.


To preserve the benefits of AI while mitigating the risks of harm, it is therefore critical for lawmakers to avoid overly strict regulations on artificial intelligence technology itself in this early phase, to welcome private sector experimentation that pioneers the next wave of technological marvels, and to allow consumers and the marketplace to decide which products are winners.

Where it is absolutely necessary for government to step in, regulation should be narrowly tailored and focused on specific harmful conduct, not on the technology itself. In many cases, the federal government, states, and localities already have sufficient laws on the books designed to address these concerns in a technology-neutral way. Leveraging existing laws is preferable to enacting duplicative statutes or creating brand-new regulatory agencies when existing agencies and the judicial system are already well suited to adjudicate AI-related disputes.

To this end, as the Task Force considers how to proceed on potential AI legislation in Wisconsin, I respectfully submit the following three recommendations for a free-market, limited-government approach to artificial intelligence that will position your State and our nation as a whole for success.

First, as an early due-diligence step before considering any new regulations targeted at artificial intelligence, lawmakers should inventory the State’s existing statutory and regulatory authority to address potential harms caused by AI. Contrary to popular belief, AI is not an unregulated “Wild Wild West,” and states in particular already have ample existing authority to prevent fraud, address discrimination, protect civil rights, stop cybercrime, ensure strong consumer protection, and address privacy violations.

Depending on how you count it, somewhere between 100 and 200 AI-related bills have been introduced at the state level this year alone. At the federal level, Members of Congress have demonstrated a keen interest in acting quickly on AI, with Senate Majority Leader Chuck Schumer of New York launching a series of “AI Insight Forums” to inform a future legislative framework. And just a few weeks ago, President Biden signed a massive 100+ page Executive Order launching a whole-of-government effort to ensure safety, security, and trust in AI.

There are some bright spots in this Executive Order, including efforts to support educators applying AI tools in the classroom and language highlighting the need to attract and retain the skilled AI workforce required for the U.S. to remain competitive. However, I am concerned by the Administration’s overall approach of expanding government and departing from its previous strategy of voluntary commitments made in partnership with private sector AI leaders.

Absent direct legislative action from Congress, President Biden invoked the Defense Production Act to impose new mandatory safety testing and reporting requirements on AI companies developing large language models (also referred to as “foundation models”). If the U.S. government determines there is a risk to national security, economic security, or national public health, companies must seek the federal government’s permission, submit safety test results, and provide “other critical information.”

While many currently expect that only the largest LLM developers will be affected by these provisions, depending on how “AI” and “security risk” are defined, this could lay the groundwork for a more intrusive reporting or licensing regime covering common AI tools and any computer software that uses AI or algorithms. This scattershot approach creates more regulatory red tape and makes compliance more difficult for startups. Similarly, the Department of Commerce’s forthcoming guidance on AI content authentication and watermarking could prove problematic, or simply unhelpful to consumers, depending on its final implementation.


There absolutely is a role for government in the AI conversation, and several states like Wisconsin are doing the right thing by launching select committees, task forces, or commissions to study AI and bring legislators up to speed on this fast-moving issue. However, the federal government’s pessimistic view of AI as a technology to be strictly controlled and heavily regulated—copying and pasting the European Union’s approach to technology policy—will necessarily have an adverse effect on innovation.

In a recent annual ranking of the world’s largest tech companies, Forbes found that 81 were based in the United States (including 7 of the top 10), far more than in any other nation. Only one European company made the top 20 on Forbes’ list. The U.S. is therefore well positioned to continue leading on AI and emerging tech into the future, but only if policymakers follow the proven roadmap of light-touch regulation and embrace free market solutions.

Second, where there are genuine gaps in the law, lawmakers must narrowly tailor any legislative or regulatory remedies, with sufficient guardrails to prevent government overreach and protect innovation. As a specific example, ALEC members are currently considering proposed model legislation addressing AI as it pertains to child sexual abuse material (CSAM) and non-consensual intimate deepfake images; both proposals update criminal and civil statutes already in effect. ALEC members have also adopted a model resolution on artificial intelligence, supporting a permissionless-innovation approach to AI and recognizing that the free market is best equipped to advance innovation, mitigate harms, and ensure robust market competition.

And finally, don’t let emotional appeals to “existential threats to humanity” and unfounded science-fiction references guide your state’s AI policy. AI is not sentient. AI is not alive. Today’s AI tools are not so-called artificial general intelligence (AGI). And perhaps most importantly, AI tools still make plenty of mistakes and “hallucinate” facts. AI will therefore work better as a tool to augment humans, not replace them entirely.

Fortunately, some in the AI community are starting to more vocally push back against this narrative, including Stanford University professor and AI thought leader Andrew Ng. As Ng puts it: “Overhyped fears about AI leading to human extinction are causing real harm […] Hype about harm is also being used to promote bad regulation worldwide, such as requiring licensing of large models, which will crush open-source and stifle innovation.”

While dystopian novels and feature films starring a rogue AI may make for good entertainment, they are a poor substitute for a nuanced and logical analysis of the issues as they stand today. Take care to ground any regulatory proposals in the facts, and do not let hypothetical fears determine your state’s future.

Thank you again for the opportunity to present this afternoon, and I welcome any questions.

 

Sincerely,

Jake Morabito

Director, Communications and Technology Task Force

American Legislative Exchange Council (ALEC)



