security

Transcript: House Hearing on AI- Advancing Innovation in the … – Tech Policy Press


On Thursday, July 22, 2023 the US House of Representatives Committee on Science, Space and Technology hosted a hearing titled Artificial Intelligence: Advancing Innovation Towards the National Interest. Chaired by Rep. Frank Lucas (R-OK) with Ranking Member Zoe Lofgren (D-CA), the hearing featured testimony from:

  • Dr. Jason Matheny, President & CEO, RAND Corporation (Testimony)
  • Dr. Shahin Farshchi, General Partner, Lux Capital (Testimony)
  • Mr. Clement Delangue, Co-founder & CEO, HuggingFace (Testimony)
  • Dr. Rumman Chowdhury, Responsible AI Fellow, Harvard University (Testimony) 
  • Dr. Dewey Murdick, Executive Director, Center for Security and Emerging Technology Committe (Testimony)

What follows is a flash transcript- compare to video for exact quotations.

Rep. Frank Lucas (R-OK):

The committee will come to order without objection. The chair is authorized to declare recess of the committee at any time. Welcome to today’s hearing entitled Artificial Intelligence Advancing Innovation in the National Interest. I recognize myself for five minutes for an opening statement. Good morning and welcome to what I anticipate will be one of the first of multiple hearings on artificial intelligence that the Science, Space and Technology Committee will hold this Congress. As we’ve all seen AI applications like ChatGPT have given, taken the world by storm. The rapid pace of technological progress in this field, primarily driven by American researchers, technologists, and entrepreneurs, presents a generational opportunity for Congress. We must ensure the United States remains the leader in a technology that many experts believe is as transformative as the internet and electricity. The purpose of this hearing is to explore an important question, perhaps the most important question for Congress regarding AI.

How can we support innovation development in AI so that it advances our national interest? For starters, most of us can imagine agree that this is in our national interest to ensure cutting edge ai. AI research continues happening here in America and is based on our democratic values. Although the United States remains the country where the most sophisticated AI research is happening, this gap is narrowing. A recent study by Stanford University ranked universities by the number of AI papers they published. The study found that nine of the top 10 universities were based in China. Coming in at 10th was the only US institution, the Massachusetts Institute of Technology Chinese published papers received nearly the same percentage of citations as US researchers papers showing the gap in research quality is also diminishing. It is in our national interest to ensure the United States has a robust innovation pipeline that supports fundamental research all the way through to real world applications.

The country that leads in commercial and military applications will have a decisive advantage in global economic and geopolitical competition. The front lines of the war in the Ukraine are already demonstrating how AI is being applied to the 21st century warfare. Autonomous drones, fake images, audio used for propaganda and real time satellite imagery analysis are all a small taste of how AI is shaping today’s battlefields. However, while it’s critical that the US support advances in AI, these advances do not have to come at the expense of safety, security, fairness, or transparency. In fact, embedding our values in AI technology development is central to our economic competitiveness and national security. As members of Congress, our job is never to lose sight of the fact that our national interest ultimately lies with what is best for the American people. The science committee has and can continue to play a pivotal role in the service on this mission.

For starters, we can continue supporting the application of AI in advanced science and new economic opportunities. AI is already being used to solve fundamental problems in biology, chemistry, and physics. These advances have helped us develop novel therapeutics, design advanced semiconductors, forecast crop yields, saving countless amounts of time and money. The National Science Foundation’s AI Research Institutes, the Department of Energy’s world class supercomputers and the National Institutes of Standards and Technologies, Risk Management Framework and precision measurement expertise are all driving critical advances in this area. Pivotal to our national interest is ensuring these systems are safe and trustworthy. The committee understood that back in 2020 when it introduced the bipartisan National Artificial Intelligence Initiative Act of 2020. This legislation created a broad national strategy to accelerate investments in responsible AI research development and standards. It facilitated new public private partnerships to ensure the US leads the world in the development and use of responsible AI systems.

Our committee will continue to build off of this work to establish and promote technical standards for trustworthy ai. We are also exploring ways to mitigate risks caused by AI systems through research and development of technical solutions such as using all automation to detect AI generated media. As AI systems proliferate across the economy, we need to develop our workforce to meet changing skill requirements. Helping US workers augment their performance with AI will be a critical pillar in maintaining our economic competitiveness. And while the United States currently is the global leader in AI research development technology, our adversaries are catching up. The Chinese Communist Party is implementing AI industrial policy at a national scale, investing billions through state financed investment funds, designating national AI champions, and providing preferential tax treatment to grow AI startups. We cannot and should not try to copy China’s playbook, but we can maintain our leadership role in AI and we can ensure its development with our values of trustworthiness, fairness, and transparency. To do so, Congress needs to make strategic investments, build our workforce and establish proper safeguards without overregulation, but we cannot do it alone. We need the academic community, the private sector, and the open source community to help us figure out how to shape the future of this technology. I look forward to hearing the recommendations of our witnesses on how this committee can strengthen our nation’s leadership and artificial intelligence and make it beneficial and safe for all American citizens. I now recognize the Ranking Member, the gentlewoman from California for her statement.

Rep. Zoe Lofgren (D-CA):

Thank you. Thank you Chairman Lucas for holding today’s hearing, and I’d also like to welcome a very distinguished panel of witnesses. Artificial intelligence opens the door to really untold benefits for society, and I’m truly excited about its potential. However, AI could create risks including with respect to misinformation and discrimination. It will create risk to our nation’s cybersecurity in the near term, and there may be medium and long term risks to economic and national security. Some have even posited existential risks to the very nature of our society. We’re here today to learn more about the benefits and risks associated with artificial intelligence. This is a topic that has caught the attention of many lawmakers in both chambers across many committees. However, none of this is new to the science committee as the Chairman has pointed out. In 2020, members of this committee developed and enacted the National AI Initiative Act to advance research workforce development and standards for trusted ai.

The federal science agencies have since taken significant steps to implement this law, including notably NIST work on the AI risk management framework. However, we’re still in the early days of understanding how AI systems work and how to effectively govern them even as the technology itself continues to rapidly advance in both capabilities as well as applications. I do believe regulation of AI may be necessary, but I’m also keenly aware that we must strike a balance that allows for innovation and ensures that the US maintains leadership. While the contours of a regulatory framework are still being debated, it’s clear we will need a suite of tools. Some risks can be addressed by the laws and standards already on the books. It’s possible others may need new rules and norms. Even as this debate continues, Congress can act now to improve trust in AI systems and assure America’s continued leadership in ai.

At a minimum, we need to be investing in the research and workforce to help us develop the tools we need going forward. Let me just wrap up with one concrete challenge I’d like to address in this hearing. One is the intersection of AI and intellectual property. Whether addressing AI based inputs or outputs, it’s my sincere hope that the content creation community and AI platforms can advance their dialogue and arrive at a mutually agreeable solution. If not, I think we need to have a discussion on how the Congress should address this. Finally, research and infrastructure workforce challenges are also top of mind. One of the major barriers to developing an AI capable workforce and ensuring long-term US leadership is a lack of access to computing and training data for all but large companies and the most well-researched institutions. There are good ideas already underway at our agencies to address this challenge and I’d like to hear the panel’s input on what’s needed. In your view, it’s my hope that Congress can quickly move beyond the fact finding stage to focus on what this institution can realistically do to address the development and deployment of trustworthy ai. At this hearing, I hope we can discuss what the science committee should focus on. I look forward to today’s very important discussion with stakeholders from industry, academia and venture capital. And as a representative from Silicon Valley, I know how important private capital is today to the US R&D ecosystem. Thank you all for being with us and I yield back.

Rep. Frank Lucas (R-OK):

Ranking Member yields back. Let me introduce our witnesses for today’s hearing. Our first witness today is Dr. Jason Matheny, chair president and CEO of Rand Corporation. Prior to becoming CEO of Rand the doctor led the White House policy on technology and national security at the National Security Agency in the Office of Science and Technology Policy. He also served as director of the Intelligence at Advanced Research Projects activity, and I’d also like to congratulate him for being selected to serve on the selection committee for the Board of Trustees for the National Semiconductor Technology Center. Our next witness is Dr. Shahin Farshchi the general partner of the Lux Capital, one of Silicon Valley’s leading frontier science and technology investors. He invests in the intersection of artificial intelligence and science and is co-founded and invested in many companies that have gone on to raise billions of dollars.

Our third witness of the day is Dr. Clement Delangue, co-founder and CEO of Hugging Face, the leading platform for the open source AI community. It has raised over a hundred million dollars with Lux Capital leading their last financing round and counts over 10,000 companies and a hundred thousand developers as users. Next we turn to Dr. Rumman Chowdhury, responsible AI fellow at Harvard University, and she is a pioneer in the field of applied algorithmic ethics, which investigates creating technical solutions for trustworthy AI. Previously, she served as director of machine learning accountability at Twitter and founder of Parity, an enterprise algorithmic auditing company.

And our final witness is Dr. Dewey Murdick, the executive director of Georgetown Center for Security and Emerging Technology. He previously served as the chief analytics officer and deputy chief Scientist within the Department of Homeland Security and also co-founded an office in predictive intelligence at IARPA. Thank you all witnesses for being here today and I recognize Dr. Matheny for the first five minutes to present your testimony and overlook my phonetic weaknesses.

Dr. Jason Matheny:

<laugh>, No problem at all. Thanks so much, Chairman Lucas, Ranking Member Lofgren and members of the committee. Good morning and thank you for the opportunity to testify. As mentioned, I’m the president and CEO of Rand, a nonprofit and nonpartisan researcher organization and one of our priorities is to provide detailed policy analysis relevant to AI in the years ahead. We have many studies that we have underway relevant to AI. I’ll focus my comments today on how the federal government can advance AI in a beneficial and trustworthy manner for all Americans. Among a broad set of technologies, AI stands out both for its rate of progress and for its scope of applications. AI holds the potential to broadly transform entire industries, including ones that are critical to our future prosperity. As noted, the United States is currently the global leader in AI.

However, AI systems have security and safety vulnerabilities and a major AI related accident in the United States or a misuse could dissolve our lead. Much like nuclear accidents set back the acceptance of nuclear power in the United States. The United States can make safety a differentiator for our AI industry just as it was a differentiator for our early aviation and pharmaceutical industries. Government involvement and safety standards and testing led to safer products, which in turn led to consumer trust and market leadership. Today, government involvement can build consumer trust in AI that strengthens the US position as a market leader. And this is one reason why many AI firms are calling for government oversight to ensure that AI systems are safe and secure, it’s good for their business. I’ll highlight five actions that the federal government could take to advance trustworthy AI within the jurisdiction of this committee.

First is to invest in potential research moonshots for trustworthy AI, including generalizable approaches to evaluate the safety and security of AI systems before they’re deployed. Second fundamentals of designing agents that will persistently follow a set of values in all situations. And third, microelectronic controls embedded in AI chips to prevent the development of large models that lack safety and security safeguards. A second recommendation is to accelerate AI safety and security research and development through rapid high return on investment techniques such as prize challenges, prizes pay only for results and remove the costly barrier to entry for researchers who are writing applications, making them a cost effective way to pursue ambitious research goals while opening the field to non-traditional performers such as small businesses. A third policy option is to ensure that US AI efforts conduct risk assessments prior to the training of very large models as well as safety evaluations and red team tasks prior to the deployment of large models.

A fourth option is to ensure that the National Institute of Standards and Technology has the resources needed to continue applications of the NIST Risk Management Framework and fully participate in key international standards relevant to AI such as ISO SC 42. A fifth option is to prevent intentional or accidental misuse of advanced AI systems by requiring that companies report the development or distribution of very large AI clusters training runs and trained models such as those involving over 10 to the 26 operations. Second, include in federal contracts with cloud computing providers, requirements that they employ. Know your customer screening for all customers before training large AI models. And third, include in federal contracts with AI developers know your customer screening as well as security requirements to prevent the theft of large AI models. Thank you for the opportunity to testify and I look forward to your questions later.

Rep. Frank Lucas (R-OK):

Thank you. And I recognize Dr. Farshchi for five minutes to present his testimony.

Dr. Shahin Farshchi:

Thank you, Mr. Chairman. Chairman Lucas, Ranking Member Lofgren and members of the committee. My name is Dr. Shahin Farshchi and I’m a general partner at Lux Capital, a venture capital firm with 5 billion of assets under management. Lux specializes in building and funding tomorrow’s generational companies that are leveraging breakthroughs in science and engineering. I have helped create and fund companies pushing the state-of-the-art and semiconductors, rockets, satellites, driverless cars, robotics, and ai. From that perspective, there are two important considerations for the committee today. One, preserving competition in AI to ensure our country’s position as a global leader in the field. And two, driving federal resources to our most promising AI investigators. Before addressing directly how America can reinforce its dominant position in AI, it is important to appreciate how some of Lux portfolio companies are pushing the state of the art in the field. Hugging Face, who’s founder, Clem Delangue is a witness on this panel; Mosaic ML, a member of the Hugging Face community is helping individuals enterprises train, tune and run the most advanced AI models. Mosaic’s language models have exceeded the performance of open AI’s GPT-3. Unlike open AI mosaics models are made available to their customers entirely as opposed to through an application programming interface, thereby allowing customers to keep all of their data private. Mosaic built the most downloaded LLM in history, MPT-7B, which is a testament to the innovations coming into the open source from startups and researchers. Runway ML is bringing the power of generative AI to consumers to generate videos from simple text and images. Runway is inventing the future of creative tools with AI thereby re-imagining how we create so individuals can achieve the same expressive power of the most powerful Hollywood studios. These are just a few examples of lux companies advancing the state of the art in ai in large part because of the vital work of this committee to ensure America is developing its diverse talent, providing the private sector with helpful guidance to manage risk in democratizing access to computing resources that fuel AI research and the next generation of tools to advance the US national interest.

To continue America’s leadership, we need competitive markets to give entrepreneurs the opportunity to challenge even the largest dominant players. Unfortunately, there are steep barriers to entry for AI researchers and founders. The most advanced AI generative models cost more than a hundred million to train. If we do not provide open, fair and diverse access to computing resources, we could see a concentration of resources in a time of rapid change reminiscent of standard oil during the industrial revolution. I encourage this committee to continuous leadership in democratizing access to AI r and d by authorizing and funding the National AI research resource. This effort will help overcome the access divide and ensure that our country is benefiting from diverse perspectives that will build the future of AI technology and help guide its role in our society. I am particularly concerned that Google, Amazon and Microsoft, which are already using vast amounts of personal data to train their models, have also attracted a vast majority of investments in AI startups because of the need to access their vast computing resources to train AI models further entrenching their preexisting dominance in the market.

In fact, Google and Microsoft are investing heavily in AI startups under the condition that their invested dollars are spent solely on their own compute resources. One example is OpenAI’s partnership with Microsoft through efforts such as NAIRR, we hope that competitors at Google, Microsoft and Amazon will be empowered to offer compute resources to fledgling AI startups while perhaps even endowing compute resources directly to the startups and researchers as well. This will facilitate a more competitive environment that will be more conducive to our national dominance at the global stage. Furthermore, Congress must rebalance efforts towards providing resources with deep investment into top AI investigators. For example, the DOD has taken a unique approach by allocating funding to top investigators as opposed to the National Science Foundation, which tends to spread funding across a larger number of investigators when balance appropriately. Both approaches have value to the broader innovation ecosystem.

However, deep investment has driven discoveries at the frontier leading to the creation of great companies like OpenAI, whose dollars were initially funded, whose founders were initially funded by relatively large DOD grant dollars. Building on these successes is key to amer’s continued innovation, continued success in AI innovation. Thank you for the opportunity to share how Lux Capital is working with founders to advance AI in our national interest, bolster our national defense and security, strengthen our economic competitiveness and foster innovation right here in America. Lux is honored to play a role in this exciting technology at this pivotal moment. I look forward to your questions. Thank

Rep. Frank Lucas (R-OK):

You. And I recognize Mr. Delangue for five minutes for his testimony.

Clement Delangue:

Chairman Lucas Ranking Member Lofgren and members of the committee, thank you for the opportunity to discuss AI innovation with you. I deeply appreciate the work you are doing to advance and guide it in the us. My name is Cleman DeLong and I’m the co-founder and CEO of Hugging Face. I’m French, as you can hear from my accent and moved to the US 10 years ago, barely speaking English with my co-founders, Julian and Thomas. We started this company from scratch here in the US as a US startup, and we are proud to employ team members in 10 different US states today. I believe we could not have created this company anywhere else. I am leaving proof that the openness and culture of innovation in the US allows for such a story to happen. The reason I’m testifying today is not so much the size of our organization or the cuteness of our emoji name Hugging Face, and contrary to what you said, Chairman Lucas, I don’t hold a PhD like all the other witnesses but the reason I’m here today is because we enable 15,000 small companies, startups, nonprofits, public organizations and companies to build AI features and workflows collectively on our platform.

They have shared over 200,000 open models, 5,000 new ones just last week, 50,000 open data sets and a hundred thousand applications ranging from data anonymization for self-driving cars, speech recognition from visual lip movement for people with hearing disabilities, applications to detect gender and racial biases, translation tools in low resource languages to share information globally, not only with large language models and generative ai, but also with all sorts of machine learning algorithms with usually smaller, customized, specialized models in domains as diverse as social productivity platforms, finance, biology, chemistry and more. We are seeing firsthand that AI provides a unique opportunity for value creation, productivity, boosting and improving people’s lives potentially at the larger scale on higher velocity than the internet or software before. However, for this to happen across all companies and at the sufficient scale for the US to keep leading compared to other countries, I believe open science and open source are critical to incentivize and are extremely aligned with American values and interests.

First, it’s good to remember that most of today’s progress has been powered by open science and open source, like the attention is all you need. Paper, the BERT paper, the latent diffusion paper, and so many others, the same way without open source, PyTorch, transformers, diffusers, all invented here in the US, the US might not be the leading country for AI. Now, when we look towards the future, open science and open source distribute economic gains by enabling hundreds of thousands of small companies and startups to build with AI. It fosters innovation and fair competition between all thanks to ethical openness. It creates a safer path for development of the technology by giving civil society, nonprofits, academia and policy makers the capabilities they need to counterbalance the power of big private, of big private companies. Open science and open source, prevent black box systems, make companies more accountable and help solving today’s challenges like mitigating biases, reducing misinformation, promoting copyrights, and rewarding all stakeholders, including artists and content creators in the value creation process.

Our approach to ethical openness combines institutional policies such as documentation with model cards pioneered by our own Dr. Margaret Mitchell. Technical safeguards such as staged releases and community incentives like moderation and opt-in opt-out, data sets. There are many examples of safe AI thanks to openness like bloom, an open model that has been assessed by Stanford as the most compliant model with the EU AI Act or the research advancement in watermarking for AI content. Some of that you can only do with open models and open data sets. In conclusion, by embracing ethical AI development with a focus on open science and open source, I believe the US can start a new era of progress for all, amplify its worldwide leadership and give more opportunities to all that it like it gave to me. Thank you very much.

Rep. Frank Lucas (R-OK):

Absolutely, and thank you Mr. Delangue and I would note probably that there would be some of my colleagues who would note that your version of English might be more understandable than my Okie dialect, but setting that issue aside, I now recognize Dr. Chowdhury for five minutes for her testimony.

Dr. Rumman Chowdhury:

Thank you, Chairman Lucas, Ranking Member Lofgren and esteemed members of the committee. My name is Dr. Rumman Chowdhury and I’m an AI developer, data scientist and social scientist. For the past seven years, I’ve helped address some of the biggest problems in AI ethics, including holding leadership roles and responsible AI at Accenture, the largest tech consulting firm in the world, and at Twitter. Today, I’m a responsible AI fellow at Harvard University. I’m honored to provide testimony on trustworthy AI and innovation. Artificial intelligence is not inherently neutral, trustworthy, nor beneficial, concerted and directed effort is needed to ensure this technology is used appropriately. My career in responsible AI can be described by my commitment to one word governance. People often forget that governance is more than the law. Governance is a spectrum ranging from codes of conduct standards, open research and more. In order to remain competitive and innovative, the United States would benefit from a significant investment in all aspects of AI governance.

I would like to start by dispelling the myth that governance stifles innovation much to the contrary. I use the phrase brakes help you drive faster to explain this phenomenon. The ability to stop your car in dangerous situations is what enables us to feel comfortable driving at fast speeds. Governance is innovation. This holds true for the current wave of artificial intelligence. Recently, a leaked Google memo declared there is no moat. In other words, AI will be unstoppable as open source capabilities meet and surpass closed models. There is also a concern about the US remaining globally competitive. If we aren’t investing in AI development at all costs, this is simply untrue. Building the most robust AI industry isn’t just about processors and microchips. The real competitive advantage is trustworthiness. If there is one thing to take away from my testimony, it’s that the US government should invest in public accountability and transparency of AI systems.

In this testimony, I describe how I make the following four recommendations to ensure the US advances in innovation in the national interest. First, support for AI model access to enable independent research and audit. Second investment in and legal protections for red teaming and third party ethical hacking. Third, the development of a non-regulatory tech technology body to supplement existing US government oversight efforts. Fourth, participation in a global AI oversight. CEOs of the most powerful AI companies will tell you that they spend significant resources to build trustworthy ai. This is true. I was one of those people. My team and I held ourselves to the highest ethical standards and as, and my colleagues who remain in these roles still do so. Today, however, a well-developed ecosystem of governance also empowers individuals whose organizational missions are to inform and protect society. The DSA Article 40 creates this sort of access for Europeans.

Similarly, the UK government has announced Google DeepMind and open AI will allow model access. My first recommendation is the United States should match these efforts. Next, new laws are mandating third party algorithmic auditing. However, there is currently a workforce challenge in identifying sufficiently trained third party algorithmic investigators. Two things can fix this: first, funding for independent groups to conduct red teaming and adversarial auditing and second legal protection. So these individuals operating the public good are not silenced with litigation. With the support of the White House, I am part of a group designing the largest ever AI red teaming exercise. In collaboration with the largest open and closed source AI companies, we’ll provide access to thousands of individuals who will compete to identify how these models may produce harmful content. Red teaming is a process by which invited third party experts are given special permission access by AI companies to find flaws in their models.

Traditionally, these practices happen behind closed doors and public information sharing is at the company’s discretion. We want to open those closed doors. Our goals are to educate, address vulnerabilities, and importantly grow a new profession. Finally, I recommend investment in domestic and global government institutions in alignment with this third party robust ecosystem. A centralized body focused on responsible innovation could assist existing oversight by promoting interoperable licensing, conducting research to inform AI policy and sharing best practices and resources. Parallels and other governments include the UK Center for Data Ethics and Innovation of which I’m a board member, and the EU Center for algorithmic transparency. There’s also a sustained and increasing call for global governance of AI system among them, experts like myself, OpenAI CEO Sam Altman, and former New Zealand Prime Minister Jacinda Ardern. A global governance effort should develop empirically driven enforceable solutions for algorithmic accountability and promote global benefits of AI systems in some innovation in the national interest starts with good governance. By investing in and protecting this ecosystem, we will ensure AI technologies are beneficial to all. Thank you for your time.

Rep. Frank Lucas (R-OK):

Thank you, doctor. And I now recognize Dr. Murdick for five minutes to present his testimony.

Dr. Dewey Murdick:

Thank you, Chairman Lucas, Ranking Member Lofgren and everyone on the committee to be, have this opportunity to talk about how we can make AI better for our country. there are many actions Congress can do to support AI innovation, protect key technology for misuse and ensure customers or consumers are safe. I’d like to highlight three today. First, we need to get used to working with AI as a society and individually we need to learn what, what, when we can trust our AI teammates and when to question or ignore them. I think this takes a lot of training and time. Two, we need skilled people to build future AI systems and to increase AI literacy. Three, we need to keep a close eye on the policies that we do enact to may need to make sure that every policy is being action, is being monitored, and make sure it’s actually doing what we think it’s doing and update that as we need to.

This is especially true when we’re facing peer innovators, especially in a rapidly changing area like artificial intelligence. China is such a peer innovator, but we need to remember they’re not 10 feet tall and they have different priorities for their AI than we do. China’s AI leadership is evident through aggressive use of state power and substantial research investments, making it a peer innovator for us and our allies never far ahead and never far behind either. China focuses on how AI can assist military decision making and mass surveillance to help maintain societal control. This is very unlike the United States. Thankfully, managing that controlled means they’re not letting AI run around all willy-nilly. In fact, they deploying of, the deployment of large language models does not appear to be a priority for China’s leadership. Precisely for that reason, we should not let the fear of China surpassing the US deter oversight of the AI industry and AI technology.

Instead, the focus should be on developing methods that allow enforcement of AI risk and harm management and guiding the innovation advancement of AI technology. I’d like to return to my first three points and expand on them a little bit. Going back to my opening points. The first one was, we must get used to working with AI via effective human machine teaming, which is central to AI’s evolution in my opinion in the next decade. Understanding what an AI system can and cannot do and should and shouldn’t do and when to rely on them and when to avoid using them should guide our future innovation and also our training standards. One thing that keeps me up at night is when human partners trust machines when they shouldn’t, and there’s interesting examples of that. they fail to trust AI when they should or are manipulated by a system, and there’s some instances of that.

Also. We’ve witnessed rapid AI advancements and the convergence between AI and other sectors promises widespread innovation in areas from medical imaging to manufacturing. Therefore, fostering AI literacy across the population is critical for economic competitiveness, but also, and I think even more importantly, it is essential for democratic governance. We cannot engage in a meaningful societal, societal debate about AI if we don’t understand enough about it. This means an increasingly large fraction of the US citizens will encounter AI daily, so that’s the second point. We need skilled people working at all levels. We need innovators from technical and non-technical backgrounds. We need to attract and retain diverse talent from across our nation and internationally and separately from those who are building the AI systems, these future and current ones. We need comprehensive AI training for the general population, K through 12 curricula certifications. There’s a lot of good ideas there.

AI literacy is the central key though, so what else can we do? I think we can promote better decision making by gathering information now that we need to make decisions. For example, tracking AI harms via incident reporting is a good way to learn where things are breaking, learning how to request key model and training data for oversight to make sure it’s being used in important applications correctly. We don’t know how to do that. encouraging and developing third party auditing and ecosystem and the red teaming ecosystem. Excellent. If we are going to license AI software, which is a common proposal we hear, we’re probably going to need to update existing authorities for existing agencies, and we may need to create a new one, a new agency or organization. This new organization could check how AI is being used and overseen by existing agencies.

It could be the first to deal with problems directing those in need to the right solutions either in the government or private sector and fill gaps in sector specific agencies. My last point, and I see I’m going too long. We need to make sure our policies are monitored and effectively implemented. There’s really great ideas in the House and Senate on how to increase the analytic capacity to do that. I look forward to this discussion because I think this is a persistent issue that’s it’s just not going to go away and CSAT has and I have dedicated our professional lives to this, so thank you so much.

Rep. Frank Lucas (R-OK):

Thank you, doctor, and thank you to the entire panel for some very insightful opening comments. continuing with you Dr. Murdick. Making AI systems safer is not only a matter of regulations, but also requires technical advances in making the systems more reliable, transparent and trustworthy. It seems to me that the United States would be more likely than China to invest in these research areas given our democratic values. So Doctor, can you compare or expand on your comments earlier, can you compare how the Chinese Communist Party in the United States political values influence their research and development priorities for AI systems?

Dr. Dewey Murdick:

Sure. I think that the large language models, which is the obsession right now of a lot of the AI system provides a very interesting example of this. China’s very concerned about how it can destabilize their societal structure by having a system that they can’t control and might say things that would be offensive about it might bring up the TNM in Square or might associate the, the president with Winnie the Pooh or something, and that could be very destructive to their way of of their societal control. Because of this, they’re really limiting control. They’re passing regulations and laws that are very constraining of how that’s doing. So that is a difference between our societies and how we, what we view as acceptable. I do think the military control the command and control emphasis as well as the desire to maintain control through mass surveillance. If you look at their research portfolio, it most of where they’re leading is could be very well associated with those types of areas. So I think those are differences that are pretty significant. We can go on, but I’m gonna just pause there to make sure other people have opportunities.

Rep. Frank Lucas (R-OK):

Absolutely. Dr. Matheny, some advocate advocated for a broad approach to AI regulations such as restricting entire categories of AI systems in the United States. Many agencies already have the existing authorities to regulate the user cases of AI in their jurisdiction. For example, the Department of Transportation can perform set performance benchmarks that autonomous vehicles must meet to drive on US roads. What are your opinions regarding outcomes of user case driven approach to AI regulation versus an approach that places broad restrictions on AI development or deployment?

Dr. Jason Matheny:

Thank you, Mr. Chairman. I think that in many of the cases that we’re most concerned about related to risks of AI systems, especially these large foundation models, we may not know enough in advance to specify the use cases. and those are ones then where the kind of testing and red teaming that’s been described here is really essential. So having terms and conditions in federal contracts with compute providers actually might be one of the highest leverage points of governance is that we could require that models trained on infrastructure that is currently federally contracted involve red teaming and other evaluation before models are trained or deployed.

Rep. Frank Lucas (R-OK):

Mr. Delangue, in my opening statement, I highlighted the importance of ensuring that we continue to lead in AI research and development because American built systems are more likely to reflect democratic values, given how critical it is to ensure we maintain our leadership in AI. How do you recommend Congress ensure that we do not pass regulations that stifle innovation?

Clement Delangue:

I think a good point that you made earlier is that AI is so broad and the impact of AI could be so widespread across use cases, domains, sectors, is leads to the point that for regulation to be effective and not stiff stifle innovation at scale, you need to be you know, customized and focused on specific domains, use cases and sectors where there are more risks and then kind of like empower the whole ecosystem to, to keep growing and keep innovating. The parallel that I like to draw, draw to draw is with software, right? It’s such a broad applicable technology that the important thing is to regulate the final use cases and the specific domains of application of software, rather in software in general.

Rep. Frank Lucas (R-OK):

In my remaining moments Dr. Farshchi advances in civilian AI. Civilian R&D often helps progress defense technologies and advance national security. How have civilian R&D efforts in AI transformed in advances in defense applications, and how do you anticipate that relationship evolving over the next few years?

Dr. Shahin Farshchi:

I expect that relationship to evolve in a positive way. I expect it to, I expect there to be a further strengthening of the relationship between the private sector and dual use and government targeted products. thanks to the open source, there are many technologies that otherwise would have not been available, would it have to be, would’ve had to be reinvented are now made available to build on top of the venture capital community is very excited about funding companies that have dual use products and companies that sell to the US government. Palantir was an example where investors profited greatly from investing in a company that was targeted, that was targeting the US government as a customer. Same with SpaceX. And so it’s my expectation that this trend will continue, that more private dollars will go into funding companies that are selling into the US government and leveraging technologies that come out of the open source to build on top of to continue innovating in this sector.

Rep. Frank Lucas (R-OK):

Thank you. My time’s expired and I recognize the Ranking Member Mrs. Lofgren thank you Mr. Chairman, and thanks to our panelists. This is wonderful testimony and I’m glad this is the first of several hearings because there’s a lot to wrap our heads around. Dr. Chadri, as you know, large language models have basically vacuumed up all the information from the internet and there’s a dispute between copyright holders who feel they ought to be compensated. Others who feel it’s a fair use of the information, you know, it’s in court, they’ll probably decide it before Congress will will, but here’s a question I’ve had. What techniques are possible to identify copyrighted material in the training corpus of the large language models? Is it even possible to go back and identify protected material?

Dr. Rumman Chowdhury:

Thank you for the excellent question. it’s not easy. It is quite difficult. What we really need is protections for the individuals generating this artwork because they’re at risk of having not only their work stolen, but their entire livelihood taken from.

Rep. Zoe Lofgren (D-CA):

I understand, but the question is retroactively, right? Is it possible to identify?

Dr. Rumman Chowdhury:

It’s difficult to, but yes, it can. One can use digital imaging matching but it’s also important to think through, you know, what we are doing with the data, what it is being used for.

Rep. Zoe Lofgren (D-CA):

No, I understand that as well. Absolutely. Thank you. One of the things I’m interested in, you mentioned CFAA and that’s a barrier to people trying to do third party analysis. I had a bill actually after Aaron Swartz’s untimely and sad demise. I had a bill named after him to allow those who are making non-commercial use to actually do what Aaron was doing, and Jim Sensenbrenner was my co-sponsor since retired and we couldn’t get anywhere. There were large vested interests. Who opposed that? Do you think if we approached it just as those who register rather than be licensed with the government as non-commercial entities that are doing red shirting, whether that would be a viable model for what you’re suggesting?

Dr. Rumman Chowdhury:

I think so. I think that would be a great start. I do think there would need to be follow up to ensure that people are indeed using it for non-commercial purposes.

Rep. Zoe Lofgren (D-CA):

Correct. I’m interested in the whole issue of licensing versus registration. I’m mindful that the Congress really doesn’t know enough in many cases to create a, a licensing regime, and the technology is moving so fast that I fear we might make some mistakes or federal agencies might make some mistakes, but licensing, which is giving permission would be different than registration, which would allow for the capacity to oversee and prevent harm. What’s your thought on the two alternatives? Anyone who wants to speak?

Dr. Rumman Chowdhury:

I can speak to that. I think overly onerous licensing would actually prevent people from doing one-off experimental or fun exercises. What we have to understand is, you know, there is a particular scale of impact number of people being, you know, using a product that maybe would trigger licensing rather than saying, everybody needs to license. There are college students, high school kids who want to use these models and just do fun things and we should allow them to do it.

Rep. Zoe Lofgren (D-CA):

I, you know, I guess I have some qualms and other members may, you know, we’ve got large model ais and to some extent they are a black box. Even the creators don’t fully understand what they’re doing. And the idea that we would have a licensing regime I think is very daunting. as opposed to a registration regime where we might have the capacity for third parties to do auditing and the like, I’ll just lay that out there. I wanna ask anybody who can answer this question. When it comes to large language models, generative ai, the computing power necessary is so immense that we have ended up with basically three very large private sector entities who have the computing capacity to actually do that. Mr. Altman was here I think last month and opined when he met with us at the Aspen Institute breakfast, and it might not even be possible to catch up in terms of the pure computing power. We’ve had discussions here on whether the government should create the computing power to allow not only the private sector, but academics to be competitive. Is that even viable at this point? Whoever could answer that.

Dr. Shahin Farshchi:

I can take that real quick. I believe so. I think it is possible, bear in mind that the compute resources that were created by these three entities were initially meant for internal consumption. Correct. And they have been repurposed now for training AI in semiconductor development, which is the core of this technology. There is ultimately a trade off between narrow functionality and or breadth of functionality and performance at a single use case. And so if there was a decision made to build resources that were targeted at training large language models, for example, I think it would be possible to quickly catch up and build that resource the same way we built specific resources during World War II for a certain type of, of warfare warfare and then again during the Persian Gulf War. So I think it’s, we as a nation are capable of doing that.

Rep. Zoe Lofgren (D-CA):

I see. My time is expired. I thank you so much and my additional questions we’ll send to you after the hearing and I yield back.

Rep. Jay Obernolte (R-CA):

Gentleman yields back. I will recognize myself for five minutes for my questions, and I wanna thank you for the really fascinating testimony. Dr. Hanni, I’d like to start with you. You’d brought up the concept of trustworthy ai and I think that that’s an extremely important topic. I actually dislike the word trustworthy AI because it imparts to AI something that doesn’t have human qualities. It’s just a piece of software. I was interested, Dr. Murdick, when you said, sometimes human partners trust AI when they shouldn’t and fail to trust it when they should. And I think that that is a better way of expressing what we mean when we talk about ai. but this is an important conversation to have because we in Congress, as we contemplate establishing a regulatory framework for AI, we need to be explicit when we say we want it to be trustworthy.

You know, we can talk about efficacy or robustness or repeatability, but we need to be very specific when we talk about what we mean by trustworthy. It’s not helpful to use evocative terms. Like, well, AI has to share its values, which is something that’s in the framework of other countries’ approach to AI. Well, that’s great. You know, values, that’s a human term. What does that mean for AI to have values? So the question for you, doctor, is what, what do we mean when we say trustworthy AI? And, you know, in what context should us as, as lawmakers think about AI as trustworthy?

Dr. Jason Matheny:

Thank you. When we talk about trustworthiness of engineered systems, we usually mean, do they behave as predicted? Do they behave in a way that is safe, reliable, and robust given a variety of environments? So for example do we have trust in our seatbelt? Do we have trust in the anti-lock braking system of a car? Do we have trust in the accident avoidance system on an an airplane? So those are the kinds of properties that we want in our engineered systems: are they safe, are they reliable, are they robust?

Rep. Jay Obernolte (R-CA):

I would agree. And I think that we can put metrics on those things. I just don’t think that calling AI trustworthy is helpful because we’re already having this perceptual problem that people are thinking of it in an anthropomorphic way, and it isn’t. It is just software. And you know, we could talk about the intent when we deploy it. We can talk about the intent when we create it, of us as humans, but to impart those qualities to the software, I think is is misleading to people. Dr. Chowdhury, I want to continue the Ranking Member’s line of questioning which I thought was excellent on the on the, the intersection between copyright holders and, and content creators and the training of AI because I think that this is going to be a really critical issue for us to grapple with.

And I mean, here’s, here’s the problem. The, if, if we say, as some content creators have suggested that no copyrighted content can be used in the training of ai, you know, which from their point of view is, is completely the real reasonable thing to be saying. But if we do that, then we’re gonna wind up with AI that is fundamentally useless in a lot of different domains. And let me give you a specific example. Cause I’d like your thoughts on it. I mean, one trademarked term is right. The NFL would not like someone using the word Super Bowl. In a commercial sense, if you want a bar, you have to talk about, you know, the party to watch the big game, nudge, nudge, wink, wink, right? Which is from their point of view, completely reasonable. But if you prohibited the use of the word Super Bowl in training AI, you’d come up with a large language model that if you said, what time is the Super Bowl? You would’ve no idea what you were talking about. You know, it would lack the, you know, the context to be able to answer questions like that. So how do we navigate that space?

Dr. Rumman Chowdhury:

I think you’re bringing up an excellent point. I think these technologies are gonna push the upper limits of many of the laws we have, including protections for copyright. I don’t think there’s a good answer. I think this is what we are negotiating today. The answer will lie somewhere in the spectrum. There will be certain terms. I think a similar conversation happened about the term Taco Tuesday and the ability to use it widely, and it was actually decided you could use it widely. I think some of these will be addressed in a case by case basis, but more broadly, I think the thing to keep an eye on is whether or not somebody’s livelihood is being impacted. It’s not really about a word or picture. It is actually about whether someone is taken out of a job because of a model that’s being built.

Rep. Jay Obernolte (R-CA):

Hmm. I partially agree. You know, I think it is, it gets into a very dicey area when we talk about if someone’s job is being impacted, because AI is gonna be extremely economically disruptive. And our job as lawmakers is to make sure that disruption is largely positive, okay, for the median person in our society. But, you know, jobs will be impacted. We hope that most will be positive and not negative. I actually think, and I’m gonna run outta time here, but I, we have a large body of legal knowledge already on this topic around the concept of fair use. And I think that that really is the solution to this problem. There’ll be fair use of intellectual property and ai, and there’ll be things that are clearly are not fair use or infringing, and I think that we can use that as a foundation, but love to continue the discussion later.

Dr. Rumman Chowdhury:

Absolutely.

Rep. Jay Obernolte (R-CA):

Next we’ll recognize the gentleman from Oregon. Ms. Bonamici, you are recognized for five minutes.

Rep. Suzanne Bonamici (D-OR):

Thank you, Mr. Chairman, Ranking Member, thank you to the witnesses for your expertise. I acknowledge the tremendous potential of AI, but also the significant risks and concerns that I’ve heard about, including what we just talked about, potential job displacement, privacy concerns, ethical considerations, bias, which we’ve been talking about in this committee for years. Market dominance by large firms, and in the hands of scammers and fraudsters, a whole range of nefarious possibilities to mislead and deceive voters and consumers. also the data sets take up an enormous amount of energy to run. We know we acknowledge that. So we need responsible development with ethical guidelines to maximize the benefits and, and minimize the risks, of course. So, Dr. Chowdhury, as AI systems move forward, I remain concerned about the lack of diversity in the workforce. So could you mention how increasing diversity will help address bias?

Dr. Rumman Chowdhury:

What an excellent point. Thank you so much, Congresswoman. So first of all, we need a wide range of perspectives in order to understand the impact of artificial intelligence systems. I’ll give you a specific example. For my time at Twitter, we held the first algorithmic bias bounty. That meant we opened up a Twitter model for public scrutiny, and we learned things that my team of highly educated PhDs wouldn’t think of. For example, did you know if you put a single dot on a photo, you could change how the algorithm decided where to crop the photo? We didn’t know that somebody told us this. Did you know that algorithmic cropping tended to crop out people in camouflage because they blended in with their backgrounds. It did what camouflage was supposed to do. We didn’t know that. so we, we learn more when we bring more people in. So, you know, open more open access independent researcher funding, red teaming, et cetera, opening doors to people will be what makes our systems more robust.

Rep. Suzanne Bonamici (D-OR):

A absolutely appreciate that so much. And, and I wanna continue Dr. Chowdhury, I wanna talk about the ethics. and I expect that those in this room will all agree that ethical AI is important to align the systems with values, respect fundamental rights, contribute positively to the, to society while minimizing potential harms, and get, gets to this trustworthiness issue, which you mentioned and that, that we’ve been talking about. So, who defines what ethical is is, is there a universal definition? Does Congress have a role? Is this being defined by industry? I know there’s a bipartisan proposal for a blue ribbon commission to, to develop a strategy for regulating ai. Would this be something that they would handle or would near be involved? and, and also I’m gonna tell you the second part of this question and, and then let you respond. In your testimony, you talk about the ethical hackers and in your testimony explains the role that they play, but how can they help design and implement ethical systems? And how can policy differentiate between bad hackers and ethical hackers?

Dr. Rumman Chowdhury:

Both great questions. So first I wanna adjust, the first part of what you brought up is who defines ethics? And you know, fortunately, this is not a new problem with technology. We have grappled with this in the law for many, many years. So you know, I recognize that previously someone mentioned that we seem to think a lot of problems are new with technology. This is not a new problem. And usually what we do is we get at this by a diversity of opinions and input and also ensuring that our AI is reflective of our values. And we’ve articulated our democratic values, right? For the US, we have the Blueprint for an AI bill of Rights. We have the NIST AI Risk Management Framework. So we’ve actually, as a nation, sat down and done this. so to your second question on ethical hackers.

Ethical hackers are operating in the public good, and there is a very clear difference. So what an ethical hacker will do is, for example, identify a vulnerability in in some sort of a system. And often they actually go to the company first to say, Hey, can you fix this? So, but often these individuals are silenced with threats of litigation. So we need to do is actually have in, in increasing protections for these individuals who are operating the public good, have repositories where people can share this information with each other and also allow companies to be part of this process. For example, what might responsible disclosure look like? How can we make this not an adversarial environment where it’s the public versus companies, but the public as a resource for companies to improve what they’re doing above and beyond what they’re able to do themselves?

Rep. Suzanne Bonamici (D-OR):

I appreciate that. And I wanna follow up on the earlier point about yes, and I, I’m aware of the work that’s been done so far on the ethical standards. However, I’m just questioning whether this is something that, does it need to be put into law to regulation? Does everyone agree? And, and could this, is there hope that there could be some sort of universal standard?

Dr. Rumman Chowdhury:

I do not think a universal ethical standard is possible. We live in a society that reflects diversity of opinions, thought, and we need to respect that and encourage that. But how do we prevent, how do we create the safeguards and identify what is harmful in social media? We would think a lot about what is harmful content, toxic content. And all of this lives on a spectrum. And I think any governance that’s created has to respect that. Society changes, people change the words and terms we use to reflect things change, and our ethics will change as well. So creating a flexible framework by which we can create ethical guidelines to help people make smart decisions is a better way to go.

Rep. Suzanne Bonamici (D-OR):

And just to confirm, that would be voluntary, not mandatory. Oh, see, my time is expired. I must yield back. Thank, can you, could you answer that question, just yes or no?

Rep. Frank Lucas (R-OK):

Certainly.

Dr. Rumman Chowdhury:

Yes.

Rep. Suzanne Bonamici (D-OR):

Thank you. <laugh>,

Rep. Jay Obernolte (R-CA):

The gentleman yields back. We’ll go next to my colleague from California. Congressman Issa, you’re recognized for five minutes.

Rep. Darrell Issa (R-CA):

Thank you. I’ll continue right where she left off. Dr. Chowdhury, the last answer got me just perfect because so it’s gonna be voluntary. So who decides who puts that.in a place that affects the cropping?

Dr. Rumman Chowdhury:

I’m gonna continue my answer <laugh>. So it can be voluntary, but in certain cases, high in, in high risk cases and high impact.

Rep. Darrell Issa (R-CA):

We don’t do voluntary here at Congress. You know that <laugh>?

Dr. Rumman Chowdhury:

I do. But I also do recognize that the way that we have structured regulation in the United States is context specific. You, you know, financial authorities, regulate….

Rep. Darrell Issa (R-CA):

Well, let’s, let’s get a context, cuz I, and it’s for a couple of you, if you’ve, if you’ve got a PhD, you’re, you’re eligible to answer this question.

And if you’ve published a book, you’re eligible to answer this question. Now, the most knowledgeable person on AI up here at the dais that I know of is sitting in the chair right now. So, but she, he happened to say “fair use.” And of course, that gets my hackles up as the Chairman of the subcommittee that determines what fair use is. Now, how many of you went to college and studied from books that were published for the pure purpose of your, your reading them usually by the professor who wrote them? Raise your hand. Okay. Nothing has changed in academia. So is it fair use to not pay for that book, absorb the entire content and add it to your learning experience?

So then the question, and I’ll start with you because both Twitter time and academia time and so on today, everyone assumes that the learning process of AI is fair use. Is there any basis for it being fair use rather than a standard essential copyright one that must be licensed. But if you’re going to absorb every one of your published books and every one of the published books that gave you the education you have and call it fair use, are we in fact turning upside down the history of learning and fair use just because it’s a computer?

Dr. Rumman Chowdhury:

I think the difference here is there’s a difference between me borrowing a book from my friend, learning and my individual impact on society.

Rep. Darrell Issa (R-CA):

Oh, because it’s cumulative. Because you’ve borrowed everyone’s book and read everyone’s book.

Dr. Rumman Chowdhury:

Well, and because it’s at scale, I cannot impact hundreds of millions of people around the world the way these models can, the things I say will not change the shape of democracy.

Rep. Darrell Issa (R-CA):

So, so I, I’ll accept that. Anyone want to have a slightly different opinion on stealing 100% of all the works from, of all time both copyrighted and not copyrighted and calling it fair use and saying, because you took so much, because you stole so big that in fact, you’re fine. Anyone wanna defend that?

Dr. Rumman Chowdhury:

I’m sorry. I don’t think that at all. I think it’s wrong to do that. I think it’s wrong…

Rep. Darrell Issa (R-CA):

So what is one of the essential items that we must determine the rights of the information that goes in to be properly compensated, assuming it’s under copyright?

Dr. Rumman Chowdhury:

Yes.

Rep. Darrell Issa (R-CA):

You all agree with that? Okay. Put ’em in your computers and give us an idea of how we do it, because obviously this is something we’ve never grappled with. We’ve never grappled with universal copyright absorption. And hopefully since all of you do publish and do think about it, and the Rand Corporation has the dollars to help us please begin the process because it’s one of the areas that is so, it is going to emerge incredibly quickly. Now obviously we could switch from that to we’re talking about job losses and, and so on. One of my questions, of course is if for anyone who wants to ate this, if we put all of the information in and we turn on a computer and we give it the funding of the Chinese Communist Party and we say patent everything, do we in fact eliminate the ability to, or to independently patent something and not be bracketed by a massive series of patents that are all are in part produced by artificial intelligence? I’m not looking at 2001 a Space Odyssey or, or Terminator. I’m looking at destruction of the building blocks of intellectual property that have allowed for innovation for hundreds of years.

Most courageous, please.

Dr. Dewey Murdick:

Yeah. I don’t know why I’m turning the mic on at this moment, but I I think there’s a core for the, both two questions you asked. I think money actually matters. The reason that we haven’t litigated fair use fully is there hasn’t been much money made in these models that have been adjusting, incorporating all the copyrighted material.

Rep. Darrell Issa (R-CA):

But trust me, Google was sued when they were losing money too. Yeah,

Dr. Dewey Murdick:

<laugh>. But I do think that the fact that there’s a view that there’s a market will change the dynamics about how, because people will start saying, wait, you’re, you’re profiting off my content and there’ll be, so I do think money changes this a little bit. And I think and, and I, it goes to the second question too. I I think the pragmatics of money will change the dynamics. Anyway, it’s a fairly simple observation on this point.

Rep. Darrell Issa (R-CA):

Thank you. And I apologize for dominating on, on this, but I’m going to Nashville this weekend and meet with a whole bunch of songwriters that are scared stiff. So I, anyone that wants to follow up? yes. If the Chairman doesn’t mind.

Clement Delangue:

Yes. I think an interesting example some people have been describing Hugging Face as some sort of, kind of like a giant kind of library. And, and in a sense we accept that people can run books at the library because it contributes to kind of like public and global conflict progress. I think we can see the same thing there where if we kind of like give access to this content for, for open source, for open science, I think it should be accepted in our society because it contributes to public good. but when it’s for private commercial interest, then it should be approached differently. And actually in that sense, you know, open science and open source is some sort of a solution to this problem because it gives transparency. There’s no way to know what copyright content is used in black box systems, like most of the systems that we have to today in order to take a decision

Rep. Jay Obernolte (R-CA):

That that, that knocking says the rest. For the record, he’s been very kind. Thank you.

Clement Delangue:

Thank you.

Rep. Jay Obernolte (R-CA):

Gentleman yields back. We’ll go next to the gentlewoman from Michigan. Ms. Stevens, you recognize for five minutes.

Rep. Haley Stevens (D-MI):

And thank you Mr. Chair. I’m hoping I can also get the extra minute given that I am so excited about what we’re talking about here today. And I want to thank you all for your individual expertise and leadership on a topic that is transforming the very dialogue of society and the way we are going to function and do business. Yet again at the quarter 21st century mark, after we’ve lived through many technological revolutions already before us and, and to my colleague who mentioned some very interesting points, I’ll, I’ll just say we’re in a race right now. I’m also on the select committee on competitiveness with the Chinese Communist Party. And I come from Michigan, if you can’t tell from my accent and artificial intelligence is proliferating with autonomous vehicle technology and we’re either gonna have it in this country or we’re not.

That’s the deal, right? We, we, we can ask ourselves what happened to battery manufacturing and why we’re overly reliant on the 85% of battery manufacturing that takes place in China when we would like it to be Michigan manufacturers when we’d like it to be United States manufacturers. And we’re catching up. So invest in the technology. I mean, certainly our witness here from the venture capital company with 5 billion is gonna be making those decisions. But our federal government has gotta serve as a good steward and partner of the technology proliferation through the guardrails that Dr. Char’s talking about, or our lunch will be eaten. This is a reality. We’ve got one AV manufacturer that is wholly owned in the United States, that’s crews a subsidiary of General Motors, and they’ve gotta succeed and we’ve gotta pass the guardrails to enable them to, to make the cars. Cuz right now they can only do 2,500. So the question I wanted to ask is, cuz yesterday I sent a note, it’s a letter to the secretary of state Mr. Blinken and I asked about this conversation, this point that has come up multiple times in the testimony about how we are going to dialogue at the international level to put up these proper AI guardrails. And so, Dr. Chowdry, you’ve brought this up in your testimony, and what do you think are the biggest cross-national problems in need of global oversight with regard to artificial intelligence right now?

Dr. Rumman Chowdhury:

Thank you. What a wonderful question, Congresswoman. I don’t think every problem is a global problem. The important thing to ask ourselves is simply what, what is the climate change of AI? What are the problems that are so big that a given country or a given company can’t solve it themselves. These are things like information integrity, preserving democratic values and democratic institutions cs, a child sexual abuse material radicalization. And we do have organizations that have been set up that are extra governmental to address these kinds of problems. but there, there’s more. What we need is a rubric or a way of thinking through the biggest problems and how we’re gonna work on them in a multi-stakeholder fashion.

Rep. Haley Stevens (D-MI):

Right. And Dr. Murdick, could you share with which existing authorities you believe would be best to convene the global community to abide by responsible AI measures?

Dr. Dewey Murdick:

Yeah. Well, I’m, I’m not sure I can name treaties, chapter and verse treaties. Well, s=so I think just one point about the core goal, the US plus its allies is stronger than China. If you, by a lot of different measures, you look at, you know, everything from research production to technology to innovation to talent, space size, to companies to, you know, investment levels. I think it’s important that we’re stronger together. If we can work together, if we’re ever going to implement any kind of export controls effectively, we have to do them together as opposed to individually. So there’s a lot of venues, international multi-party bodies that work on a variety of things. There’s a lot of treaties. There’s plural lateral, there’s multilateral agreements. And I think we do have to work together to be able to implement these. Yeah, and I think any and all of them are relevant.

Rep. Haley Stevens (D-MI):

Well, and allow me to do, say Mr. Delangue. We have a great French American presence in my home state of Michigan, and we’d love to see you look at expanding to my state. We’ve got a lot of exciting technology happening. And as I mentioned, with autonomous vehicles, just to level set this, we’ve got CCP owned autonomous vehicle companies that are testing in San Francisco and all along the West coast. But we cannot test our technology in China. We cannot sell our technology, our autonomous vehicle technology in China. That’s an example of a guardrail. I love what you just said, Dr. Murdick, about working with our allies, working with open democratic societies to make, produce and innovate and win the future. Thank you so much for Mr. Chair. I yield back.

Rep. Jay Obernolte (R-CA):

Gentlewoman yields back. We’ll go next to the gentleman from Ohio, Mr. Miller, you’re recognized for five minutes.

Rep. Max Miller (R-OH):

Thank you Mr. Chairman, and Ranking Member Lofgren for holding this hearing today. Dr. Murdick, as you highlighted in your written testimony, AI is just not a field to be left to PhDs and engineers. We need many types of workers across the AI supply chain, from PhD computer scientists to semiconductor engineers to lab technicians. Can you describe the workforce requirements needed to support the entire AI ecosystem? And what role will individuals with technical expertise play?

Dr. Dewey Murdick:

No, this is a great question and I will try to be brief because it’s such a fascinating conversation. When you look at the AI workforce, it’s super easy to get fixated on your PhD folks. And I think that it’s a big mistake. One of the first applications that I started seeing discussed was a lawn, a landscaping business using the proprietor which had trouble writing. and they started using large language models to help communicate with customers to give them proposals when they couldn’t do it before. That proliferation, no, I don’t mean that in a proliferation way, sorry. That expansion of capabilities into and users that are much wider than your standard users is really the space we are living today. So in the deployment of any AI capability, you have your tech core tech team, you have your team that surrounds them or implements software engineers and other things like that.

You have your product managers. When I worked in Silicon Valley, the product managers were my favorite human beings because they could take a technical concept and connect it to reality in a way that a lot of people couldn’t. And then you have your commercial and marketing people, you have a variety of different technologies showing up at the intersection of energy manufacturing. And so I think this AI literacy is actually relevant because I think all of us are going to be part of this economy in some way or another. So the size of this community is growing and I think we just need to make sure that they’re trained well and they have that expertise to be able to implement implementations since they are respectful of the, the, the values that we’ve been discussing.

Rep. Max Miller (R-OH):

Thank you for that answer. I, I’m a big believer that technical education and CTE should be taught at a k through 12 level for every American to go ahead for an alternative career pathway. And I believe that the American dream has been distorted within this reality for younger generations of what they think that that is. It used to be, you know, a nice house, a white picket fence and to provide for your family and a future generation. Now it’s a little bit different and distorted in my view. So thank you very much for that answer. another question. What role will community and technical colleges play in AI workforce development in the short term? you know, more recently than we think down the road, what about your recommended two year programs are well-suited to recruit and trained students in AI related fields?

Dr. Shahin Farshchi:

I’m a product of the California Community College system, so I can comment on that. So I feel like the emphasis in the community college system right now, or at least what I experienced back in the late nineties was, was two things, was preparation for transfer into a four year institution and vocational skills to enter into an existing kind of technical workforce. And I feel like there’s an opportunity in the community college system, which is excellent, by the way, which is the reason why I’m sitting here today. to also focus on preparing an existing workforce instead of just being vocational into existing jobs, into jobs that are right over the horizon. So in the middle, in between the kind of get ready to become a mechanic, to get ready to transfer to uc, Berkeley, in between those two things, there are jobs that are going to be evolving as a result of ai, therefore upskilling you as an existing worker to these new jobs that will be emerging in the next couple years. So I think that in between kind of service to the community, through the community college system, I think would be extremely valuable to the community.

Rep. Max Miller (R-OH):

Dewey.

Dr. Dewey Murdick:

Just to add to that I think that the I spent after high school, 12 years in school, that’s a lot of time to grind away at a particular technology and concept as the world accelerates. I don’t think that that kind of investment is for everyone, and it never really was for everyone. But I think it needs to be more and more focused. And I think community colleges provide that opportunity to pick up skills. Maybe you graduated as an English major and now you’re like, wait, I’d literally like to do a data science and, and, and that kind of environment of quick training, picking up the skills, stacking up certifications so that you can actually use that is a perfect venue for community colleges to be able to execute rapid training. And that adaption adaptation that’s necessary in a very quick wording world working, wow, excuse me. A rapidly moving economy.

Dr. Rumman Chowdhury:

If I may add quickly, at our generative AI red teaming event, we’re working with, funded by the Knight Foundation, working with seed AI to bring hundreds of community college students from around America to hack these models. So I absolutely support your endeavors.

Rep. Max Miller (R-OH):

Nice. Well, thank you all very much. Thank you, Mr. Chairman. And I have a few more questions I’m gonna enter into the record. But thank you very much. This is a joy. I yield back.

Rep. Jay Obernolte (R-CA):

Gentlemen, yields back, we’ll hear next from the gentleman from New York. Mr. Bowman, you’re recognized for five minutes.

Rep. Jamaal Bowman (D-NY):

Thank you so much, Mr. Chairman, and thank you to all the witnesses. Hey y’all, I’m over here. Thank you to all the witnesses for being here today and for sharing your, your expertise and your testimony. So America has a series of challenges that I would describe as national security challenges. We have millions of children going to bed hungry every single day. We have gun violence as the number one killer of children in our country. We have crippling inequality in a racial and economic caste system that persists. We have issues with climate change. We have issues of children who go to school and historically marginalized and neglected communities not receiving the same education as children who go to school schools in wealthy or more suburban communities. My question is, and I’ll start with Dr. Murdick, can you speak to the risk that we face with people intentionally and unintentionally using AI to exacerbate these issues? What must be done to ensure that these critical areas are improved? Because to my mind, if we’re not using AI to better humanity, what the heck are we using it for? It’s not just about commercialization. It’s not just about military B might, it’s about our collective humanity and solving the crippling challenges that persist in our country. Before we even think about China or another foreign adversary.

Dr. Dewey Murdick:

I really appreciate your stepping back and saying, you know, as in governance, you have to look at the full priorities of what we’re trying to do. And just obsessing about a little gnat of a technology is not the right way of thinking. Looking at where integrates with our entire societal needs is really relevant. And I think there are a lot of benefits to AI that can help with some of those issues. However, your point about intentional misuse of technology to tear apart our society, I think both from an internal as well as external, there’s a, the attack surface, if you may, is, is higher. And being able to have tools that can make those attacks easier to operate or more cost effective is a very risk, big risk. So it’s often referred to in, in the category of disinformation misinformation about how this is used.

And it can be done in different modalities, it can be done in text, it can be done in images, it can be done in audio. and I think being able to figure out how to manifest this is actually a democratic process discussion, which is why I think that the AI literacy needs to be brought up. We have to be engaging in this. And because there’s a lot of creativity needed it’s really, really hard to detect fake text. It’s hard to know what was created by a system and what isn’t. There’s stories in the news where people thought they had a method and they tried to punish all their students because they thought they had figured out a tool to be able to determine whether or not they could detect it. It didn’t work so well. So the problem is, it’s, it’s gonna be requires our full stakeholder of all the people in the United States to be participatory, giving ideas, figuring out how to do this. Also, that awareness of images. And I think image and audio really is a problem because we’re not, we’re used to trusting everything we hear and we see everything we read, maybe not so much. so I think there’s a situational awareness that we need to bring up across our entire population to help. I can go on, but hopefully that’s helpful,

Dr. Rumman Chowdhury:

If I may.

Rep. Jamaal Bowman (D-NY):

Please, yes.

Dr. Rumman Chowdhury:

What you’re bringing up is a concept of algorithmic bias and why this is critically important to talk about today. So build on a point from earlier, this is where regulation comes in. We already have laws today that prohibit certain implementations and certain outcomes from happening independent of the technology or the solution being used. But in addition, we’re gonna have to identify where the gaps are that existing regulation is insufficient and create new ones. So a couple of a couple of reasons this happens. One is just a lack of diversity in creating these technologies. These technologies are gate kept, not just by education, but literally geographically. Gate kept. People who sit in California try to dictate to school districts in New York how their schools should be run, and that’s not how this should work. and second is techno solutionism. This idea that humanity is flawed and technology will save us all. And that is also incorrect. Humanity is something to be rejoiced in and something to be nurtured and kept. You know we need to make sure that people are having the best kinds of lives that they can mediated by technology. So algorithmic bias starts with data sets, it continues with implementation, and it continues with the lack of ability to understand and have model visibility. And it continues with a lack of good governance.

Rep. Jamaal Bowman (D-NY):

Thank you so much. I ran outta time. I have a few more questions. Mr. Chair cannot submit the questions for the record. Absolutely. I yield back. Thank you

Rep. Jay Obernolte (R-CA):

Gentlemen. Yields back. We’ll go next to the gentleman from Texas, Texas. Mr. Babin, you’re recognized for five minutes.

Rep. Brian Babin (R-TX):

Thank you, Mr. Chairman. Thank you witnesses for being here as well. Dr. Murdick would you, this question is directed at you by many metrics. China has caught up or surpassed the United States in research and commercial capabilities. As Chairman Lucas referenced in his opening statements, Chinese universities are publishing many more research papers than US institutions. And they receive nearly the same share of citations as US papers. How concerned are you with the pace of AI progress in China and what can we do about that?

Dr. Dewey Murdick:

Wonderful question. And I just a quick comment on methods of how we determine who’s leading research is an easy place to start. It’s an easy place to count. I think, you know, just want to say what is probably very obvious you know, patenting investments, talent numbers, job postings, all these other data sets are really important to take into account. Also, it’s really important to mention scale. I think in 2021, China’s population was four and a quarter times the US population there. So therefore, just on sheer statistics, you’re gonna see quantity numbers going to be outstripped by China. So that’s just some observations about the measures about how concerned I am, which I think is a very interesting question. My image of going forward, I’ve, I’ve been following China since for over 15, well over 15 years.

And it’s been written on the wall for a long time that they were going to be a peer innovator. We’ve known it was coming. So the metaphor that I come into is a kind of a grappling situation. China and the US and its allies will be grappling with its different values and different approaches for the foreseeable future. And when you’re in a grappling, you don’t freak out when someone, you know, puts an arm on a shoulder, you know how to respond. And it’s a give and take. And I believe that that metaphor kind of expresses how I see this going. So I’m not gonna freak out about them having a larger number. It’s, it’s, it’s, we’re gonna have to just figure out how to walk through this in a, which is why I mentioned the policy monitoring concept is whatever we do, we need to make sure it’s working and be able to adapt because they’re going to counter whatever we do at any given instance. Right.

Rep. Brian Babin (R-TX):

Okay. Thank you. the American Research Enterprise currently operates in a transparent and open manner with basic research being published without restrictions. How can we maintain a balance between ensuring the openness of basic AI research while also mitigating potential threats to national security that might arise from sharing such information. And this is for Dr. Mathi and Dr. Murdick as well. how, how can we how can we solve this problem?

Dr. Jason Matheny:

Thanks so much. I think that for the largest so-called foundation models where we’re just learning about how they can be misused, how they can be used either in developing cyber weapons, biological weapons, and massive disinformation attacks we should be really cautious. And so before publishing or open sourcing the largest models, I recommend that we require some amount of red teaming and evaluation in order to understand the risk scenarios before we unleash this thing into the world and we’re unable to take it back.

Rep. Brian Babin (R-TX):

I gotcha. One, one quick thing. I believe that all of us here today agree that it’s important for the United States to lead in technological advances, especially in the cutting edge field of a and I of ai. Unfortunately, I’ve seen instances where overregulation hinders our ability to compete globally thereby forcing growth overseas or allowing others to step in and lead. And I’ve also seen the private public partnerships and how successful they’ve been, they’ve put the US in the driver’s seat and look at our space industry as an example. So Dr. Murdick how do we make sure that we do not over-regulate, but rather innovate through the facilitation of bottom-up consensus-based standards rather than top-down regulations? And can you speak to the role NIST has played in enabling innovation, particularly for ai and how we can leverage that approach going forward in the rest of the time that we have?

Dr. Dewey Murdick:

The distributed innovation system within the US is extremely important. I think Congress has levers. I think one of the core tenets of this hearing was how do we figure out how to innovate going forward? And I think that there, we need all areas to be innovated in. And for example, compute HA has helped move us forward, data’s helped move us forward. talent helps us move and we need to make sure whatever investment we do looks at like balanced portfolio of all those to be able to move forward. In terms of like the space industry, I think it’s really exciting to see how we have led, but we have made policy actions that have damaged parts of our industry. For example, with the satellite industry, when we put in places, put in rules that we could not deal with China we actually, the, the size of the us market dropped and Japan and European countries developed a a r free satellite technology that allowed them to increase their market share. So I think we have to be very going back to that wrestling grappling concept. We have to be very cognizant of everything we do will quickly be adapted and tried to be used against us, and we have to be able to adjust our policies and monitor them. So I think that monitoring action of like, how well is this working is actually a core part of anything that Congress is gonna do going forward.

Rep. Jay Obernolte (R-CA):

Thank you.

Clement Delangue:

If I can add on, on the previous points about kind of like ensuring that open source and open science is safe. I think it’s important to recognize that when we keep things more secret, when we try to hurt open science, we actually slow down more the US than China, right? And I think that’s what’s been happening for the past, past few years, and it’s important to recognize that and, and make sure we keep our ecosystem open to help the United States.

Rep. Jay Obernolte (R-CA):

Merci. Thank you. Yield back, gentlemen. Yields back. We’ll hear next from my colleague from North Carolina, the gentleman Congresswoman Ross, you’re recognized for five minutes.

Rep. Deborah Ross (D-NC):

Thank you very much Mr. Chairman and Ranking Member Lofgren and to all of the witnesses for being here today. I will not go into intellectual property. I serve on the same committee with all of them. <laugh>. as we all know, artificial intelligence permeates all areas of business industry, the arts and influences decisions that we make in our daily lives. In my district, North Carolina State University launched the AI Academy in 2020 that will prepare up to 5,000 highly qualified AI professionals from across the nation through their workforce development program. The AI Academy is one of the labor department’s current 28 public private apprenticeship programs. And I look forward to hearing from you about that. I know we’ve talked a little bit about community colleges but my first question is about cybersecurity. I’m concerned that we’re not ready for AI enabled attacks. I mean, we’re not even ready for the attacks that we have already. and one of the best defenses against phishing emails is that they’re often poorly drafted with misspellings. but generative AI provides an easy solution for malicious actors. And so, Dr. Matheny and Dr. Murdick and then anybody else, but I do have another question, so be quick. Are current cybersecurity standards and guidelines equipped to be able to handle AI enabled cybersecurity and attacks? And what could the federal government do?

Dr. Jason Matheny:

Thank you for the question. we’re not equipped yet. And I think that this has been, I know a priority for DHS cisa to take this on. and there’s a lot of thoughtful work there. Looking at the impact of AI on cybersecurity, including advances in spear phishing, which can be made more cost effective through the use of these language models, but also through the use of these lang large language models to generate not human language, but computer code. So these code generation tools can be used to create offensive cyber weapons. and it’s possible that in the future, those cyber weapons could be quite capable and very cost effective and generated at scale, a scale that right now isn’t possible even for state programs. So I think that’s quite worrisome. AI can also enable stronger cyber defenses. And so figuring out how to invest in AI capabilities that will ultimately create an asymmetric advantage for defense over offense is an important research priority.

Rep. Deborah Ross (D-NC):

Okay. And Dr. Murdick, could you be very brief cuz I, my next question’s for you.

Dr. Dewey Murdick:

I think your sense of this being an important priority is actually right, because I think AI plus cybersecurity is probably one of the earliest spaces where we’re gonna see AI manifesting. And I’ll just agree with Jason, otherwise good points. We need to work on this.

Rep. Deborah Ross (D-NC):

Great, thank you. And Dr. Murdick to you, American leadership begins with a strong workforce, as we’ve discussed. one that nurtures both domestic talent and attracts the best global minds. Many international students come here to conduct innovative research in emerging technologies. And a report from your organization revealed that two thirds of graduate students in AI related programs are international students. and so although they wanna stay here are immigration laws keep them from staying here. I’ve done a lot of work with the so-called documented dreamers who came here with their parents and then at 21, have to self deport with all of the investment that we have made in their education. So can you discuss better ways to not just attract, but retain this talent, particularly in the AI space? And then if there’s time, anybody else?

Dr. Dewey Murdick:

So I will give one part of the points. I think we’ve clearly through our own surveys, seen that people want to stay in the us it’s one of our strongest strengths. The US attracting high quality talent is, is the thing that has driven a lot of our innovation. And being able to pull from the world’s best is really key. So we see this from China, we see this from all countries, and I think we need to very much invest in this and continue to invest in it. I think a lot of the inhibitors are bureaucratic. and I do think there’s a, for our high skilled talent base, we’ve seen China implement some really innovative, not China, Canada sorry Canada, yes. invest, invest in some really interesting ways of when someone comes to Canada and they meet their they give everybody in their family the opportunity to start working the same day that the person who was approved. That’s pretty amazing. And it really can make a big decision when you’re trying to decide whether to move to Canada or the US or some other place. And so I think there’s those kind of innovations are strongly within the Congress, your hands, and I think you can use them very effectively.

Rep. Deborah Ross (D-NC):

Thank you so much. And I yield back.

Rep. Jay Obernolte (R-CA):

Gentlewoman yields back. We’ll go next to my colleague from Georgia, Mr. McCormick here, recognized for five minutes.

Rep. Rich McCormick (R-GA):

Thank you, Mr. Chair. And thank you to the witnesses for your thoughtful answers. as members of co the House Committee on science space and technology, it is our responsibility to address the challenges and opportunities presented by this rapidly advancing technology. Artificial intelligence has potential to revolutionize our national security strategies and capabilities by harnessing AI’s power to analyze vast amounts of data. We can enhance early threat detection, intelligent intelligence, analytics, and decision making processes. However, we must proceed with caution, recognizing the ethical considerations, algorithmic biases and risks associated with AI deployment. As policymakers, we must strike a delicate balance between fostering innovation and protecting our national security interests. This requires investing in research and development to safeguard AI systems against adversarial attacks, ensuring, ensuring transparency and accountability and collaborating with international partners to address emerging challenges and establishing norms. By doing so, we can harness and transf the transformative potential of AI to strengthen our defense capabilities while safeguarding our values and security.

In fact, that whole statement was produced by AI at ChatGPT, which I thought was fun. and matter of fact, it probably could have said it a lot better than me too. that’s where we’re at right now. I think it’s, it’s kind of funny. I’m, I’m about to replace my entire legislative staff with ChatGPT and save a lot of money. Sorry, since they came up with that. and I say that tongue in cheek, but I thought you made a really Dr. Chowdhury. I thought you made a huge statement when you said, we, we are looking at AI like our savior and like it’s going to replace us. And, and I really thought it was great to say that you’re not going to save us. And, and also the government’s not gonna save us either, by the way, I wanna put that in.

Maybe that’s my little political statement. But it’s interesting, it is starting to replace some of us. I’m an ER physician and I’m watching radiologists being replaced. There’ll be in a supervisory capacity and you’re gonna see pathologists next. And eventually we just had a, a recent survey that says I’d rather interact with AI than a physician. Cuz they give me easy to understand answers and they’re nicer to me <laugh>, and they’re not in a rush. So I think it’s just a matter of time, right? We’re, we’re seeing that happen right now. And in the defense industry, I’m worried because I’ve seen those documentaries like Terminator and Star Trek and, and I understand that, that we have the potential for systems to actually start outthinking us and actually out reacting and, and has a potential to damage us. So I’m curious what kind of guardrails should we put in place to keep us not only employed, but actually safe as far as I’ll, Dr. Chowdhury, since you had the most insightful comment so far, I’ll let you start.

Dr. Rumman Chowdhury:

I appreciate that. So I think you’re talking about two things here. One is job displacement, which has happened with many technologies in the past. And I think some of my colleagues on this panel have raised the need for upskilling, retraining, easier access, lower barriers to entry investment in jobs programs. frankly, these are almost standard, but now we need to apply them to technology. And the second part is really parsing out what we mean by risks and harms. So some of the Terminator type narratives, we are very, very far from these things, but there are harms that we are seeing today. people being denied medical care or insurance because of an incorrectly specified model. We have examples of people of color being denied kidney transplants because an algorithm is incorrectly determining whether or not they should be on the kidney transplant list. These are harms that are happening today. We can’t even figure out how to fix these now. We don’t have enough to fix these now. We are so far away from a Terminator. And really what we should be focusing on are the harms and technology we’re building today.

Rep. Rich McCormick (R-GA):

Great, thanks. One thing I, I’m not even sure who to ask this for, but I’ll let the panel decide who’s the best person. I have a real sincere concern that a hundred percent of our AI chip right now are con, are, are producing Taiwan with the posturing we have from China, talking about they are going to take over Taiwan. Having an adversarial country owning a hundred percent of the production of the most influential technology in world history deeply concerns me. Now, I know AMD has some processes that they want to produce AI outside of Taiwan in the next couple years. But in the meantime, what do we do? I I feel like all of our eggs in one basket, please.

Dr. Jason Matheny:

Yeah. Rand’s done a a lot of work on this topic because it’s been quite concerning about what the economic impacts would be and the national security impacts if a Taiwan invasion occurred and disrupted the microelectronic supply chain, given that we’re dependent for 90% of our advanced microelectronics, the most leading edge chips coming from Taiwan. and it would be an economic catastrophe. So among the policy options that we have to deal with that is to deter an invasion of Taiwan by ensuring that Taiwan has the self-defense as needed for a so-called porcupine defense.

Rep. Rich McCormick (R-GA):

Great tie-in to my ask. Thank you very much. And with that, I yield.

Rep. Frank Lucas (R-OK):

Gentlemen yields back. The chair recognizes the gentleman from Illinois, Mr. Sorenson for five minutes.

Rep. Eric Sorensen (D-IL):

I’d like to begin by thanking Chairman Lucas and Ranking Member Lofgren for convening this hearing and for our witnesses today. Building AI systems responsibly is vital. I believe Congress is in instrumental in providing the guardrails with which AI can be implemented. However, there’s much that the private sector must do as well. Dr. Chowdury how should we audit AI systems to ensure that they do not provide unintended output? How do we ensure that companies that create the algorithms are using accurate data sets when training the system? And, and also I’ve met with Amazon, Microsoft and Google, and each have different stances on the need for guardrails within their companies. one representative of one of these companies says it’s actually Congress’s job. So do we need a system like the European Union’s AI Act which includes the concept of triangles of risk and, and how can Congress learn from the Europeans concept?

Dr. Rumman Chowdhury:

You were speaking to my heart. This is what I have been spending the past seven years of my life doing. Interesting to note that I’m not a computer scientist by background, I’m a social scientist. So I fundamentally think about impact on people in society. I would actually direct you to think about the way the Digital Services Act is constructed, where they’ve actually defined five areas. These are including things like impact on elections and democracy, impact on things like mental health you know, and more like socially developed issues and developing audits around them for companies that are, that have the at scale level of impact. Also I am an audit consultant for the Digital Services Act, helping them construct construct these methodologies. I will add that it is not easy and, and not only for traditional machine learning and AI models, but in generative AI, now we have a whole other ballgame. So really what the investment is needed here is in this workforce development of critical thinkers and algorithmic auditors.

Rep. Eric Sorensen (D-IL):

Great. Follow up question. You know, when we go to a search engine or we have a video conference do you believe that there should be safeguards so that consumers understand if the data that they’re receiving is organic and believable and most important trustworthy?

Dr. Rumman Chowdhury:

Yes.

Rep. Eric Sorensen (D-IL):

Thank you. Dr. Farshchi, first of all, I’d like to say that I was a little nervous when I first had my my electric car, my Chevy Volt drive itself down the road when I came up to that first curve. And it did it itself, all right. But I had a steering wheel to be able to control it, to take over. Nine days ago, I met with local union leaders from our bus systems in Bloomington and Rockford, Illinois. And they had concerns about autonomous bus systems. What does it mean for the safety of those that are on their buses or the safety of those folks that are standing on a sidewalk? AVs are complicated technology with extensive programming. How do we ensure safety if we’re going to put children on school buses, how do we protect our autonomous vehicles from those that might want to hack them And and cause cause problems?

Dr. Shahin Farshchi:

So the technology is still very early, but there are lessons that we’ve learned in the past that have helped create a certain level of safety for complicated machines and tech technologies. And I think the best lesson there is in aviation. So in aviation, there were certain guidelines that had to be met for an airplane to be able to fly. And now aviation’s become one of the most safe forms of transportation. I feel like we’re still kind of in the, you know, early part of the 20th century. On the AV side, we still don’t know exactly what the failure modes of these machines are. There’s still rapid iteration going on. They’re perhaps 98% safe, but they have to be 99.999% safe. And once the technologies are matured, then when we identify what the weak ports of these, what what the weak points of these technologies are, we will come up with a framework to audit the technologies to make sure that, okay, if we ma if, if a, if a vehicle passes these tests, just like what we have, for example, with Crashworthiness, if a vehicle passes these tests, we have this level of certainty that this vehicle will be safe in most circumstances.

And I think that safety should be higher than that of a human driver. So if a human driver, cuz humans make errors, if a human driver makes an error that leads to an accident every a hundred thousand miles, then these vehicles have to be audited to be safe for at least a hundred thousand miles. And once they pass that threshold, then we should consider them for use. So I think we’re still two steps away from being able to identify that regulatory framework to be able to audit these machines to be sure that they are safer than human drivers.

Rep. Eric Sorensen (D-IL):

Thank you very much for your testimony today. And I yield back the bounds of my time.

Rep. Frank Lucas (R-OK):

The chair recognizes the gentle lady from North Carolina, Ms. Foshe, for five minutes.

Rep. Valerie Foushee (D-NC):

Thank you Mr. Chairman, and thank you all for your testimonies here today. my first question is to Dr. Chow. And I think that I was really struck while reading your testimony by a particular line where you say it is important to dispel the myth that governance stifles innovation, that during your career that you have found, and you talked about this a little earlier, governance is innovation. Can you please elaborate on that notion that governance is innovation in the context of ai and how how can the federal government and certainly Congress fulfill our oversight duties to get this right.

Dr. Rumman Chowdhury:

Thank you. And I would love to elaborate on this and, and here’s where we make a distinction between what is research and what ends up being applied. In my time helping Fortune 500 companies implement AI systems, 99% of the time, the reason they did not implement it is because they could not reliably, reliably trust or predict what the outcome was. They did not know if they were operating within the appropriate safeguards, if they would be breaking the law, what it even meant to possibly break the law. So they just decided to not do it. So by creating the appropriate sort of standards, guidelines, laws, and this is all within the remit of Congress, you actually help innovation by showing companies where the safe area to innovate and play is.

Rep. Valerie Foushee (D-NC):

Would anyone else like to speak to this? Okay. So also with the rise of artificial intelligence, researchers are increasingly in need of computing and data resources at scale. Unfortunately, there is a general lack of access to AI resources outside of the large tech companies. this has resulted in a steep resource divide between the top tech companies and smaller AI firms. Dr. Farshchi and perhaps Mr. Delangue, can you speak to the needs of enabling innovation for smaller companies to access computer computing rather and data resources?

Clement Delangue:

Yes, I can start there. I think it’s important to realize that it’s not just compute. I think there’s been a very interesting study by the center of security of an emerging technology saying that, you know, for researchers to thrive and for companies to thrive, they need not only compute, but good data and system access for, for ai. so when we look at kind like providing the resources for all companies to thrive with ai, I think we need to be kind of like looking at all of that compute people and system access and provide more, more transparency. what we’ve seen on the platform, on the hi and face platform is that when you give these tools to, to company they thrive and they manage to use AI responsibly, right? we’ve seen everywhere from kind of like marble cutters, small businesses in the US using AI to detect material print shop u using image generation to generate image and, and generate kind of like t-shirts or phone covers using, using ai. So I think by like enabling and giving access to this resources more broadly, we can enable all companies to make use of AI.

Dr. Shahin Farshchi:

Congresswoman, just to add I think the federal government has a role to play here. There’s a bit of ambiguity right now. Going back to the conversation earlier about licensing versus registration, it’s still the wild West companies don’t know they need to train their models. They don’t know exactly if they would be doing something illegal or doing something unethical by using certain data sets. If the government were to make data available and instruct companies and researchers to use that data source and remove this ambiguity, I think that would be, to Clem’s point, I think that would be a huge step forward for these researchers.

Rep. Valerie Foushee (D-NC):

Thank you, Mr. Chairman. I yield back.

Rep. Frank Lucas (R-OK):

Gentle lady yields back the chair now recognizes the gentle lady from Colorado, Ms. Caraveo for five minutes.

Rep. Yadira Caraveo (D-CO):

thank you Chairman Lucas and to you in Ranking Member Lofgren for this very exciting hearing on ai, which I think we’ve all been following closely. And as a doctor as well, as Dr. McCormick mentioned earlier, I’m following advancements that are happening in the healthcare sector. new forms of biomedical technology and data are transforming the way that we research, diagnose, and treat health issues. And there’s a lot of enthusiasm using AI to combine different forms of data such as those from genomics, healthcare records and medical imaging to provide clinicians with critical insights to make clinical decisions for individual patients. so Dr. Matheny can you describe what you think the technological benchmarks are needed to realize benefits and especially in the medical field in the next 2, 5, 10 years and beyond?

Dr. Jason Matheny:

Thank you for the question. I think applications to medical diagnostics, to personalized medicine to home healthcare are profound. and I I think that establishing the sort of evaluative framework that we’re going to need in order to assess the added value of these technologies over current practice is one that’s really important for the fda, for Medicare, for Medicaid to be able to evaluate what what advantages these technologies bring. I really liked the expression that with, with Briggs, we can drive faster, and this is the, the history of technology innovation in the United States is that we have we have had government testing and evaluation as a propellant for innovation because when consumers contrast that technologies are safe, they use them more. and that allowed the United States to lead in, in pharmaceuticals and in other health technologies of that framework.

Rep. Yadira Caraveo (D-CO):

I’m gonna kind of expand on your, your notion of trust. Dr. McCormick also mentioned an article in the New York Times talking about how doctors are using chat G P T to communicate with their patients and looking for a more empathetic way of, of responding to them. as somebody that trained in medicine, that at first made me chuckle, then it made me worried a little bit. And then as I read the article, I, I, I must admit that the answers that it came up with were actually quite good and in keeping with the some of the training around communication that we use. But I think if I was a patient and realized that my doctor was not answering something themselves but using a, a separate technology to create more empathy and communication between us, I’d, I’d also be a little bit concerned.

So looking at that, and then also realizing in the article that it I, in, in my thoughts, something like AI would be very good at the radiology aspect, right? Of making sure that it was reading mammograms faster, for example. but it also pointed out that it was leading to a lot of false positives that were leading to unnecessary tests. So how in the future, as patients, as providers can we ensure confidence in the different kind of applications that AI and, and these technologies are going to be used for in medicine? And that’s really for anybody starting with Dr. Matheny.

Dr. Jason Matheny:

I do think that more testing in which the comparison arm is current practice so that we can evaluate, does it have the same precision recall as existing, for example, diagnostic practices, is gonna be essential. I think society as a whole is gonna be working to adapt on what we sort of recognize as being sincere communication when more and more communication will be generated by language models. but I do think that one benefit of some of these models is that they don’t get tired, they don’t get stressed. So if you’re a care provider and you’re working under sleep deprivation and time pressure, you might not be giving as much explanation as is really needed for a patient. So there could be real benefits there.

Dr. Dewey Murdick:

Just to add to that, I think this concept of human machine teaming and trust is really important here. You had the fortune of working with the armed forces for, for part of my career, and I watched the training culture that they had. They knew, they trained well with their teammates and they knew what their colleague was going to do and not do because they had spent that time training. And I think there’s a lot of interesting measures and metrics for designing systems that maximize that trust. So imagine a doctor or actually part of their training, learning how to work with these systems, spending those hours so that they know when they can rely on it, when they can’t rely on it. I just think it’s an important framing for the future of AI to make sure that it’s part of the team and is, is calibrated correctly, that trust and we can use it appropriately.

Rep. Yadira Caraveo (D-CO):

Really appreciate your answers. In particular, I think Dr. Matheny, I thought about burnout as well as we’ve come in particular out of a pandemic where we’re going to be facing more physician shortage, what are the applications to, to read over patient charts, to compile information to write long notes that doctors don’t necessarily need to spend hours on. And so very, very much appreciate those comments.

Rep. Frank Lucas (R-OK):

The member’s time is expired. The chair now recognizes the gentle lady from Pennsylvania, Ms. Lee for five minutes.

Rep. Summer Lee (D-PA):

Thank you Mr. Chairman. And thank you to our witnesses for your time and expertise on this critical area of technological innovation. advancing innovation in western Pennsylvania have become synonymous when we view the technological developments we’re providing to the nation. For example, Pittsburgh is America’s hub for autonomous vehicles research and development evidenced by the numerous self-driving vehicles you’ll find on our streets. Carnegie Mellon University in my district brought home 20 million from NSF to develop AI tools, specifically to address the design and utilization of ethical human-centric AI tools that will help improve disaster response and aid public health. As an environmental justice advocate, I’ve seen how ethical AI has been used to monitor, predict, and combat environmental conditions that are, that are ravaged by corporate polluters. AI can mean new possibilities for clean air and water and livable communities. We know that smart, smart city initiatives, innovations that leverage AI and data analytics will help to improve the quality of life for citizens and enhance sustainability.

However, despite the numerous possibilities for AI integration into every faucet of the American economy and everyday life, there also exists serious concerns that I don’t take lightly as some of my colleagues have mentioned this morning. We’re really discussing privacy rights, the ethical implications of AI technology, the continuing war against disinformation and misinformation. As a proud representative from Western Pennsylvania, I’d be remiss to not discuss the implications AI will have on our labor and our workforce. AI is exciting, sure, but we must exercise caution to ensure that we provide access and opportunities for skills training and education to every single worker so they’re not left behind. throughout and by this revolution. Last week, I introduced an amendment that encapsulates my legitimate concerns on the disparate impacts these technologies have on people that look like me. I would like to commend Chairman Lucas and this committee for their express commitment to ensure that the advancement of AI technology in our society does not result in black folks and otherwise marginalized communities being used as sacrificial lambs. We can all agree. Striking a balance between harnessing the benefits of AI and addressing its challenges is crucial to ensuring AI truly has a positive impact. Striking that balance begins here, of course. So Mr. Delangue, what are the untapped potentials of AI that could substantially improve combining environmental injustices and daily living standards of ordinary citizens.

Clement Delangue:

So first, I really appreciate your, your points. I think one of the important things that we need to realize today is that AI needs to be more inclusive to, to everyone more transparent. something that we’ve seen, for example, is that by sharing models, data sets, you actually allow everyone to be able to discuss them, contribute to them, report biases that maybe the initial model builders wouldn’t, wouldn’t see. So I think it’s really, really important to, to invest a lot in more equity, more inclusiveness for, for ai. to your second question on the environment, I’ve been really interested in all the research that has been done around carbon emission from AI, because that’s a very important problem that we’ll have to deal with in the future. so for example, there’s this model called Bloom that has been developed by big science that Hugging Face participated in, that published actually actually the carbon emissions generated by the training of the model. And that’s something that also I think we should incentivize more mm-hmm. <affirmative> in order to see the potential impact on carbon emissions of AI.

Rep. Summer Lee (D-PA):

Thank you. And I agree with your first point, obviously. Just last year, Shudu, a black AI generated model was featured in campaigns for Bowman, for Elise, and even Vogue, the creator said the model was inspired by human models like Naomi Campbell and Alex Weck. We know the fashion industry has long been discriminatory towards black women and other women of color. My question, Dr. Chowdury, what needs to be done to ensure that AI technologies are not taking away work from black folks in industries that are already white dominated?

Dr. Rumman Chowdhury:

I think investing in retraining programs and compensation programs for individuals whose jobs will be taken away is key in critical here. We can’t just leave people at the whims of massive co companies that don’t care or don’t think or don’t even know anything about them. Mm-hmm. <affirmative>,

Rep. Summer Lee (D-PA):

Do you have an opinion or an idea on how we can create safeguards to establish the rights of individuals over the use of their image, their voice, and their likeness in AI technologies?

Dr. Rumman Chowdhury:

I think this is something that Congress should take very seriously and think through. but I do, for example, myself, I am concerned about my image being on the internet and being part of training data or, you know, an image being generated that looks something like me, but maybe isn’t in, isn’t me, or more seriously the ability to create deep fakes that look like people. So we need these protections for all individuals. And in particular, I wanna say that women and people of color are the primary targets of deep fake generated photos.

Rep. Summer Lee (D-PA):

Thank you. I could go on and on, but that is my time. So I yield back, Mr. Chairman.

Rep. Frank Lucas (R-OK):

Gentle lady’s expired. Time is expired. The gentleman from California, Mr. Lu, recognized for five minutes.

Rep. Jamaal Bowman (D-NY):

Thank you Chairman Lucas thank you for holding this important hearing, and it’s been very informative. Generalized large language models are incredibly expensive to create and to operate. There was an article earlier this month in the Washington Post, the title of it was AI Chatbots Lose Money every time you Use Them. And it goes on to say, the cost of operating the systems is so high, the companies aren’t deploying their best versions to the public. Estimates are that OpenAI lost over 500 million last year. So my first question is to Dr. Farshchi, do you think OpenAI can be profitable?

Dr. Shahin Farshchi:

So I am not close to OpenAI, so I don’t know what their commercialization plans are to the comments that were made earlier regarding the productivity of ai. I believe AI can eventually become productive and enhance human output and create a net positive economic impact on society. But then as it relates to OpenAI.

Rep. Jamaal Bowman (D-NY):

I’m not asking that.

Dr. Shahin Farshchi:

I don’t have an answer for you, unfortunately.

Rep. Jamaal Bowman (D-NY):

Do any of these large language language models, could they actually commercially be profitable given how much it costs to develop them and how much it costs to operate them on a daily basis.

Clement Delangue:

So I think it’s important to realize that large language models are just like a small portion of what AI is. What we’re seeing as in terms of like usage from, you know, companies is more the use of smaller, more specialized, customized models that are both kind of like cheaper to use and more environmentally friendly. and that’s, that’s what I see the future of AI is more, more than one large language model to rule them. All right. An example of that is when you build a customer chatbot for bank, you don’t need the model to tell you about the meaning of life. So you can use a smaller, more specialized model that is going to be able to answer your question. Right.

Rep. Jamaal Bowman (D-NY):

Thank you. So my next question is for you. I’m a recovering computer science major. I, on balance, generally support open source. Some of your testimony was about having America continue to be a leader in this area. I’m just curious how America is a leader if it’s all open source or, or a pure competitor simply copy it.

Clement Delangue:

So you look at the past, I think the reason why America is the leader today is because of open science and open source. Yeah. If there was an open science and open source, the US probably wouldn’t be the leader. Yeah. And actually, I think actually in the past few years, the fact that it got less open is leading to, you know, the leadership of the US diminishing.

Rep. Jamaal Bowman (D-NY):

We don’t, we don’t, we don’t quite have open science. Right. We have an entire patent system where you can’t copy that science.

Clement Delangue:

I would argue that most of today’s progress is powered by the open research papers that have been published in the past that I mentioned. Like the attention is all you need paper. And actually most of the commercial companies today exploiting AI are actually based on these papers. Right. The ChatGPT, T is Transformers. It’s the famous open…

Rep. Jamaal Bowman (D-NY):

No, I got that. I’m fine with research and developing it. So I’m being open. so I’ll just give you a story. Last year I talked to a CEO of my district who’s creating a new chip that’s faster. He’s an immigrant. And at some point I said, how fast is this chip? And he says, 50,000 times faster. I was like, whoa. And it occurred to me that he never would’ve gone to Moscow. But no one wakes up and says, I wanna go to Moscow. No one really wakes up and says, I wanna go to Beijing. Where if you say something bad about President Xi Jinping or you get too powerful, they kidnap you and reeducate you. But they come to the United States, not just because we have talent and resources, because we have a whole vast set of intellectual property laws that we enforce and we have the rule of law, and we don’t let people copy this.

So I’m just, I’m just so interested in how you can have America be the leader and then have, I mean, we just sort of say to all these AI companies, just make everything open source. I don’t even know how they monetize that. And if people can just copy it, it just, it’s, it is sort of interesting to me. So would like to hear more about that later. I do wanna move to another topic and this is to Jason, you mentioned in one of your recommendations, additional funding for NIST to make sure they have the capacity to do their AI risk framework. So I assume you think it is a good framework? I think it’s a good framework. I looked through it and, and thought about it quite a bit. It’s pretty generalized. So you don’t, it’s not that prescriptive. So any company could actually adopt it. What is your view of just forcing companies to think about AI by, let’s say anytime the federal government contracts with someone? We require them to go through the Nest AI framework.

Dr. Jason Matheny:

Thank you. I do think that we need an approach in which the safety and reliability of systems is assessed before they’re broadly deployed. And right now we don’t have such a system. and there are a few ways for the federal government to help. One is not only requiring that for direct federal contractors, but also making it a term and condition that any compute provider that has a federal contract would be required to place those conditions on any company that’s using its computing infrastructure. So, sort of similar to the common rule in which a federal contract with a research organization requires then that any organization that they’re working with even if it’s not on a federal contract follows the common rules say for human subjects research, we could require the same for safety testing for ai. Thank you. I yield back.

Rep. Frank Lucas (R-OK):

Gentlemen yields back. Seeing no other members present, I want to thank the witnesses for your valuable, in some ways exceptional testimony. And the members, of course, for their questions. The record will remain open for 10 days for additional comments and written questions from members. This hearing is adjourned.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.