On Tuesday, September 19, 2023, the US Senate Intelligence Committee held a hearing on Advancing Intelligence in the Era of Artificial Intelligence: Addressing the National Security Implications of AI, chaired by Sen. Mark Warner (D-VA).
Witnesses included:
- Dr. Benjamin Jensen, Senior Fellow, CSIS and Professor, Marine Corps University School of Advanced Warfighting (written testimony)
- Dr. Jeffrey Ding, Assistant Professor of Political Science, George Washington University (written testimony)
- Dr. Yann LeCun, Vice President and Chief AI Scientist, Meta Platforms, and Silver Professor of Computer Science and Data Science at New York University (written testimony)
What follows is a lightly edited transcript of the discussion.
Sen. Mark Warner (D-VA):
Dr. Yann LeCun, Chief AI Scientist at Meta and, as I’ve learned, one of the real pioneers of machine learning. Dr. LeCun, really great to have you. Dr. Benjamin Jensen, who is a senior fellow at the Center for Strategic and International Studies and a professor at the Marine Corps University School of Advanced Warfighting. Welcome, Dr. Jensen. And Dr. Jeffrey Ding, who is a professor of political science at my alma mater, George Washington University, and author of the influential ChinAI newsletter on China’s AI landscape. I also want to thank all of my colleagues who’ve been interested in this, but particularly Senator Rounds and Senator Heinrich, who have been working with Leader Schumer on a series of other AI forums, closed and open, on in many ways the opportunity and risk of this technology that has kind of captured everyone’s attention. AI is obviously not new for this committee. The agencies that we oversee have been some of the most innovative developers and avid adopters of advanced machine learning capabilities, working with large language models and computer vision systems long before those terms entered the public vocabulary. The ability to sift through and make sense of enormous amounts of data has been a hallmark of the American intelligence community since its inception, and the use of data science and advanced computation has been one of the core competencies of the IC for the last half century.
Our committee has been engaged on all of those topics for as long as I’ve been on this committee. What has dramatically changed, however, are the potential social, political and national security implications of this technology, driven in large part by the proliferation of generative models that are both publicly accessible and incredibly capable. Due to a combination of unprecedented scale and breakthroughs in training methods, rapid advancements in this field have the potential to unlock enormous innovation and public benefit in areas as diverse as drug discovery, creative arts and software programming. But as Congress evaluates the scope and significance of those transformations, we must equally grapple with the disruptions, ethical dilemmas and potential dangers of this technology. Both in the wider Senate and through a series of roundtables I’ve hosted over the last several months, we as a body are seeking to rise to that risk. Candidly, we were not able to do so on social media, and I think our shared sense is that we can’t repeat that here.
As we will discuss in today’s hearing, the proliferation of these technologies has dramatically lowered the barrier to entry for foreign governments to apply these tools to their own military and intelligence domains. The public release of technical details, from trained model weights to codebases of highly capable models, is a boost to foreign governments just as it is to startups, university researchers and hobbyists. Just as the United States intelligence community has benefited from AI innovation for signal processing, sensing, machine translation and more, so too should we now anticipate that a wider set of foreign governments will be able to harness these tools. Many of them were developed and actually released by US companies, but foreign governments will, candidly, take those products for their own military and intelligence uses. Our witnesses are well positioned to describe the current posture of our nation’s most strategic rival, the People’s Republic of China, as it pertains to AI.
I look forward to hearing from our experts where leading PRC research labs and technology vendors are in their efforts to build cutting-edge AI models, development tools and innovation ecosystems. In today’s hearing, this committee will focus on maintaining the US intelligence community’s edge, including how the IC can better adapt to, and even holistically adopt, these technologies. I hope today’s discussion can better identify some of the organizational, contracting and technical barriers to achieving these objectives. I’m also eager to hear where these tools currently fall short of some of the most lofty claims about their capabilities, something Dr. LeCun and I have talked about. For instance, the propensity of even the most advanced language models to hallucinate raises serious questions about their fitness in mission-critical and other sensitive areas of the intelligence domain. Mistakes can impact our nation’s security, the privacy of Americans and, to be sure, the clarity of pivotal foreign policy discussions as well.
Generative models can improve cybersecurity, helping programmers identify coding errors and contributing towards safer coding practices. But with that potential upside, there’s also a downside, since the same models can just as readily assist malicious actors. I hope this hearing will explore the ways in which generative models alter the cyber landscape, lowering the barrier to entry for formerly second-tier cyber powers, and how in the cyber domain AI can advance the capabilities of more advanced state actors. There is a lot to discuss. Now, as the leading body in Congress in tracking disinformation, market manipulation and election influence efforts by our nation’s adversaries, our committee is also deeply interested in the ways in which AI expands and exacerbates the threat of foreign malign influence. The ability of foreign actors to generate hyperrealistic images, audio and videos will undeniably make it harder for Americans to navigate our ever more complex, fraught and fast-paced media environment.
We must also contend with the ways in which bad actors will use these tools to undermine trust in markets, public institutions and public health systems. These tools will greatly challenge our society’s ability to agree on baseline facts and our already impaired ability to develop consensus. The last several years have amply demonstrated the ways in which the speed, scale and excitement associated with new technologies have frequently obscured the shortcomings of their creators in anticipating the harmful effects of their use. I hope we will also queue up a discussion of how the US can best harness and govern these technologies to avoid the same mistakes we made in failing to foresee vulnerabilities in other global-scale technologies like social media. To that end, I hope we’ll also touch upon how other countries, both allies and rivals, are coping with the potential disruptions, risks and economic dislocations of AI technologies through their own regulatory proposals.
And I hope we hear from Dr. Ding about what the PRC is actually doing in this field. AI capabilities, I think we all know, hold enormous potential. However, we must make sure that we think about that potential gain, the upside, but, where appropriate, put appropriate safeguards in place. I look forward to today’s discussion. I did an extra-long opening today because the vice chairman has been held up for a moment. When he joins us after the presentations, we will allow him to make an opening comment. And per our tradition in open hearings, we will go by rank of seniority for five-minute rounds. With that, I’m not sure who drew the long straw or the short straw in terms of opening comments, but I’ll turn it over to our panel. Thank you.
Dr. Benjamin Jensen:
Short straw, Senator. So, Chairman Warner, Vice Chairman Rubio, even though he is not here yet, distinguished members of the committee, I really am honored today to sit with you and share my thoughts on what I think you all agree, from all the work that you’ve done, is probably the most important question facing our nation from a technological perspective. And the magnitude of the moment is clear, right? Both the Senate and the House are very much cultivating a national dialogue, and I just want to open as a citizen by thanking you for that. You have a powerful role in that, and doing this right now is key. And so I have to be very blunt and clear with you that I’m going to talk to you less about the threat outside, sir; I’m going to talk to you more about how I think we could get it wrong today.
As part of that ongoing dialogue, I really want to look at the often invisible center of gravity behind any technology: people, bureaucracy and data, and in particular data infrastructure and architecture. Put simply, you get the right people in place with permissive policies and computational power at scale, and you gain a position of advantage in modern competition. I’ll just put it bluntly: in the 21st century, the general or spy who doesn’t have a model by their side is basically as helpless as a blind man in a bar fight. So let’s start with people. Imagine a future analyst working alongside a generative AI model to monitor enemy cyber capabilities. The model shows the analyst signs of new adversary malware targeting US critical infrastructure; the analyst disagrees. The challenge we have is that today our analysts can’t explain why they disagree, because they haven’t been trained in basic data science and statistics.
They don’t know how to balance causal inference and decades of thinking about human judgment and intuition. And sadly, I’ll be honest, our modern analytical tradecraft, and even something close to me, professional military education, tend to focus on discrete case studies more than statistical patterns or trend analysis. In other words, if we unleash a new suite of machine learning capabilities without the requisite workforce to understand how to use them, we’re throwing good money after bad, and we really have to be careful about this. I can’t stress this enough: if you don’t actually make sure that people understand how to use the technology, it’s just a magic box. Let’s turn to the bureaucracy. Now, I want you to imagine what the Cuban missile crisis would look like in 2030, all sides with a wide range of machine learning capabilities. There would be an utter tension as machines wanted to speed up decision-making in the crisis, but senior decision-makers needed to slow it down to the pace of interagency collaboration.
Even worse, you would be overwhelmed by deepfakes and computational propaganda pressuring you as elected officials, and any senior leader, to act. And pressure to act at a moment of crisis doesn’t necessarily lead to sound decision-making. Unfortunately, neither our modern national security enterprise nor the bureaucracy surrounding government innovation and experimentation are ready for this world. If the analyst and military planner struggle to understand prediction, inference and judgment through algorithms, the challenge is even more acute with senior decision-makers. At this level, most international relations and diplomatic history tells us that the essence of decision is as much emotion, flawed analogies and bias as it is rational interests. What happens when irrational humans collide with rational algorithms during a crisis? Confusion could easily eclipse certainty, unleashing escalation and chaos. There are even larger challenges associated with creating a bureaucracy capable of adapting algorithms during a crisis. Because of complexity and uncertainty, all models require a constant stream of data tied to the moment at hand, not just the moments of the past, but crises are different from what necessarily preceded them.
Unfortunately, slow adapters will succumb to quick deaths on that future battlefield. As a result, a culture of experimentation and constant model refinement will be the key to gaining and sustaining relative advantage. Now ask yourself, do we have that bureaucracy? Last, consider data architecture and infrastructure. How do we put the pieces of the puzzle together? I want you to imagine we’re almost back to the old days of the Scud hunt, right? Imagine the hunt for a mobile missile launcher. In a future crisis, a clever adversary, knowing it was being watched, could easily poison the data used to support intelligence analysis and targeting. It could trick every computer model into thinking a school bus was a missile launcher, causing decision-makers to quickly lose confidence in otherwise accurate data. Even when you are right 99% of the time, the consequences of being wrong once still add unique human elements to crisis decision-making.
Artificial intelligence and machine learning, therefore, are only as powerful as the underlying data. Yet to collect, process and store that data is going to impose significant costs going forward. This is not going to be cheap. Furthermore, bad bureaucracy and policy can kill great models if they limit the flow of data. Last, let’s talk about the fact that Prometheus has already shared the fire, and I think you all know that even from your opening comments, Chairman. Adversaries now and into the foreseeable future can attack us at machine speed through a constant barrage of cyber operations and, more disconcerting, mis-, dis- and malinformation, alongside entirely new forms of swarming attacks that could hold not just our military but our civilian population at risk. Unless the United States is able to get the right mix of people, bureaucratic reform and data infrastructure in place, those attacks could test the very foundation of our republic. Now, I’m an optimist, so I’m going to be honest with you: I’m confident the United States can get it right. In fact, the future is ours to lose. Authoritarian regimes are subject to contradictions that make them rigid, brittle and closed to new information. Look no further than regulations about adherence to socialist thought in data sets. These regimes are afraid to have the type of open, honest dialogue this committee is promoting, and that fear is our opportunity. Thank you for the opportunity to testify.
Dr. Yann LeCun:
Chairman Warner, Vice Chairman Rubio and distinguished members of the committee, thank you for the opportunity to appear before you today to discuss important issues regarding AI. My name is Yann LeCun. I’m currently the Silver Professor of Computer Science and Data Science at New York University. I’m also Meta’s Chief AI Scientist and co-founder of Meta’s Fundamental AI Research lab. At Meta, I focus on AI research and development strategy and scientific leadership. AI has progressed by leaps and bounds since I began my research career in the 1980s. Today we are witnessing the development of generative AI and in particular large language models. These systems are trained through self-supervised learning, or, more simply, they’re trained to fill in the blanks. In the process of doing so, those AI models learn to represent text or images, including the meaning, style and syntax, in multiple languages. The internal representation can then be applied to downstream tasks such as translation, topic classification, et cetera.
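To make the “fill in the blanks” training objective concrete, here is a minimal, purely illustrative Python sketch of next-word prediction learned from data. It is a toy stand-in, not Meta’s actual training code: real large language models use neural networks trained on vast corpora rather than word-pair counts, but the underlying objective, predicting the missing or next token, is the same.

```python
# Toy illustration of "fill in the blanks" / next-word prediction:
# "train" a predictor by counting which word follows which in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the committee held a hearing . the committee asked questions .".split()

# "Training": tally the word-pair statistics observed in the data.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("committee"))  # -> "held" (ties resolve to the first continuation seen)
```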
That internal representation can also be used to predict the next words in a text, which allows LLMs to answer questions, write essays and write code as well. It is important not to undervalue the far-reaching potential opportunities they present. The development of AI is as foundational as the creation of the microprocessor, the personal computer, the internet and the mobile device. Like all foundational technologies, there will be a multitude of users of AI, and like every technology, AI will be used by people for good and bad ends. As AI systems continue to develop, I’d like to highlight two defining issues. The first one is safety and the second one is access. One way to start to address both of these issues is through the open sharing of current technology and scientific information. The free exchange of scientific papers, code and trained models, in the case of AI, has enabled American leadership in science and technology.
This concept is not new; it started a long time ago. Open sourcing technology has spurred rapid progress in systems we now consider basic infrastructure, such as the internet and mobile communication networks. This doesn’t mean that every model can or should be open. There is a role for both proprietary and open source AI models, but an open source basic model should be the foundation on which industry can build a vibrant ecosystem. An open source model creates an industry standard, much like the model of the internet in the mid-nineties. Through this collaborative effort, AI technology will progress faster, more reliably and more securely. Open sourcing also gives businesses and researchers access to tools that they could not otherwise build by themselves, which helps create a vast set of social and economic opportunities. In other words, open sourcing democratizes access. It gives more people and businesses the power to build upon state-of-the-art technology and to remedy potential weaknesses. This also helps promote democratic values and institutions, minimize social disparities and improve competition. We want to ensure that the United States and American companies, together with other democracies, lead in AI development ahead of our adversaries, so that the foundational models are developed here and represent and share our values. By open sourcing current AI tools, we can develop our research and development ecosystem faster than our adversaries.
As AI technology progresses, there is an urgent need for governments to work together, especially democracies, to set common AI standards and governance models. This is another valuable area where we welcome working with regulators to set appropriate transparency requirements, red teaming standards and safety mitigations, to help ensure those codes of practice, standards and guardrails are consistent across the world. The White House’s voluntary commitments are a critical step in ensuring responsible guardrails, and they create a model for other governments to follow. Continued US leadership by Congress and the White House is important in ensuring that society can benefit from innovation in AI while striking the right balance with protecting rights and freedoms, preserving national security interests and mitigating risks where those arise. I’d like to close by thanking Chairman Warner, Vice Chairman Rubio and the other members of the committee for your leadership. At the end of the day, our job is to work collaboratively with you, with Congress, with other nations and with other companies in order to drive innovation and progress in a manner that is safe and secure and consistent with our national security interests. Thank you. I look forward to your questions.
Dr. Jeffrey Ding:
Chairman Warner, Vice Chairman Rubio, oh, sorry, Chairman Warner, Vice Chairman Rubio and members of the committee, I’m honored by the opportunity to brief this committee on the national security implications of AI. In all honesty, I also have a selfish reason for attending today. I teach political science at GW, and my students all really look up to the committee members in this room and also all the staff who are working behind the scenes to put this hearing together. So when I got to tell the class this morning that I was doing this testimony, they got the most excited I’ve ever seen them get this semester, and so hopefully that will cause them to do more of the required readings in class. In all seriousness, I have great students and I’m very grateful to be here today. In my opening remarks, I want to make three main points from my written testimony.
The first is that when it comes to the national security implications of AI, the main driver and the main vector is which country will be able to sustain productivity growth at higher levels than its rivals, and for this vector, the distinction between innovation capacity and diffusion capacity is central to thinking about technological leadership in AI today. When various groups, whether that be experts, policymakers or the intelligence community, try to assess technological leadership, they are overly preoccupied with innovation capacity: which state is going to be the first to generate new-to-the-world breakthroughs, the first to generate that next leap in large language models. They neglect diffusion capacity, a state’s ability to spread and adopt innovations, after their initial introduction, across productive processes, and that process of diffusion throughout the entire economy is really important for technologies like AI. If we were talking about a sector like automobiles, or even a sector like clean energy, we might not be talking as much about the effects of spreading technologies across all the different productive processes throughout the entire economy.
AI is a general purpose technology, like electricity, like the computer, like my fellow panelist just mentioned in his testimony, and general purpose technologies historically precede waves of productivity growth because they can have pervasive effects throughout the entire economy. So the US in the late 19th century became the leading economic power, before it translated that influence into military and geopolitical leadership, because it was better at adopting general purpose technologies at scale, like electricity, like the American system of interchangeable manufacture, at a more effective rate than its rivals. Point number two is that when we assess China’s technological leadership and use this framework of innovation capacity versus diffusion capacity, my research finds that China faces a diffusion deficit. Its ability to diffuse innovations like AI across the entire economy lags far behind its ability to pioneer initial innovations or make fundamental breakthroughs. And so when you’ve heard from other people in the past, or in the briefing memos you are reading, you are probably getting a lot of innovation-centric indicators of China’s scientific and technological prowess:
its advances in R&D spending, headline numbers on patents and publications. In my research, I’ve presented evidence about China’s diffusion deficit by looking at how China is actually adopting other information and communications technologies at scale. What are its adoption rates in cloud computing, industrial software and related technologies that would all be in a similar category to AI? Those rates lag far behind the US. Another indicator would be China’s ability to widen the pool of average AI engineers. I’m not talking about Nobel Prize of computing winners like my fellow panelist here, but just average AI engineers who can take existing models and adapt them to particular sectors or industries or specific applications. And based on my data, China has only 29 universities that meet a baseline quality metric for AI engineering, whereas the US has 159, so there’s a large gap in terms of China’s diffusion capacity compared to its innovation capacity in AI.
I’ll close with the third point, which is some recent trends in Chinese labs’ large language models. China has built large language models similar to OpenAI’s ChatGPT, as well as OpenAI’s text-to-image models like DALL-E, but there’s still a large gap in terms of Chinese performance on these models, and in fact, on benchmarks and leaderboards where US models are compared to Chinese models on Chinese-language prompts, models like ChatGPT still perform better than their Chinese counterparts. Some of these bottlenecks relate to a reliance on Western companies to open up new paradigms, China’s censorship regime, which Dr. Jensen talked about, and computing power bottlenecks, which I’m happy to expand on further. I’ll close by saying I submitted three specific policy recommendations to the committee, but I want to emphasize one, which is: keep calm and avoid over-hyping China’s AI capabilities. In the paper that forms the basis for this testimony,
I called attention to a 1969 CIA assessment of the Soviet Union’s technological capabilities. It was remarkable because it went against the dominant narrative of the time of a Soviet Union close to overtaking the US in technological leadership. The report concluded that the technological gap was actually widening between the US as the leader and the Soviet Union, because of the US’ superior mechanisms to spread and diffuse technologies. Fifty years later, we know why this assessment was right, and we know we have to focus on diffusion capacity when it comes to scientific and technological leadership. Thanks for your time.
Sen. Mark Warner (D-VA):
Thank you all very much, gentlemen. I’m going to ask Vice Chairman Rubio to make any opening comments he wants. Then we’ll go to questions.
Sen. Marco Rubio (R-FL):
Thank you, and I’ll be brief, and I apologize. I was wrapped up in a call that started late and ended late, so I’ll be very brief. This whole issue is fascinating to me, because the story of humanity is the story of technological advances, from the very beginning, in every civilization and culture, and there are positives in every technological advance and there are negatives that come embedded in it. And generally, what technological advances have for the most part allowed human beings to do is what humans do, but faster, more efficiently, more productively, more accurately. In essence, technology and technological advances have allowed humans to be better at what humans do. I think what scares people about this technology is the belief that it not simply holds the promise of making us better, but the threat of potentially replacing the human, being able to do what humans do without the human. And that is the part that I think about particularly when we get into the depth of it. I think one of the things that’s interesting is we’ve been interacting with AI, or at least models of AI and applications of it, in ways we don’t fully understand, whether it’s estimating how long it’s going to take from point A to point B, and which is the fastest route based on the predicted traffic patterns of that time of day, to every time you say Alexa or Siri.
All of these are somewhat built into learning models, but now we get into the application of machine learning where you’re basically taking data and you’re now issuing recommendations of a predictive nature. So that’s sort of what we understand now. But then the deeper learning that actually seeks to mimic the way the human brain works, not only can it take in things beyond text, like images, but it can in essence learn from itself and continue to take on a life of its own. All of that is happening, and frankly, I don’t know how we hold it back. So really, the three fundamental questions that we have from the perspective of national security: the first and foremost, which I think is a broader topic that involves national security but goes beyond national security, is how do you regulate a technology that is transnational, that knows no borders and that we don’t have a monopoly on?
We may have a lead on it, but some of the applications of AI are going to be pretty common, including for purposes that some nefarious actor might use, and some of its applications as well. The second is, will we ever reach a point, and this is the one that to me is most troubling, where we can afford to limit it? So as an example, if we are in a war, God forbid, with an AI general on the other side, how can we afford to limit ourselves in a way that keeps pace with the speed and potentially the accuracy of the decision-making of a machine on that end, with our limits on ours? I fear the same is true in the business world. When we get AI CEOs making decisions about where our companies invest, there comes a point where you start asking yourself, can we afford to limit ourselves despite the downside of some of this?
It’s something we haven’t thought through, and one that, I think, you’ve addressed a lot of or are going to be addressing, a lot of the national security implications. But here’s the one that I think is related to national security. We have seen that globalization and automation have been deeply disruptive to societies and cultures all over the world. We have seen what that means in displacing people from work and what it does to society and the resentments it creates. I think this has the potential to do that times infinity, in essence, in how disruptive this will be, the industries that it will fundamentally change, the jobs that it will destroy and perhaps replace with different jobs, but the displacement it could create. And that has national security implications in what happens in the rest of the world and in some of the geopolitical trends that we see.
And I think it has the threat of reaching professions that up to this point have been either insulated or protected from technological advances because of their education level. And so again, not a national security matter, but look at the strike in Hollywood; a part of that is driven by the fear that the screenwriters and maybe even the actors will be replaced by artificial intelligence. Imagine that applied to multiple industries and what that would mean for the world and for its economics. So there’s a lot to unpack here, but the one point that I really want to focus in on is: we may want to place these limits, and we may very well be in favor of them from a moral standpoint, but could we ever find ourselves at a disadvantage facing an adversary who did not place those limits and is therefore able to make decisions in real time at a speed and precision that we simply cannot match because of the limits we put on ourselves? That may still be the right thing to do, but I do worry about those implications. Alright, thank you.
Sen. Mark Warner (D-VA):
Well, thank you, Marco. I think, again, we are all trying to grapple with this in a variety of ways. I remember a year and a half ago, as I was trying to get self-educated a bit, I thought, first of all, should we even find a definition? I thought, no, not worth that. Last week we had 22 in, and it was really kind of the who’s who from the tech side and a variety of figures in civil society, a year and a half later, and still no one started with a definition of what terms we were even using. And yet, I would argue, with most things I spend time on there’s some linear progression in terms of the amount of time put in versus the amount I’ve learned. In this topic area, the more time I put in, sometimes the more confounded I get.
I also think about, for example, the economics. The economics of last November, whether you were talking about China, in terms of scale, amount of data, compute, et cetera, or entities like Microsoft or Google, were that the gating cost to come into this would be so high. We had the director of the OSTP recently say that, potentially because of the release of things like LLaMA, you can now get in on a variation of a large language model for pennies on the dollar. So this is moving so quickly. And Dr. LeCun, I’m going to start with you, and I warned you I was going to come at you on this. I worry a little bit when we talk about democratizing access, because it sounds like some of your colleagues from social media in the late nineties: we’re going to democratize access and we’ll figure out the guardrails after the fact. But in that democratization of access, I don’t think we ever put the guardrails in place. How do we democratize access with AI tools and yet still put some level of guardrails in place, obviously bearing in mind what Senator Rubio said, that you don’t want to in effect unilaterally disarm, but also not fail to put some guardrails in place in a field that’s changing so quickly? Can you speak to that?
Dr. Yann LeCun:
Senator, this is a very important question that of course we’ve devoted a lot of thought to. I think the best way to think about this is to think about the type of AI systems that have been released so far as being basic infrastructure. In themselves, those systems are not particularly useful; they need to be customized for a particular vertical application. And a good example of this is the infrastructure of the internet, which is open source. It didn’t start out as being open source; it started out as being commercial, and then the open source platforms won because they are more secure, easier to customize, safer. There are all kinds of advantages. AI is going to become a common platform, and because it’s a common platform, it needs to be open source if you want it to be a platform on top of which a whole ecosystem can be built.
And the reason we need to work in that mode is that this is the best way to make progress as fast as we can. I really like the argument of Professor Ding about the fact that in the US we’re extremely good at seizing the opportunity: when a new innovation or a new scientific advance comes in, it diffuses very quickly into the local industry. This is why Silicon Valley is Silicon Valley; it’s geographically concentrated because information flows very, very quickly. Other countries whose ecosystems are somewhat isolated intellectually do not have the same kind of effect. And so it favors us to have open platforms.
Sen. Mark Warner (D-VA):
But I would respectfully say I think the two most immediate potential harms with AI tools that have already been released are the ability for massive undermining of trust in our elections and massive undermining of trust in our public markets. I’ll come back to that later. Dr. Ding, one of the things you said, and I think you’re accurate about the fact that the PRC as a state has not done a good job of diffusing technological innovation, but I’ve got to believe that if we do open source, there are geopolitical harms and there is the ability for the PRC, at least its intel and military, to gain this knowledge from things that are released into the wild. Can you speak to that?
Dr. Jeffrey Ding:
Yeah, I think it’s a tough debate and a tough discussion point. I agree that open source is important to spur diffusion in terms of fundamental architecture. So, going back to this internet example, the protocols for how IP addresses work, it makes sense that open source might be the way we spread that at scale. I’m less convinced that open source is the best model for reducing the harms that you identified in terms of specific powerful models that might not be as close to this infrastructure layer. So something like ChatGPT, I think, took a good stance in terms of setting up an application programming interface, which makes it not open source, it’s closed source, and developers can impose rules on how the model can be used. So Chinese developers cannot use ChatGPT today, to your point. And so I think this provides a balance: the research community can still play around with the model by using the API, but OpenAI, and potentially government actors, can use this API to implement rules to govern how these models are used by potentially malicious actors.
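As a purely hypothetical illustration of the API-based gating Dr. Ding describes, the sketch below shows how a provider that serves a model behind an API, rather than releasing its weights, can enforce usage rules on the server before a request ever reaches the model. The policy names and checks here are invented for illustration and do not reflect OpenAI’s or any other vendor’s actual rules.

```python
# Hypothetical server-side gate: with API access, the provider can apply rules
# before any prompt reaches the closed-weights model, which is not possible
# once model weights are openly released.
ALLOWED_REGIONS = {"US", "EU"}          # illustrative policy, not any vendor's actual rules
BLOCKED_USES = {"malware", "influence-op"}

def handle_request(api_key_region: str, declared_use: str, prompt: str) -> str:
    """Apply usage rules before the model ever sees the prompt."""
    if api_key_region not in ALLOWED_REGIONS:
        return "request rejected: region not permitted"
    if declared_use in BLOCKED_USES:
        return "request rejected: prohibited use"
    return run_model(prompt)            # the weights stay on the provider's servers

def run_model(prompt: str) -> str:
    # stand-in for the actual proprietary model
    return f"model output for: {prompt!r}"

print(handle_request("US", "research", "summarize this report"))
print(handle_request("CN", "research", "summarize this report"))
```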
Sen. Mark Warner (D-VA):
Let me get one question in quickly to Dr. Jensen, because the notion that scale was going to be the determinant factor seemed to be the argument as most large language models launched, from November till about May, and then it switched. If scale is not the largest determinant of who will be most successful, what will be the determinant?
Dr. Benjamin Jensen:
The people. So you’re worried about the guardrails; I’m not even sure you have the railroad engineers to get the train to the station. And what I mean by that is you’re going to have to make hard decisions first, like you’re seeing, about what data is open and closed, right? If you want an innovative ecosystem, the exchange of ideas matters; we were built on that as a nation. Obviously not all ideas are meant to be shared. Even George Washington had secrets. So what you hide in terms of data will become really important, and how you aggressively corrupt your adversary’s data through poisoning it as well, a digital terracotta army to confuse them. And scale will matter, because the larger amounts of data inputs you have, the more likely you are to be able to detect adversary manipulation of those markets that we should worry about. And to do that, it’s not just the data infrastructure.
Again, you have to have the people who know what they’re doing. And I work with these folks. I mean, I’ve served in uniform for 20 years, I teach warriors and I watch them when we tinker with this in the classroom, and honestly, some part of me goes back to classical reasoning. You want to really teach someone how to work with a large language model? Make them think like Socrates; learn how to ask questions. Otherwise it’ll just be confirmation bias. You have to make them understand that a large language model is sequential prediction, that it’s just basically predicting how they’re going to finish their sentence. There’s no magic there. And so really, it’s not just the scale of the data, as you point out, sir. It’s how do we actually educate that workforce so that we can out-cycle any adversary.
Sen. Mark Warner (D-VA):
Thank you all. I’m going to go to Senator Rubio, and again, I remind my colleagues that in our open hearings we go by seniority.
Sen. Marco Rubio (R-FL):
And I understand we want to talk about the commercial and broader scientific applications of this. It’d be great to be the world leader, industry standard, top of the line, but for purposes of this committee, it’s how it would be used by a nation state. What I think is important to sort of reiterate is that you don’t need the state of the art for it to be adopted internally for your decision-making. Every conflict in human history has involved an analysis. At some point, someone who started the war made an analysis, based on the data that was in their brain, their understanding of history, their beliefs, their preconceived notions and the like, that they could be successful, that this was really important and that now was the time to do it. And that’s the part that worries me the most, particularly when it comes to how it applies to authoritarian regimes.
And here’s why. At least in our system, for the most part, we encourage people to come forward and give us as policymakers an analysis and accurate information, even if it may not be what we want to hear. In authoritarian systems, you usually get promoted and rewarded for telling leaders what they want to hear, and not for reporting bad news and the like. And so I don’t know if anyone can answer this question, but I wanted to pose it to you. Isn’t one of the real risks as we move forward that some nation with some existing capabilities will conduct analysis on the basis of their version of AI, which will be flawed to some extent by some of the data sets, and on the basis of those data sets and the analytic functions, they reach the conclusion that this is the moment to take this step?
Now is the time to invade, now is the time to move, because our system is telling us that now is the time to do it. And the system may be wrong. It may be based on flawed data; it may be based on data that people fed in there on purpose, because that’s the data that their analysts are generating. That’s the part that I worry about, because even if it’s not the top of the line or the state-of-the-art, it will be what influences their decision-making and could very well lead to 21st-century conflicts started not simply by a human mind, but by how a human mind used technology to reach a conclusion that ends up being deeply destructive. Is that a real risk, is the question.
Dr. Benjamin Jensen:
I’m happy to talk about war anytime, Senator. I think you’re hitting on a fundamental part of human history. As you’re saying, every leader, usually not alone but as part of a small group, is calculating risk at any moment, and having models incorrectly or correctly add to their judgment about that is a real concern. There will be windows of vulnerability and the possibility of inadvertent escalation that could make even the early Cold War look more secure than it actually was. And so I think that’s the type of discussion you have to have. That’s where we hopefully will have back-channel conversations with those authoritarian regimes. And frankly, it just bodes well for what we know works for America: a strong military, where your model finds it really hard to interpret anything but our strength. So I think that there are ways that you can try to make sure that the right information is circulating, but you can never fundamentally get away from those hard, weird moments, those irrational people with rational models.
So whether you see the model as rational, or as flawed because it collects just as skewed data, I worry more about what we just saw happen in Russia, where a dictator living in corrupt mansions, reading ancient imperial history of Russia, decided to make one of the largest gambles of the 21st century. And so I don’t think that’s going to leave us. I think that’s a fundamental part of human history. And I actually think, in some senses, the ability of models to bring data could steady that a bit, and we can make sure that we show the right type of strength so that it steadies it further.
Sen. Marco Rubio (R-FL):
Let me ask this question related to that one, and it has to do with the work of this committee in general. At the core of intelligence work is the analysis. In essence, you can collect all the raw bits of data you want, but someone has to interpret it and tell the policymaker, this is what we think it means in light of what we know about those people, what we know about historical trends, what we know about cultures, what we know from common sense, whatever. And there’s an analytical product. And then you have to make policy decisions, either with high confidence in the analysis, moderate confidence, low confidence, whatever it may be. Given that, what suggestions, if it’s possible at this point, could you provide us as to what that analysis should include or look like if applied to the way we analyze data sets, so that not only are we reaching the most accurate results and the ones that are true, but ones that provide our policymakers a basis upon which to make the best decisions possible, weighing all the equities, including human considerations, not just simply the cost-benefit analysis from an economic or military standpoint?
Dr. Jeffrey Ding:
So let me start with your earlier question, which I take as: what is the risk of AI in terms of contributing to military accidents? And so I would say that an authoritarian regime might be a contributing factor to a state having a higher risk of military accidents. I think when we talk about these information analysis systems, think about Aegis, right, the US’ Aegis system that collects information and analyzes it and issues what this target is, whether it’s friend or foe, and then whether we should fire a missile towards the target. In the 1990s, the US accidentally fired upon an Iranian civilian airliner, killing 300 people. So military accidents can happen in democratic countries. But I think it’s an interesting research question, right? One of the things that I’m interested in studying is how China as an authoritarian state has actually demonstrated a decent safety record with civil nuclear power plants and aviation safety.
How does that happen in a closed authoritarian system? What is the role of international actors? And a military accident anywhere, whether it’s caused by AI or any other technology, is a threat to peace everywhere, to your point. So we should all be working to try to reduce the risks of accidents in these military AI systems. To your second point, one of my recommendations would be to keep a human in the loop regardless of whatever AI system we adopt in terms of intelligence, surveillance and reconnaissance, and hopefully that will make these systems more robust.
Sen. Ron Wyden (D-OR):
Thank you, Mr. Chairman. Let me start with you, Dr. LeCun. I have proposed the Algorithmic Accountability Act, which would require that companies test their AI for harmful bias, such as biases that can affect where you buy a house or what healthcare you have access to. Now, reviewing your testimony, Dr. LeCun, you stress a commitment to privacy, transparency and mitigating bias, all important values. It’s important to me also that there not be an uneven playing field that advantages companies cutting corners over companies that do the right thing. Could your company support this legislation?
Dr. Yann LeCun:
Senator, this is a very important question, and I thank you for raising this point. I’m not familiar with the details of the legislation in question. Certainly the basic principles align with my personal thinking and those of Meta, and I’ll be happy to put you in touch with the relevant people within the company who take care of legislation.
Sen. Ron Wyden (D-OR):
Hearing you agree with the principles is a good way to start. We’ll follow up. Now, the use of AI by the US intelligence community raises many issues, starting with accountability. If AI is going to inform the intelligence community’s surveillance decisions, one question I would have is: if an American is spied on in violation of the law, who is responsible?
Dr. Yann LeCun:
Senator, again, this goes very much outside of my expertise, not being a lawyer or legislator. Privacy, security and safety are at the top of our list in terms of priorities. They’re very good principles to follow, and we try to follow them as much as we can. I think an important point to realize, as it relates to current AI technology such as large language models, is that they’re trained on public data, publicly available data, not on private user data. So there’s no possibility of any kind of privacy invasion from that.
Sen. Ron Wyden (D-OR):
Let me ask it this way. Your testimony stresses the importance of accountability in the private sector for its use of AI. How might these principles apply to the government?
Dr. Yann LeCun:
Senator, I think the White House commitments, the voluntary commitments, are a good start to specify guidelines according to which the AI industry should conform, including on questions related to your point.
Sen. Ron Wyden (D-OR):
What is it about the guidelines, in your view, that is responsive to my question about how this accountability in the private sector, these principles, would apply to government?
Dr. Yann LeCun:
Senator, again, this is very much outside my expertise, and I’d be happy to put you in touch with the relevant people.
Sen. Ron Wyden (D-OR):
Let me move on then. Dr. Jensen, your testimony describes the need for intelligence analysts who understand the AI that’s informing their analysis. Now, I don’t know how realistic it is to add advanced computer science expertise to the job requirements of intelligence analysts. It seems to me, though, you’re raising a very important issue. What are the consequences of disseminating intelligence analysis derived from processes that nobody understands?
Dr. Benjamin Jensen:
You hit the nail on the head, Senator, and I think it’s less about making sure every analyst has a PhD in computer science. We can’t afford that, no offense, Meta. But what we can do is make sure they understand the basics of reasoning, causal inference. Sometimes we throw around the term critical thinking as a blanket statement, but almost going back to how would Aristotle teach Alexander the Great to interpret a model? I know that’s a weird thought experiment, but think about that. Would that be a square of syllogism? Would it be about the logic? Would it be about contradictions? I think it’s realistic that we can go back to some basic philosophical reasoning and not necessarily have to have high degrees of computational understanding. And I think by doing that, you start to hone the ability of the person to ask a question. And as you all know, asking the great question is what produces real dialogue.
Sen. Ron Wyden (D-OR):
So, as a general proposition, who would be accountable when the intelligence analysis turns out to be wrong?
Dr. Benjamin Jensen:
This is a great question, and it actually dovetails with the question you were just asking. I’m going to summon my inner Senator King and say, you still need one throat to choke, right? So that means ultimately you have to have people accountable in terms of how they certify the assurance of their AI model. And that means, much like we had to struggle with this in the financial sector, you’re going to have entirely new positions created around how people certify the actual model, and does the model address a certain set of data, a certain set of questions, and, back to what Senator Rubio was talking about, even assign confidence levels to that. I don’t think we can even imagine what that’s going to look like in five years, but I can tell you it’s going to be a growth industry and an important one.
Sen. Ron Wyden (D-OR):
My time has expired.
Dr. Benjamin Jensen:
Thank you.
Sen. Mark Warner (D-VA):
Senator Cornyn.
Sen. John Cornyn (R-TX):
It’s simple: a computer or internet search tells me that the roots of AI basically go back to 1956. So maybe you could explain to me why we’ve gone all these years and haven’t talked much about AI, and today we can’t talk about anything else. Anybody want to take a stab?
Dr. Yann LeCun:
I think I have to answer that question, Senator. Thank you for asking it. Generally, the problem of making machines act intelligently has been much more complex than people initially realized. And the history of AI has been a succession of new ideas, with people thinking each new idea was going to lead us to machines that are as intelligent as humans. And over and over again, the answer has been no. Those new ideas have solved a number of problems, but human-level intelligence is still out of reach, and this is still the case today. So despite the fact that we have incredibly powerful systems that are very fluent, that seem to manipulate language at least as well as humans, those systems are very far from having human intelligence. Now, to directly answer your question, the reason why we hear about AI so much today, over the last 10 years roughly, is because of a new set of techniques called deep learning that has allowed machines to not be programmed directly, but to be trained for a particular task. And that’s been incredibly successful for relatively narrow tasks, where we can train machines to have superhuman performance. But so far, we still do not have a way to train a machine that is as efficient as the way humans or even animals can train themselves. This is why we don’t have level-five self-driving cars. We don’t have domestic robots that clear the dinner table and fill the dishwasher.
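A minimal illustration of the “trained rather than programmed” distinction Dr. LeCun draws: the toy Python sketch below learns a simple logical rule from labeled examples by adjusting weights, instead of having a programmer write the rule out. Deep learning scales the same idea to networks with billions of parameters; this tiny perceptron is only meant to show the shape of the idea.

```python
# Instead of coding the AND rule explicitly, show the model labeled examples
# and let it adjust its own weights until its predictions match the data.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # learn logical AND

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):                     # training loop: nudge weights toward the data
    for x, target in examples:
        error = target - predict(x)
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]: the rule was learned, not coded
```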
Sen. John Cornyn (R-TX):
Thank you. I’m going to try to get two more questions in. One: Dr. Ding, you mentioned that artificial intelligence is a general purpose technology, which leads me to ask why it is that we feel the need to regulate AI, as opposed to regulating the sectors where AI is actually used, which the government already does?
Dr. Jeffrey Ding:
It’s a great question. I do think that a sector-specific regulatory model is a good starting point, because AI will have different risks across different sectors. In fact, the European Union is a good model for this. In their EU AI regulations, they have identified certain high-risk applications where there are more stringent regulations. So I think the crucial work to be done is identifying which sectors, or which specific types of applications, would be considered more high risk than others. Obviously, the use of AI in a nuclear power plant, for example, or a chemical processing plant might be higher risk. And then there are other applications that are in a murky area, such as large language models and content generation models; it’s more difficult to assess how you would compare the safety risks of those against more traditional industries.
Sen. John Cornyn (R-TX):
Thank you. I just have about a minute. Georgetown’s Center for Security and Emerging Technology documents that American investors provided roughly $40 billion, or 37% of the capital raised by PRC AI companies, from 2015 to 2021. This has been an area of emerging concern, the subject of an executive order by the administration. Senator Casey and I have, in the defense authorization bill, an outbound investment transparency measure, because it occurs to us that we are helping to finance our chief competitor globally, the PRC. So, maybe I’ll stick with you, Dr. Ding: can you speak to how effective those export controls and the outbound transparency measures are? How are we doing in terms of slowing down our principal competitor while we try to run faster?
Dr. Jeffrey Ding:
It’s a great question. I think, first of all, the transparency measures are a good starting point, because we need to have a better sense of how much outbound investment is going to China, what the nature of that outbound investment is, and what the national security risks of that investment are. I think oftentimes we think anything that helps China is going to hurt the US when it comes to this space. I think it’s an interesting question, right? ByteDance, companies like Alibaba, Baidu, these giants in China’s AI industry, they have a lot of foreign investment, but a lot of the profits that they make come back to the US economy and hopefully get reinvested into American productivity. So I think it’s a more difficult question than just saying any investment in China’s AI sector is harmful to US national security. So hopefully the transparency measures will help us get a better 360-degree view of the situation.
Sen. Martin Heinrich (D-NM):
Thank you. Thank you Chairman. Dr. Jensen, you’ve spoken a little bit about the importance of data. How should we leverage our unique US government data sets to our advantage?
Dr. Benjamin Jensen:
Thank you, Senator. First, make them actually interoperable. One of the big challenges we have is that, because of just antiquated bureaucratic procedures and stovepiping, this AI scientist would probably not be that successful in the US government, because he couldn’t get at any of the data. It’s not just that it’s heterogeneous; it’s spread out over bureaucracies, with random officials each exerting authority they don’t have to, limiting the ability to share. So if you need more data to make smarter models and you have people limiting the exchange of data, you’re never going to, again, the train’s not going to leave the station. So I think the first thing, and this is where I think the Congress has a very important role, is how do you actually, whether it’s through testimony, whether it’s through the NDAA, whether it’s through hearings like this, get bureaucrats to be accountable to actually exchange the data? And it actually goes deeper, into government procurement and contracting. Right now, we could insert basic blanket language that requires all vendors, because these sensors, the intelligence community, we buy them, right? So why shouldn’t it be that they’re required to produce the information in standardized formats that we can ensure are interoperable? So we lower the actual bureaucratic and engineering friction and we can make use of it. It’s just a boring issue, so sadly people don’t pay attention to it.
Sen. Martin Heinrich (D-NM):
But it’s probably the most foundational issue, yes. Dr. Ding, you talked a lot about diffusion capacity. Are there any things that you consider threats to our diffusion capacity that we should be concerned with?
Dr. Jeffrey Ding:
Yeah, I would say the main thing to improve in terms of the US’ diffusion capacity is investing more in the STEM workforce, in terms of not just attracting or building up the best and the brightest, but widening the base of average engineers in software engineering or artificial intelligence. The US government has some proposals on that, and the CHIPS and Science Act made some steps in that direction. But I would say we have overly weighted towards investing in R&D as sort of the end-all, be-all of science and technology policy. A more diffusion-oriented policy would look at things like broadening the workforce, investing in applied technology services centers and dedicated field services. There are different voucher schemes that can encourage the adoption of AI techniques by small businesses as well. So all of those things would help resolve some gaps in our diffusion capacity.
Sen. Martin Heinrich (D-NM):
Very helpful. Dr. Jensen, the current DOD directive governing lethal autonomous weapon systems appears, from my read, to permit the potential development, even deployment, of fully autonomous systems, not just defensive ones, that could select and engage targets without a human in the loop. Talk to me about what the risks are there and how we should be further developing that policy, especially with regard to potential escalation.
Dr. Benjamin Jensen:
Sure. Great question, Senator. I think the real issue here is not whether or not you should do it; it’s here, right? It’s how, again, you get the assurance in the model, and why you have to constantly train and experiment to understand those edge cases, which I think is also where Senator Rubio was getting: where is that moment of high escalation risk that’s actually irrational in the grand scheme of things but makes perfect statistical sense in the moment? You don’t find those weird cases until you do large-scale war gaming and constant experimentation. And that’s not just fine-tuning the model. This is where I think we get it wrong. We think it’s like, well, I’m just going to calibrate the model, I’m going to fine-tune the model. No, it’s fine-tuning the men and women who will use the model to make some of the most difficult decisions about taking life, because eventually there’s still someone flipping that switch, right? And so we need to make sure that those people, those men and women in uniform and the elected officials granting them the ability to use lethal force, have actually done tabletop exercises and experiments where they’ve thought through this. Don’t let your first moment of unleashing your robotic swarm be the first time you’ve thought about it. And that’s going to require a larger national security dialogue and even tabletop exercises that help them see those moments.
Sen. Martin Heinrich (D-NM):
Dr. LeCun, last question. Before systems get released into the wild, as it were, there are a lot of ethical and other questions that need to be asked. Do you think that Meta’s AI and trust and safety team, or for that matter the team of any AI developer, has the capability to really understand the potential risks and benefits before release, to be able to know whether putting a system into the wild or making it open is a good idea?
Dr. Yann LeCun:
Senator, thank you for your question. I can describe the process that we went through for the LLaMA and LLaMA 2 systems. So first of all, the LLaMA system was not made open source. There are two parts to an open source package. There is the code, and the code, frankly, is very simple and not particularly innovative. What is interesting for the community to release is the trained model, the weights. This is what’s expensive. This is what only large companies can do at the moment, and we released it in a way that did not authorize commercial use. We vetted the people who could download the model; it was reserved for researchers and academics. This was a way for us to test what the system could be used for. Of course, there was a long history, three years of open source LLMs that were available before, and the harm had not materialized so far.
So there was a history we could base this on. For LLaMA 2, we had a very thorough process. First, the dataset was curated in such a way that the most controversial, toxic content was eliminated from it so that we would get a high-quality model. Second, there was a lot of red teaming, so people would basically try to break the system and get it to produce dangerous, toxic content. Thousands of hours were spent doing this by a group that is independent from the group that actually designed and trained the system. We have an entire group called Responsible AI whose responsibility, among others, is to do this kind of thing. And then we distributed, in a limited way, the model to white hat hackers at the DEFCON conference, for example. So it was a slightly bigger community of people who are really expert at trying to break systems of this type, and we got some assurance that those systems were good. We fine-tuned them so that whatever was bad was fixed, and then we instituted a bug bounty policy so that there would be an incentive for people who find weaknesses in our system to tell us. And in fact, the enthusiasm from the open source community after the release of LLaMA 2 has been so enormous that we are getting feedback absolutely all the time and can make those systems safer.
Sen. Mark Warner (D-VA):
Senator Moran.
Senator Jerry Moran (R-KS):
Chairman, thank you. Dr. Jensen, you say that kids in China aren’t rushing to join the army and that Russian tech workers fled the country to avoid fighting the unjust war in Ukraine. Senator Warner and I have introduced over the years a Startup Act designed to create a greater entrepreneurial environment in the United States, which includes the creation of a STEM visa that would allow immigrants with advanced STEM degrees to stay in the US as long as they remain engaged in the STEM field. The indication is that workforce is hugely important. Let me ask, though, given what you said: give me some ideas of how opening the pathway for bright minds would benefit the US in the development of AI and, at the same time, depending on where they come from, perhaps actually hinder our adversaries.
Dr. Benjamin Jensen:
Anytime you bring people in, as you well know, right, when you bring someone into a SCIF, you always assume risk. So the question becomes: what procedures do you put in place to analyze the risk versus the trade-off of exchanging that information and honing someone’s ability to make a time-sensitive decision? I tend to view immigration, especially of people with strong STEM backgrounds, as an outstanding American tradition, and don’t just give us your tired and your poor and your sick; give us your brilliant people who want to come here and make the world a better place. Now, how we integrate them into national service I think is really an extension of what you’re talking about. How do we make them see that we’re a country that believes in service, and believes in service from the local to the national level, and encourage that? Now, does that mean everyone who maybe has a cousin in the PLA gets a top secret clearance? No, but there are still a lot of ways people can serve, whether in uniform, in the government, or in our society writ large, and frankly, the smart people we need at this critical moment in history, bring them in.
Senator Jerry Moran (R-KS):
Thank you. Dr. LeCun, I think I’ll address this to you. I’m the lead Republican on a subcommittee that appropriates money for Commerce, Justice, and Science, and one of the efforts that the NSF has is the National Artificial Intelligence Research Institutes program. I’d be glad to hear you or any of our panelists critique or praise the outcomes or the capabilities of that program, and how do we work to see that that program fits in with the majority of research, which is done in the private sector?
Dr. Yann LeCun:
Senator, this is a great question. As a person who has one foot in academia and one in industry, what we’re observing today is that academia, when it comes to AI research, is in a bit of a bind because of the lack of resources. So one thing I believe is in this bill is to provide computing infrastructure for academics and other non-commercial scientists to make progress, which I think is probably the best use of money you can have. Another one would possibly be favoring the free exchange of information and ideas to basically improve the diffusion process between industry and academia.
In some countries, some European countries, there are programs that allow PhD students to have a residency in industry, not just an internship, but a significant amount of time, like two or three years, during their PhD. And at Meta we’ve actually established programs of this type, with bilateral agreements with various universities across the US, because it was so successful in Europe that we tried to translate it here. If there were some help from the government for this, that would be absolutely wonderful. The last thing is access to data. This is something that Dr. Jensen mentioned in a different context, but certainly research in healthcare, for example in medicine, could be greatly improved if researchers had better access to data, which is currently mostly kept private for various reasons, complicated legal reasons that perhaps Congress could help resolve.
Senator Jerry Moran (R-KS):
Thank you. I’m going to try to get one more question in. Tell me how custom AI models and off-the-shelf ones fit together. How can the government best combine commercial off-the-shelf and custom AI technologies to ensure that government is fully leveraging AI capabilities, particularly in the intelligence community?
Dr. Yann LeCun:
Senator, I think there is the current state of affairs in the AI marketplace, and there is where I think it’s going, which is a bit of a guess. Where I think it’s going is that it’s going to be a little bit like the internet, in the sense that there are going to be common platforms that are essentially open source, produced by companies like Meta and others, possibly with contributions from academia, et cetera. On top of those platforms, commercial products will be produced, which may not necessarily be open source; they may be commercial in various ways, and they’ll be customized, fine-tuned for particular applications, whether it is in government, in education, in industry, manufacturing, services, entertainment, you name it. So that’s, I think, the model of the future. This is the type of model we’re observing today in various domains in software, certainly on the internet. That’s where I think it’s going.
Sen. Mark Warner (D-VA):
Thank you. Thank you all. And now to the author of the choke theory, as Dr. Jensen indicated, Senator King.
Sen. Angus King (D-ME):
Well, Mr. Chairman, a couple of minutes ago I went to ChatGPT and asked it to write a poem about the Senate Intelligence Committee in the style of Dr. Seuss. I want to share it with you. This was written in about four seconds; I timed it, four seconds. In a chamber up high beneath the Capitol dome, there’s a committee that seeks knowledge, they call it their home. The Senate Intelligence, wise as an owl, in the pursuit of truth they go the extra mile. With hearings and briefings they gather the facts, no secrets too deep, no hidden contracts. They investigate, inquire, and they never back down, in their quest for transparency throughout the town. They question the spies, they question the chiefs, to uncover the truth, they provide us relief. In a world full of secrets where lies often play, the Senate Intelligence lights up the way. They protect our nation, our values they hold, with their wisdom and courage, stories untold. So here’s to the Senate Intelligence, steadfast and true, in the style of Dr. Seuss, we salute you.
Four seconds. I mean, you’ve got to experience this to realize the implications and the power that this has. Dr. Jensen, first, coincidentally, we had our third report from the Solarium Commission this morning, and I talked about the brilliant work of the staff, and you were a leader in that, and I just want to thank you. The work that you did was absolutely extraordinary. We’re now up to about 70% implementation of the Solarium recommendations, so thank you for that. The word productivity has been mentioned. One person’s productivity is another person’s job loss. Dr. Ding, talk to me about that. Are we in 1811 with the Luddites, and is this something that is a serious concern, or is this just the march of human history, where tools have been enabling more productivity since the invention of the hammer improved upon the rock?
Dr. Jeffrey Ding:
It’s a great question. I think it also aligns with some of the statements that Vice Chair Rubio was making in terms of job displacement. When I have my national security cap on, the reason why I emphasize productivity and diffusion capacity is because historically, great powers have been able to rise and fall based on whether they’ve been able to leverage and exploit new technological advances to achieve economy-wide productivity growth.
Sen. Angus King (D-ME):
The Manhattan Project might be an example of that.
Dr. Jeffrey Ding:
Yeah, I think for me it’s less about the moonshot, the singular technological achievement. It’s more about who can diffuse electricity at scale or who can intelligentize the entire economy at scale. And to your point, there are debates among economists about whether the future job displacement by AI and robots would be greater than the jobs that would be created by some of these new technologies. I’ll defer to the work of folks like Daron Acemoglu in terms of job displacement. And I think it gets complicated, because if job displacement is so severe that it causes internal political cleavages, that could also then become a national security issue.
Sen. Angus King (D-ME):
It’s also hard to project what the gains will be because it’s a new area. You don’t know how many new jobs will be created in wholly different areas.
Dr. Jeffrey Ding:
Yeah, I also agree with that, and Erik Brynjolfsson and the digital economy team at Stanford have talked about how it’s very hard to measure the productivity gains from digital technologies in particular. And I think another key point is that these general purpose technologies’ effects on productivity take decades to materialize. So we might not see the computer in the productivity statistics, as the economist Solow once said, but eventually we will see the impact come.
Sen. Angus King (D-ME):
One of the major spurs of the economic boom, if you will, of the nineties and the early part of this century was the integration of the computer into the workforce. That enabled a great economic upsurge.
Dr. Jeffrey Ding:
Exactly, and the key is that it took more decades to happen than we initially predicted.
Sen. Angus King (D-ME):
Now, Dr. LeCun, a very practical question. How feasible is watermarking of AI-generated images? This is of concern to us frankly because we are very likely to be subject to AI-generated disinformation, very skillful, our face, our voice, our gestures, but completely false. How feasible is it to require that AI-generated images on Facebook or on TikTok or Instagram all be watermarked or labeled in such a way that the consumer will know they’re looking at something that isn’t real? Is that technically feasible, and is that something we should be thinking about here as we’re thinking about regulation?
Dr. Yann LeCun:
Senator, this is a very timely question. Of course, it is technically feasible. The main issue is for a common standard to be adopted, so there needs to be a common way of watermarking, in a visible or invisible way using steganography, the fact that a piece of content has been produced by a generative AI system. This can be done with images and video and audio, such that a computer can detect whether the content has been generated by a generative AI system. But for the user not to be able to counteract it, the products used to produce the content will have to obey that standard, and so it needs to be adopted industry-wide. The problem is much more complicated for text. There is no easy way to hide a watermark inside of a text. People have tried to do this by manipulating the frequency of different words, but it’s very far from perfect. But text is produced by humans; it’s not like a photograph, which you can take anywhere. In the end, the person posting the text has to be responsible for that content.
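As a rough illustration of the invisible watermarking Dr. LeCun describes, a toy sketch of least-significant-bit steganography is shown below. It is illustrative only; real provenance standards are far more robust and must survive compression and editing, and the tag string here is hypothetical.

```python
# Toy sketch of invisible watermarking via least-significant-bit steganography.
# Real provenance schemes are more robust; this only illustrates hiding a
# machine-readable tag inside pixel values.

def embed(pixels: list[int], tag: str) -> list[int]:
    """Hide the UTF-8 bytes of `tag` in the low bit of successive pixel values."""
    bits = []
    for byte in tag.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract(pixels: list[int], n_bytes: int) -> str:
    """Read back n_bytes bytes hidden by embed()."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return data.decode("utf-8")

if __name__ == "__main__":
    image = [120, 121, 122, 123] * 40          # stand-in for pixel channel values
    marked = embed(image, "AIGEN")             # hypothetical tag meaning "AI generated"
    print(extract(marked, 5))                  # -> "AIGEN"
```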
Sen. Angus King (D-ME):
So we should not have liability protection like Section 230? Publishers should be responsible for what they produce?
Dr. Yann LeCun:
Senator, I’m not a lawyer. I know that Section 230 has been crucial for the success and the development of the internet, but I would certainly be happy to put you in touch with experts.
Sen. Angus King (D-ME):
I hope the panel will help us on this watermarking question, because I think that’s something we really need to understand, and it may well be part of whatever legislation we’re developing, and we need your expertise on that, that the consumer should know what they’re looking at. Thank you, Mr. Chairman.
Sen. Mark Warner (D-VA):
I concur. And I think the notion that there could be seven or eight or ten different standards, if each platform chose, may not get us there. With apologies to my friend Mike Rounds, Senator Lankford.
Sen. James Lankford (R-OK):
Well, thanks for apologizing to Mike Rounds that I’m going to be up next. I appreciate that. So thanks for your testimony and thanks for the research and the work that you’ve already done on this. Part of the challenge, whether it be watermarking or whatever it may be, is that obviously there are open platforms out there, both ones that Meta has produced and ones that the Emiratis have produced. And obviously we don’t have authority to tell the Emiratis what they can actually produce for watermarking, so this becomes their tech in that sense. So it becomes much more difficult in this process. You go back decades ago: we were researching how to build rockets and to be more effective, both for space and for military use, and we understood that other countries were working on the same thing, for both space and military use, and we had to be able to determine the differences.
And so we try to set limitations for that. We’re in a unique position now where the PRC is also very interested in partnering, and there are current research hubs on AI in the PRC that Google and Microsoft both have at this point. The question is how far does that go, what do we engage with, and at what point does that become facilitating someone who may be an economic adversary, and who we hope is never a military adversary, in that kind of partnership or that kind of research? So at Meta, as y’all are dealing with this and trying to think through partnerships, whether it be cloud hosting with Alibaba or actual partnerships with PRC entities for research in the area of AI, how should we approach that from the intelligence committee as just a national security issue?
Dr. Yann LeCun:
Thank you, Senator. This is a question I think Dr. Ding is much more expert at than I am. Meta does not operate in the PRC, for two reasons: because the regulations in the PRC about user privacy are incompatible with Meta’s privacy principles, and also because the PRC basically wants to control what information circulates. So Meta does not operate in the PRC.
Sen. James Lankford (R-OK):
Alibaba is a cloud partner with Meta though, aren’t they?
Dr. Yann LeCun:
So Alibaba has installed LLaMA 2 and provides it as a service on its cloud services outside of the PRC. So, I don’t know. Okay.
Sen. James Lankford (R-OK):
Dr. Ding, you want to be able to help us unravel this? This will be a larger policy issue that we’ve got to be able to resolve.
Dr. Jeffrey Ding:
Yeah, I think it goes back to our conversation about US investment flows into Chinese AI companies. A similar story, a similar debate, is happening around whether US multinational technology giants should have R&D labs in China. If I were to rephrase your question: like Microsoft Research Asia in Beijing, which is
Sen. James Lankford (R-OK):
Or partnerships, or PRC partnerships that are here in the United States actually, where they do research together on it, knowing full well where that research goes.
Dr. Jeffrey Ding:
Yeah, I think my take is, one of the senators earlier mentioned this idea of running faster. My take is the US can adopt one of two approaches. One is this sort of Fortress America approach where we say we can’t let any technological secrets leak to China. The second is this run-faster approach, where we take the bet that with our open economy, our open system of innovation, there are going to be some leaks. There might be some partnerships that might allow China to get a little bit further in AI than they otherwise would have, but that partnership might also help US companies run faster. So being able to access global innovation networks and keep abreast of what’s happening not just in China but in the UAE or Israel, I think the advantages of that, and continuing to maintain the openness of those global innovation networks, are always going to favor the US in the long run in terms of our ability to run faster.
Sen. James Lankford (R-OK):
So it’s going to always favor us in the long run based on what?
Dr. Jeffrey Ding:
I think there’s a couple of historical examples in this space. So we had similar debates about satellites and previously we had a lot of export controls on satellite technology, but over time we relaxed those controls because we realized first of all, this technology is so commercially based and being driven by the commercial sector that Chinese companies were just getting satellite parts from European suppliers or other hubs in this global innovation network of satellites.
Sen. James Lankford (R-OK):
But that’s already developed technology. The challenge that we have with the AI side is that we’re still in the process of developing in so many areas; when you form a partnership, then they’re getting it near simultaneously. That’s a very different issue. If you’ve got a satellite part or a satellite as a whole that’s already been developed, already used, where we’re already seeing copies of it and already seeing other commercial innovation on it, that’s different than ground zero. If we’re going to retain a competitive edge, having someone at the table who may then be exporting that in real time becomes a real challenge for us on the intel side of things, and a relationship issue. It’s the reason that we partner with Russia, through NASA, on technology on the space station, but we would not have partnered with the Soviet Union in the earliest days of all of our work, because that’s the innovation side of things. Now, while we’re always innovating, it’s about trying to protect what’s first generation. Does that make sense? So that’s an issue, long-term, that we’ve just got to determine: what’s the best way to do that, where do you put limitations, and how do you develop those? Thank you, Mr. Chairman.
Sen. Mark Warner (D-VA):
Senator Rounds. He actually asked pretty damn good questions. Senator Bennet.
Sen. Michael Bennet (D-CO):
Thank you, Mr. Chairman. And I want to thank my colleague, Senator Lankford, for his questions. I think it’s an important thought exercise to consider where we were with space technology 10 years ago, when it was zero hour for that technology. I think, at least from my perspective, it’s very, very clear that our complete lack of export controls, our complete lack of paying attention to the protection of our IP, has allowed China to build something in outer space that’s a near peer competitor to ours, and even worse than that, without the expense that we went through to develop it, without the expertise, and without the society that Dr. Ding has talked about, which I never would bet against either. But I do think that is a serious problem that we have spent the last 10 years dealing with, a serious issue with respect to space, and I hope we find a way to avoid it here, which I think is your point.
I definitely want us to avoid it here. Dr. Jensen, something you said at the very beginning of the hearing caught my attention, and I thought it was worth more elaboration. So what if we find ourselves in a Cuban missile crisis again, or something equivalent to that? You’ve obviously thought about that, so let’s talk about it. What would that look like today, versus what it would’ve looked like in the early 1960s, when President Kennedy was trying to reach the decisions he was trying to make, Khrushchev was trying to reach the decisions he was trying to make, and both people were making fundamental mistakes of judgment along the way? And in the end, it resolved itself in the best resolution possible for humanity. What does that look like in an AI-charged situation?
Dr. Benjamin Jensen:
After this meeting, Senator, I’m going to send you the generative AI artwork we did on this, where we had it imagine a Salvador Dali painting of an Edward Murrow-type news broadcast of that moment, and it is both beautiful and deeply disturbing.
Sen. Michael Bennet (D-CO):
I’m going to hang it up right next to Angus’s poem.
Dr. Benjamin Jensen:
Yeah, Dr. Seuss meets nuclear war.
Sen. Michael Bennet (D-CO):
Exactly.
Dr. Benjamin Jensen:
So this came out of a series of tabletop games we actually did for the Defense Threat Reduction Agency.
Sen. Michael Bennet (D-CO):
Actually, we should ask ChatGPT. If you’re listening, we should ask them to run the scenario for us.
Dr. Benjamin Jensen:
Just don’t ask them manually,
Sen. Michael Bennet (D-CO):
But you’re going to do it for us, so go ahead.
Dr. Benjamin Jensen:
I’ll do it for you. But we did this in a tabletop for DTRA, thinking about what exactly a critical crisis moment would look like. And one of the most interesting things is there’s such a tendency to move faster, but faster isn’t better, even when you’re analyzing data. And what we found is a lot of the discussion was about, just as Senator Rubio was talking about, you’re getting information, correct or incorrect, and now it’s triggering these very human instincts, right? We’re all of a sudden getting afraid, we’re nervous, your heart’s racing. It’s the person interacting with the algorithm. There is no pure machine. What we found is that maybe our earlier generation of statesmen and women were brilliant by slowing down crisis decision-making. If you’ve ever seen the red phone, it’s not a phone, it’s a telex, for a reason, because it deliberately slows you down and makes you deliberate. So what we walked away with is you’re going to be fine in the crisis if you know when to slow down and not let the machine speed you up. Now, how do we get there? Again, this is why at CSIS, in the Futures Lab and the ISP program, we are running tons of tabletops; actually, I’ve seen some staffers here, and we invite congressional staffers out, because we have to actually think about those moments now, in a very human sense. This isn’t necessarily fine-tuning the algorithm. It’s those interactions.
Sen. Michael Bennet (D-CO):
How about if you had to replay, on a day-to-day basis, the decisions that were made about our nuclear arsenal or the Soviet Union’s nuclear arsenal? Have you thought about that at all, what that would look like?
Dr. Benjamin Jensen:
Sure. At the Office of Net Assessment in the late seventies and early eighties, they funded the RAND RSAS program, where they used old-school expert-system AI to do large-scale modeling of what strategic competition looked like, the military balance, and actually to fight entire simulated campaigns. So what Senator Cornyn was saying is true: we’ve been experimenting with variations of the science of the artificial for generations, even when the intelligence wasn’t quite there, and I think we’ll continue to do so, because ultimately crisis decision-making is about people, it’s about politics, and it’s about emotion, even more than the models in front of you.
Sen. Michael Bennet (D-CO):
I think, just if I could lay on one more, not another question, just an observation, Mr. Chairman, before I stop. We’ve had this discussion today about a race, in effect, and can we win the race or is China going to win the race? And I think we haven’t had a discussion about how we get to an end state here where AI reflects the values that this democracy supports, in terms of freedom, in terms of rights, in terms of free speech and other kinds of things, so that if there are people unfortunate enough on planet Earth to live in a totalitarian society where the authoritarian is rolling out that sort of AI system, there is something available to the rest of humanity that is not that lowest common denominator, or uncommon denominator. I don’t know the answer to that, but I suspect that has a lot less to do with some sort of race with China than with ensuring that, as we think about the implementation here, we’re doing it in a way that actually is true to those core values that we have. And that, among other things, is going to look a lot different than the rollout of social media over the last 15 years or so, as you said at the outset of this hearing. Thank you, Mr. Chairman.
Sen. Mark Warner (D-VA):
In a brilliant lead up to Senator Rounds.
Sen. Michael Rounds (R-SD):
Thank you, Mr. Chairman. Listening to Senator King’s poem, I wondered if it was a hallucination on the part of ChatGPT at that point, as it kindly talked about our committee, and I’m wondering if it would’ve been a different poem if it had been a House member requesting that type of message about a Senate committee. Dr. Jensen, I want to begin with a question to you about our current state of play on loitering munitions. I’m thinking back to the Nagorno-Karabakh war between Armenia and Azerbaijan in September of 2020. Azerbaijan was very, very successful in a very short time period using loitering munitions, literally identifying targets using AI, actually an Israeli drone system that was there in the marketplace; they could buy it. Can you talk a little bit about just how widespread and how difficult the battlefield situations are right now with regard to the use of AI?
Dr. Benjamin Jensen:
I’m going to try to be short; I could talk to you about this all day, Senator. I think Ukrainian President Zelensky actually summed it up best when he said it’s trench warfare with drones. And one of the reasons this spreads is because cost really matters. So if I can get a low-cost ability to move a munition closer to its intended target with a good circular error probability to strike, I’m going to do it. That’s just human instinct. If we’re in a fight, you’ve got to win. And why I think you’re going to see something really interesting on the horizon, and why we need to get our house in order in the United States, is that the only way to make that work is if you allow people to actually tailor their model from the bottom up. You’ve actually seen this in Ukraine, where nonprofits and tech workers have been able to train imagery recognition on the fly.
If they had waited for a standard US government certified algorithm and piece of equipment, it would not look the same, right? So they’re able to take all that drone footage, quickly retrain their model in the battlefield, and then vector the attack. And that’s why, honestly, we’re going to have to be honest about how expensive this is going to be. The intelligence collection costs to get a picture of every tank at every time of day, at every angle, just so you reduce the risk, mean you are now going to have constant collection going on to train that model. And if you don’t have the people who know how to use it and interpret it, you’re going to have the training and calibration held up at such a high level that the bureaucratic time to use will not be there.
Sen. Michael Rounds (R-SD):
But the point being it exists today. The cat’s out of the bag and our adversaries or other countries are utilizing it today, and the United States is probably in a position to use it today as well.
Dr. Benjamin Jensen:
We are in a position to use it and in the best of American tradition, let’s scale it and do it better and more just than the other guy.
Sen. Michael Rounds (R-SD):
Dr. LeCun, Europe has developed a model right now, or at least they’re in the process, through their European Parliament, of developing a model to regulate AI. They’ve identified high-risk categories along with two other categories of lesser risk. Have you had an opportunity to look at that, and what is your thought about the approach they’re taking in terms of trying to regulate AI?
Dr. Yann LeCun:
Senator, there are principles in that bill that are probably a good idea, although I don’t know the details, frankly. I think the startup and industry community in Europe has been quite unified in opposing that regulation, not because of the points that you’re making, which I think are probably good ones, but because of the details of the regulation. Frankly, my knowledge of it is too superficial to make more comments about it.
Sen. Michael Rounds (R-SD):
Very good. Dr. Ding, I’m just curious about weapon systems on call today. As indicated earlier in the conversation, right now we have weapon systems, whether on one of our ships near our coastlines and so forth, that once we’ve armed them and they can identify an incoming as a threat, we utilize, because in some cases there is no way a human could make the decision as quickly as that machine can. Is it fair to say that not just us but our adversaries are also using that same kind of weapon system and that it’s being incorporated on the battlefield today?
Dr. Jeffrey Ding:
Oh, wow. I think my mic has been on the whole time, so sorry if there’s been any extra sound. I’ll defer mostly to Dr. Jensen’s answer on this. My slightly different take is that loitering munitions, to the best of my knowledge, are not using the cutting-edge deep learning advances that Dr. LeCun is talking about. So I guess it’s AI if you define the term very, very loosely, but the algorithms underlying the fundamental breakthroughs in the civilian AI space today, to the best of my knowledge, are not being deployed at scale in any military, whether it be the US or any other leading military. I think that speaks to how these technologies take a very long time to diffuse and become adopted throughout different militaries. I guess the optimistic view from that is we do have time to hopefully figure some of these things out. That might be a different view than you or others in this room hold, though.
Sen. Michael Rounds (R-SD):
Thank you. Thank you, Mr. Chairman.
Sen. Jon Ossoff (D-GA):
Thank you, Mr. Chairman, and thank you to the panel. So Dr. Ding, as I understand the argument you’re making and it’s a compelling one, the US has an advantage because of the structure of our market economy and our R&D enterprise that this technology can be diffused, adopted across sectors more rapidly, enabling us to realize productivity gains more rapidly and so on. I guess a question for you is, is it about maximizing the rate of diffusion in your terms or is it about achieving the optimal rate of diffusion and what’s the difference between those two?
Dr. Jeffrey Ding:
Yeah, it’s a great question. I think there is a difference. I can imagine a world where a country diffuses a technology very quickly in the initial years, maybe a technology that’s unsafe or harmful, and then there’s a backlash to that technology so it doesn’t reach that optimal state or that optimal level of diffusion that you’re talking about in the long run. So for me, when we’re having discussions about AI regulations, oftentimes it’s framed as regulation will hamper diffusion, and maybe in the short term it might reduce the speed of diffusion, but I think smart, sensible regulation that ensures more trustworthy AI systems, safer AI systems in the long run will get us to that more optimal rate of diffusion that you’re talking about.
Sen. Jon Ossoff (D-GA):
And in addition to, and by the way, I think the conversation is a little bit overweighted toward risk, to the point where we are in jeopardy of unduly restraining the diffusion of some productivity-enhancing or research-advancing capabilities. But in addition to the risks associated with economic displacement, which you spoke to earlier and Senator Rubio spoke to earlier, what are the other specific risks that you anticipate could emerge from too-fast diffusion and adoption?
Dr. Jeffrey Ding:
Some of the ones we’ve talked about today with regard to the use of these large language models, for example, for disinformation and misinformation, to enable propaganda at scale. One of the risks that we haven’t talked about today: we’ve seen examples of algorithms with poorly specified reward systems. So there’s an example of an OpenAI system where a boat is trying to go through a course as fast as possible, and it’s driven by an AI model based on reinforcement learning, and because of how the programmers specified what the model should learn, the model learns that the best way to accumulate the most points in the fastest amount of time is just to crash the boat immediately, and that kind of creates some sort of
Sen. Jon Ossoff (D-GA):
Short-run versus long-run incentives, for example. Dr. Jensen, what are some specific potential applications of this technology in conflict avoidance and risk mitigation? I mean, we’ve talked a lot about how it can make militaries faster, better informed, more lethal. What are the applications in the institutional structures analogous to the red phone, the hotline, or the now more abundant UN Security Council, or the kinds of verification regimes that were in place around the Test Ban Treaty and so on, Open Skies?
Dr. Benjamin Jensen:
Yeah. Before we get to that harder intelligence problem set, I know that you have an interest in human rights in this as well. So think about what a peacekeeping mission would look like if I could actually tailor my messages when I’m doing key leader engagements and meeting with different stakeholders. I think at that most basic level, the best wars you win are the ones that you never fight, without ceding the advantage, and usually that requires a degree of actually managing crises all over the world. So I think there’s a whole way we could use these in peace building and development as well. When we go to the harder target sets you’re talking about, which deal with problems where an adversary is deliberately hiding something, so I’m using Open Skies, you can see me looking, but we’re playing this kind of compliance game.
I have to expend energy to hide. I think there’ll be very clever ways of building models that simulate some of that or even help you analyze some of the data. But again, I would even be okay if we could have the simple models Dr. Ding is talking about. I would be okay if we could even just have some basic imagery recognition and accelerate it faster than we currently have, and then get to the other stuff. The most beautiful AI thing I’ve seen is watching a military officer, when we introduce it in the classroom, try to write their commander’s intent. The most personal thing any military officer will do, because you are responsible for your actions, is to write your commander’s intent. Well, why wouldn’t I want to have a dialogue with the corpus of military history and look at the different ways it was worded and the different ways I could hone my own voice as I took responsibility for using force?
Sen. Jon Ossoff (D-GA):
Mr. Chairman, one more question if that’s all right. So Dr. LeCun, I’m not entirely sure what exactly you are proposing in your discussion of the merits of open source systems for the development and diffusion of the technology. Are you suggesting that it should be mandated that models be based upon and licensed under open source principles? Are you suggesting that it’s simply preferable if developers use open source ethics and guidelines in the development and licensing of their models? What exactly do you mean when you advise us that this is desirable?
Dr. Yann LeCun:
Senator, thank you for your question, which I’m personally very, very interested in. I think it should certainly not be mandated, but it should not be regulated out of existence either. There are people who are arguing that AI technology, particularly in the future, will be too dangerous to be accessible. And what I am arguing for personally, and also Meta as a policy, is that, on the contrary, the way to make the technology safe is to actually make it open, at least the basic technology, not the products that are built on top of it, to ensure American leadership, because this is the only way we know to promote progress as fast as we can and stay ahead of our competitors. So that’s the first point. Then there is a sort of future to imagine, a future in which AI systems reach the level of human intelligence, for example let’s say a decade or two from now, and the number may be wrong, where every one of our interactions with the digital world will be mediated by an AI system. All of us will have an AI system helping us in our daily lives all the time. You’re familiar with the situation because you have staff working for you. So this would be like having a staff of artificial people, basically, who are possibly smarter than you, and I’m familiar with having people who are smarter than me working with me.
Sen. Angus King (D-ME):
Job loss again.
Dr. Yann LeCun:
Well, I defer to what I learned about this from Erik Brynjolfsson, whose name was mentioned by Dr. Ding before. I think we have no idea what the major jobs will be in 20 years. There’ll be new jobs we can’t imagine today. But to continue on this picture, everyone’s information and interaction with the digital world will be mediated by one of those AI systems, which basically will constitute the repository of all human knowledge. That cannot be proprietary; it’s too dangerous. It has to be open, and contributed to by a very wide population, the way, for example, Wikipedia is produced through crowdsourcing. That’s the only way to have enough of a diverse set of views to train those AI systems. They need to be able to speak all of our languages and know about all the world’s cultures, and this cannot be done by a single private entity. It will have to be open, and it will occur, as long as it’s not regulated out of existence, because it’s kind of the most natural way things will evolve, the same way things have evolved with the internet: the internet has become open source because it’s the most efficient model.
Sen. Angus King (D-ME):
Thank you.
Sen. Mark Warner (D-VA):
I’ve got a couple more questions, but Senator King’s got one, and I want you to stay for Jon.
Sen. Angus King (D-ME):
I’d like to. As you know, we’ve been doing a lot of work on AI. We had the big forum last week with some amazing folks, but I want to thank you all; this has been a very informative panel. Here’s my question. We’re in the legislation business. What’s the problem? Everybody’s talking about legislation, that we’ve got to do something with AI. I would like to ask you to do some homework and give us: here are four things that the Congress should address with AI. Is it watermarking? Is it job displacement? Is it copyright? I’m searching for the problem that we’re trying to solve, because until we know exactly what we’re trying to solve, we can’t begin to write legislation. So you’re in a position to tell us, or to suggest to us, what you think the major problems we should be addressing are. That’s my question. Thank you.
Sen. Mark Warner (D-VA):
Let me, I echo what everybody said. Very informative. First of all, I’ve got to take a couple of quick shots, fair or unfair. Dr. LeCun, when you were talking about all of the steps that Meta went through before it released its large language model, it seemed appropriate. But I think back, from a reference point long before you were with Meta: if we had used that same testing model against the initial Facebook product, I am not sure you could have known the downside implications. Facebook was going to be tested on, does it bring people together in a social setting? Yeah, it did pretty well, but I don’t think there was an understanding that it might also lead to the mental health issues that exponentially come out of being so dependent upon this social connection. So I’m not sure how you fully test everything on the front end. I think about the fact of this connectivity, much of which this committee exposed, when foreign entities, Russia in particular, used these platforms, which did not have that intent, through the use of bots and other things, to create huge downsides. I think about the fact that, and this is one of the closed models, and I’m wide open on this open-closed analogy, but I’m
really concerned that we’ve already seen, say with ChatGPT, that with very little push and pull, all of the guardrails and protections that were put in place didn’t stand the test, and you very quickly started to have the ChatGPT model give out answers that were well beyond its scope. Now that, in a sense, would be, I would argue, a reason for open, because you have much more of the testing with the white hat hackers and so forth. I’m really worried, and this is one of the areas where, in understanding Dr. LeCun’s comment, you don’t want to write it out of existence, but do we really want to trust that the risk-benefit analysis should only be done by the vendors,
who may or may not have the same long-term societal goals? That’s part of our responsibility. And the thing that I think people may not have fully appreciated in Dr. Ding’s comment, which I’ve been taught about by much smarter people than me: in the boat example, the programmer’s goal was to create a tool that could show how this boat could destroy as many bad guys as possible. Well, the model tested itself in a way that thought about the problem in a way that obviously the programmer never assumed, and it figured out that the best way to score the most points was actually being self-destructive, ramming directly into the adversary rather than avoiding it. I don’t think this is too big of a stretch, but it’s a little bit like HAL in 2001: if you don’t program it right, and how smart can any of the programmers be? I’m not saying, per Dr.
Jensen’s point, that assuming the government bureaucrats or the politicians are going to be smarter may be the right presumption either. But I do think there is kind of an underlying question here: do we trust, should we simply trust, the vendors alone to make these determinations? And I’d like everybody to address this, and then I’ll have one other after. And at the same time, if we presume that there may be some level of outside testing or outside scrutiny, some routinized and enforceable approach available to make sure that these models have been tested, should we have that? And frankly, it goes back to some of my initial questions about definition. If we said that the tools that fell into the AI category had to go through this testing, you might have a whole bunch of AI tools that are simply just advanced computing, using AI now as a marketing tool to redefine themselves. So it’s a long way around to kind of the basic question that Senator King is asking: should we just trust the companies to do all the testing on their own? Because if we don’t get the right questions asked, I think Dr. Ding’s example of the boat self-destructing is only one step away from HAL in 2001. Do you want to go down the line, or how do you want to take it? Dr. LeCun, you take it first.
Dr. Yann LeCun:
Thank you, Senator, for your question. I think there were three or four questions, if I understood correctly, in your remarks. The first is about whether vendors should be trusted. I mean, that’s why we have regulations when a product is put on the market, for example a driving assistance system for a car or a medical image analysis system that uses AI, but
Sen. Mark Warner (D-VA):
You should know that, at least in terms of social media, Congress has done nothing, even though your company and others have said they’d be willing. We have done nothing.
Dr. Yann LeCun:
I’m not aware of that. So what’s happened is an interesting history with social networks, which is that some side effects of enabling people to communicate with each other on social networks were not predicted, perhaps because of some level of naivete or perhaps for other reasons, but they were not predicted. But for most of the ones that were predicted, they were fixed as soon as possible. And so for every attack, for example attempts to corrupt elections, there is a countermeasure; attempts to distribute CSAM content, there is a countermeasure; attempts to spread dangerous misinformation, there are countermeasures; deepfakes, et cetera. All of those countermeasures make massive use of AI today. So this is an example where AI is not actually the problem; it’s really the solution. Taking down objectionable content, for example, has made enormous progress over the last five years.
Terrorist propaganda and things of that type, because of progress in AI. And so, okay, I’m not going to recycle the old joke that the good guy with AI is better than the bad guy with AI, but there are considerably more good people, better educated, with more resources, with AI, than there are bad people, and AI is the countermeasure against AI attacks. So that was the second point. The third point is I think it would be a mistake to extrapolate from the limitations of current AI systems. Current LLMs are really good for producing poems. They’re not very good for producing things that are factually correct. They’re not good as a replacement for something that is actually correct or wise.
It can be entertaining, that’s for sure. But factually correct is different. So in fact, I don’t think current AI technology, LLMs in particular, could be used for the kind of applications that Dr. Jensen was talking about, because it’s just too unreliable at this time. Now, this technology is going to make progress. One of the things that I’ve been working on personally, and various other people as well, is AI systems that are capable of planning and reasoning. Current LLMs are not capable of planning; they’re not capable of reasoning. You don’t want to use them for defense applications, because they can’t plan. They can retrieve existing plans that they have been trained on and adapt them to the current situation, but that’s not really planning; it’s more a kind of memory retrieval. So we don’t yet have technology that is actually capable of planning in real situations; currently we have such technology only for games. So a system, for example, can play Diplomacy, we were just talking about this with Dr. Jensen, or play poker or play Go, things like that. Those systems can plan, but currently we don’t have systems that can deal with the real world and plan. That progress will occur over the next decade, probably.
I’ve been calling this objective-driven AI. These are AI systems that do not just produce one word after the other, like LLMs, but AI systems that plan their answers so that they satisfy a number of constraints and guardrails. These are the AI systems of the future. They’re going to be very different from the ones that currently exist. They’re going to be more controllable, more secure, more useful, smarter. I can’t tell you exactly when they will appear; that’s a topic of research.
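As a loose, purely illustrative sketch of what an objective-driven system might do in miniature, one can contrast emitting the next step greedily with searching over whole candidate plans and keeping only those that satisfy hard guardrail constraints. The actions, objective, and constraint below are invented for illustration and are not Meta's architecture.

```python
# Toy sketch of "objective-driven" selection: instead of emitting the next
# step greedily, enumerate whole candidate plans, discard any that violate a
# guardrail constraint, and return the best-scoring plan that remains.
# The plans, scores, and constraint here are invented for illustration.
from itertools import product

ACTIONS = ["hold", "advance", "signal", "escalate"]

def objective(plan: tuple) -> float:
    """Invented objective: reward progress, lightly penalize plan length."""
    return sum(1.0 for a in plan if a == "advance") - 0.1 * len(plan)

def satisfies_guardrails(plan: tuple) -> bool:
    """Invented hard constraint: never escalate without signaling first."""
    signaled = False
    for action in plan:
        if action == "signal":
            signaled = True
        if action == "escalate" and not signaled:
            return False
    return True

def plan_best(horizon: int = 3) -> tuple:
    candidates = product(ACTIONS, repeat=horizon)
    feasible = [p for p in candidates if satisfies_guardrails(p)]
    return max(feasible, key=objective)

if __name__ == "__main__":
    print(plan_best())   # e.g. ('advance', 'advance', 'advance')
```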
Sen. Mark Warner (D-VA):
Well, let me make sure, I’ve got one other, but I want to hear from Dr. Jensen and Dr. Ding, because I think Dr. LeCun said, well, maybe you’ve got to have some pre-clearance, but I think you’re still more or less saying leave it to the enterprises to decide, no?
Dr. Benjamin Jensen:
I think this is the first major disagreement I’ve had with my new friend. I think a lot of people underestimate how important knowledge retrieval and dialogue are in actual military planning, because so much of good military planning, and I’d actually say this is true of good intelligence analysis too, is that creative spark and how you’re able to start thinking about defining the problem, right? If I had enough time, I mean, the whole point is defining a problem in a way that lends itself to a solution and is transparent. So don’t discount it; even just basic LLMs can augment some very critical parts of military planning right now, before we get to higher-order reasoning. Senator, to answer your question, as you were talking I had a thought experiment about the FDA. What would this hearing look like if we were thinking about the Food and Drug Administration and how you would go about regulating what we put into our bodies?
And if it’s true that we’re going to have most of our interactions, and I think you’re right, mediated through digital assistants of varying degrees of artificial intelligence, I think the hard regulatory question for you is: what counts as the things that the companies who produce these products have to report, have to account for, have to be transparent about, and that can then actually be inspected and certified via some whole new form of AI assurance? And I think it’s going to take, sadly, a generation at least to work out that balance of what actually gets reported, how you study it, what that looks like. But the analogy I come back to is the FDA, and it’s going to be just as important as that, frankly.
Sen. Mark Warner (D-VA):
Dr. Ding, and I should make clear, I was reminded by my guys that your boat analogy was about a boat race, not about one boat taking out the other. But I think the premise stands: because the right question wasn’t asked, the boat took an action that none of the original programmers would’ve thought of, that not winning the race but crashing the boat was a way to score more points.
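The failure mode described here, often called reward misspecification or reward hacking, can be made concrete with a toy comparison. The numbers and behaviors below are invented and are not the actual OpenAI boat-racing environment; they only show how a policy that games the specified reward can out-score the behavior the designers intended.

```python
# Toy sketch of reward misspecification, loosely modeled on the boat-race
# anecdote: the specified reward counts points collected, not races finished,
# so the "crash and loop" behavior scores higher than the intended behavior.
# All numbers are invented for illustration.

def specified_reward(points_collected: int, finished: bool) -> float:
    # What the programmer wrote: only points matter.
    return float(points_collected)

def intended_objective(points_collected: int, finished: bool) -> float:
    # What the programmer actually wanted: finish the race.
    return 100.0 if finished else 0.0

# Two behaviors the learning algorithm could converge to.
finish_the_race = {"points_collected": 20, "finished": True}
loop_and_crash  = {"points_collected": 75, "finished": False}

for name, outcome in [("finish the race", finish_the_race),
                      ("loop and crash", loop_and_crash)]:
    print(name,
          "specified:", specified_reward(**outcome),
          "intended:", intended_objective(**outcome))

# An optimizer of the specified reward picks the looping behavior (75 > 20),
# even though it scores zero on the objective the designers had in mind.
best = max([finish_the_race, loop_and_crash],
           key=lambda o: specified_reward(**o))
print("chosen by specified reward:", best)
```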
Dr. Jeffrey Ding:
Yeah, I don’t think we should expect vendors to be the sole solution to regulating and ensuring safe and robust AI systems. Even if there are more good people, the boat example is a case where everybody was trying to do the right thing, but you still had these accidents that can occur. I would cite my former colleague at the Centre for the Governance of AI, Toby Shevlane; he wrote a lot about structured access to very powerful AI models. He now works at DeepMind, one of the other leading AI labs, and he has proposed ideas about how labs like Meta’s AI lab could open up checkpoints corresponding to earlier stages in the training of these models, to allow outside researchers to study how the models’ capabilities and behaviors are evolving throughout training. So obviously Meta is doing great work on that in terms of all these red teaming exercises, but having ways to involve outside researchers might help check against some of the risks.
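One lightweight version of the structured-access idea mentioned here would be letting vetted outside researchers score intermediate training checkpoints on a fixed benchmark to watch capabilities emerge. The sketch below is hypothetical bookkeeping only; the checkpoint names and the evaluate() stub stand in for a real evaluation harness.

```python
# Sketch of studying how a model's capabilities evolve across training
# checkpoints, as an outside researcher with structured access might do.
# Checkpoint names and the evaluate() stub are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CheckpointReport:
    step: int
    benchmark: str
    score: float

def evaluate(checkpoint_path: str, benchmark: str) -> float:
    """Placeholder: in practice this would load weights and run the benchmark."""
    # Stub so the sketch runs end to end; returns a fake, monotonic score.
    step = int(checkpoint_path.rsplit("-", 1)[-1])
    return min(1.0, step / 100_000)

def audit(checkpoints: list, benchmark: str) -> list:
    reports = []
    for path in checkpoints:
        step = int(path.rsplit("-", 1)[-1])
        reports.append(CheckpointReport(step, benchmark, evaluate(path, benchmark)))
    # Flag large jumps between consecutive checkpoints for closer study.
    for earlier, later in zip(reports, reports[1:]):
        if later.score - earlier.score > 0.25:
            print(f"capability jump between step {earlier.step} and {later.step}")
    return reports

if __name__ == "__main__":
    ckpts = ["run1/ckpt-10000", "run1/ckpt-50000", "run1/ckpt-90000"]
    for r in audit(ckpts, "toy-benchmark"):
        print(r)
```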
Sen. Mark Warner (D-VA):
We want you to take Angus’s tasking on board. But let me ask one last one, and then Jon can jump back in if he wants. It’s this question of whether, with this tool, we need to overweight. Because I’ve been spending a lot of time thinking about where the most immediate threat is coming from with existing AI tools, and I would make the case that two of our institutions that are most immediately threatened are faith in our public elections and faith in our public markets. Public elections, it’s First Amendment, what have you, gets more challenging to think through. But for faith in our public markets, there are already laws in place about deception and manipulation of stocks. What I think, to Angus’s question, which is a good one, where’s the problem? Well, the problem is we now have AI tools that allow those manipulation actions to take place at an exponentially, almost unlimited volume that never before existed. Somebody used to have to cheat on an individual basis or manipulate on an individual basis. But the volume of things that could take place, from deepfakes to product misrepresentation to false SEC claims, just the litany. Because I would argue that there is an overweighted risk of AI taking something that’s already wrong and doing it on steroids, exponentially. And if you would agree with that, then the question is: does it need a new law? Does it need a lower standard of proof, because the damage could be so great? Is it a higher penalty?
Am I going at least directionally the right way? We’ll do it in reverse order this time. We’ll do Dr. Ding and back up the row, and then I’ll turn it back over to Jon if you want.
Dr. Jeffrey Ding:
Yeah, in terms of the specific legislative proposals, let me take some time and get back to you on that. I would say one of the things here is we often overweight the role of technology in this. So you want to protect public markets, you want to protect public democracy: should we spend all the time thinking about how AI is going to impose risks in all these different areas, or should we think about overall procedures or policies that would incentivize and create a more robust and trustworthy media system that would then be able to fact-check any sort of AI-generated false information? So yes, I will get back to you on the specific legislative proposals on the AI side, but I would also emphasize that it’s not just about technology. It’s about how we build the better surrounding society that can ensure this technology doesn’t undermine those two public goods that you mentioned.
Sen. Mark Warner (D-VA):
Dr. LeCun.
Dr. Yann LeCun:
Thank you, Senator, for the question. I’ll speak to the technological and scientific aspects rather than the legislative ones. This electronic astroturfing you’re describing, which if I understand correctly targets the market or the political scene, I think what would help with this would be a systematic way of tracing the origin of a piece of content that would be visible to users. Watermarking is an example of this for pictorial content; unfortunately, that doesn’t work for text. So for text, you have to make the people who post a thing responsible for the content they produce and for its deceptive character. In the US, of course, we have the First Amendment, so you cannot stop people from saying what they want, and you shouldn’t. But there is an interesting point there, which is that the main issue with misinformation is not at the creation level but at the dissemination level. There are interesting academic studies on this by Arvind Narayanan of Princeton University, who has studied those questions. He seems to think the problem is the dissemination, and certainly at Meta we agree with this, because we have systems in place to take down dangerous misinformation and limit its impact, for example on things like vaccines, or any misinformation that endangers the public or the integrity of elections. So I think at the dissemination level there are measures that can be taken: technologically, watermarking for pictorial and audio content, though not for text, and then perhaps some regulatory help. But
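To illustrate the watermarking point for pictorial content only (as noted, nothing comparable works for text), here is a toy sketch that hides and recovers a short provenance bit string in an image’s least-significant bits. It is a minimal illustration under simple assumptions; production watermarks for generative models are far more robust to cropping, compression, and re-encoding.

```python
# Toy illustration of image watermarking: hide and recover a short provenance
# bit string in the least-significant bits of pixel values. Not a production
# scheme; real watermarks are designed to survive crops and re-compression.
import numpy as np

def embed(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each provenance bit into the lowest bit of successive pixels."""
    flat = image.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the lowest bit, then set it to b
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> list[int]:
    """Read the provenance bits back out of the lowest bit of each pixel."""
    return [int(v & 1) for v in image.flatten()[:n_bits]]

# Usage: mark a synthetic 8-bit grayscale image and verify the mark is recoverable.
mark = [1, 0, 1, 1, 0, 0, 1, 0]
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img, mark)
assert extract(marked, len(mark)) == mark
print("provenance mark recovered:", extract(marked, len(mark)))
```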
Sen. Mark Warner (D-VA):
I am going to go to Dr. Jensen; I’m just going to quickly add here. I understand the point about the tool itself versus the distribution, but at some point, do we have to think about the tool? The world decided that the use of chemical gas was going to carry a bigger penalty than shooting somebody, even though both ways you’re going to be dead. And do we need to overweight the risk? Somebody giving a false tip could always manipulate the market, but the volume of manipulation that could take place using AI tools seems to me to be of a different order. The analogy probably doesn’t work, but it’s the gas versus the bullets. There’s something there. So, Dr. Jensen, bring us home, alright?
Dr. Benjamin Jensen:
Task accepted, Senator. Thank you. Your larger question is really a philosophical question about what it means to govern in the 21st century. And to honestly answer,
Sen. Mark Warner (D-VA):
I’m not sure that’s going to bring us home because,
Dr. Benjamin Jensen:
Well, yeah, we can do it quickly, but honestly it’s important, because we’re not going to throw out the beauty of our Constitution. So we have to think about how we maintain our values and standards and execute those laws. And you’re probably going to have to add more, you already have, I mean, through the work you’ve already been doing, we are going to add acts, we are going to add laws, but I think it needs to be done in open settings like this, an open dialogue that allows you to calibrate how far that goes. And to be really clear, obviously this is not a libertarian paradise. There will be regulation, and there needs to be, because it helps create common standards. And standards are strategy. They allow everyone to have foresight and think about what’s going to happen, how to make business decisions about how to actually tool their algorithms and send them to the marketplace.
So I think you’re probably going to have to create something. I’ll definitely get back to Senator King and you and the committee. I think you’re definitely going to have to create something like the FDA, but for AI assurance; I don’t know exactly what that is. I’m not usually a fan of creating new bureaucracies in our government, so maybe it’s an existing body that does it, maybe it’s an extension of NIST. But I think we all have to be patient. This has happened faster than we anticipated, and we’re going to have to work it out over the next generation.
Sen. Mark Warner (D-VA):
Well, if you could spend a little time on it. I think it’s clear we’ve spent a lot of time on public elections; I don’t think we’ve spent nearly enough time on disruption in public markets. And I think we are an event or two away from what Congress would then potentially do, which is overreact. Because the threat could be so great and the tools so unprecedented, the question I pose for you: do you need a new law? Do you simply need a different standard of proof? Do you need a higher penalty? And the potential downside is robbing broad-based faith across the whole market. How do you preempt it? Most of these laws and penalties and proofs are after the fact, and I don’t know how in some of this.
Dr. Benjamin Jensen:
Yeah, I mean, we’ve already had this with flash crashes, in a sense, right? If two algorithms are trying to place trades and each of them is trying to anticipate the other, we’ve already seen flash crashes. What you’re getting at is more the question of intent: when an individual or a network of individuals actively uses these tools, which can also be used for good, to carry out broad-based market deception. And that’s really a straightforward legal question. You hold them accountable, you prosecute them, and you throw them in jail, in a way that still keeps to the rule of law. But that definitely sets an example.
Sen. Mark Warner (D-VA):
But is there, and again I keep coming back to the example I didn’t want to use, there have been times when society has said, we’ve got to so overweight the risk that we’re going to really throw the entire book at you. And I’m not saying we have to go there, but I do worry about the arguments, and I’m not saying from Dr. LeCun, but I have heard this from many: oh gosh, you’re going to regulate away innovation, you’re going to kill the golden goose. That was very much the argument launched by many of the social media platforms in the late nineties and early two thousands, and again, I say to my friends from Meta, we’ve had these discussions many times. I don’t think there are very many members of Congress on either side who wouldn’t say, gosh, we ought to have had some guardrails. Now, there still would be no lack of disagreement on what those guardrails would be, so I rest my case. But the upside threat and potential, as Dr. LeCun just laid out with all of the AI assistants we’re going to have, boy, we’ve got to get it right. And I think I’ve managed to drive away virtually everybody else, but we’re still here. Thank Arjun as well; my staff had to stay, but Arjun didn’t have to. But you guys, it was a very stimulating conversation and a good hearing. I very much appreciate it. We are adjourned.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.