Kelly Adkins is a business and finance reporter for the Medill News Service of Northwestern University (M.S. Journalism), covering the tech and money beat.
WASHINGTON — Artificial intelligence (AI) experts on Wednesday urged Congress to jump into the intimidating world of regulating AI and avoid some of the pitfalls of the past, when the government failed to rein in transformative technology. On the morning of November 8, the Senate Homeland Security and Governmental Affairs Committee hosted a hearing titled “The Philosophy of AI: Learning from history, shaping our future.”
“This is not the first time that humans have developed staggering new innovations. Such moments in history have not just made our technologies more advanced, they’ve affected our politics, influenced our culture, and changed the fabric of our society,” committee chairman Sen. Gary Peters, D-MI, said in his opening remarks.
Past waves of technological change disrupted society in different ways. Today, AI promises widespread automation, which was also a consequence of the British Industrial Revolution and, in the US, the mechanization of agriculture in the 1800s. In his testimony, Dr. Daron Acemoglu, an economist and professor at the Massachusetts Institute of Technology (MIT), said during that time, automation ultimately created millions of jobs. Today, however, his work has found downsides to automation. “Automation accounts for more than half, and perhaps as much as three quarters, of the surge in US wage inequality,” he wrote. The difference today, he told the committee, lies in the fact that AI automation aims to replace human labor rather than empower it.
“To improve human performance, we need to think beyond creating AI systems that seek to achieve artificial general intelligence and human parity,” said Acemoglu.
Applying a philosophical lens to regulation is part of what the government needs to do to take charge, the experts said.
“The reason we must consider the philosophy of AI is because we are at a critical juncture in history, where we are faced with a decision – either the law governs AI, or AI governs the law,” said Margaret Hu, the Taylor Reveley Research Professor and Professor of Law at William & Mary Law School.
Expert witnesses said one current narrative is that humans cannot control AI, but they urged Congress to step in and regulate how AI is created and used in order to change that narrative. However, the committee’s ranking member, Sen. Ron Johnson, R-WI, expressed doubt that the federal government would come together to address the issue.
“This is an incredibly important issue and question, and I just really don’t know whether this dysfunctional place is gonna come up with the right answers,” he said.
In the current political and socio-economic climate, American democracy is weakened and susceptible to succumbing to the downsides and misuse of advanced technology, according to Dr. Shannon Vallor, who holds the chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute, University of Edinburgh.
AI threatens security, privacy; exacerbates inequality
“On social media and commercial tech stages, generative-AI evangelists are now asking: what if the future is merely about humans writing down the questions, and letting something else come up with the answers?” Vallor said. “That future is an authoritarian’s paradise.”
Sen. Peters asked the witnesses whether they saw any merit in more laissez-faire approaches to AI.
“There’s a popular line of thought out there touted by many influential people that unfettered technological innovation will solve all of our problems and it’s going to lead to increased wellbeing for all. Just let it go and we should all be very happy about the end result,” he said, characterizing a point of view that resembles ideas put forward in a recent widely circulated manifesto by prominent Silicon Valley venture capitalist Marc Andreessen.
“I completely disagree,” said Acemoglu, arguing that the advance of technology must take place in an ecosystem “where social input, democratic input and government expertise are part of setting the agenda for innovation.”
“I think this type of techno-utopianism is something that we really need to look at with an eye of skepticism, and especially in a constitutional democracy,” said Hu. “I think that it poses the problem that the ends may not be justifying the means in a constitutional democracy.”
“We solve problems often with the help of technology, but technology doesn’t solve problems,” said Vallor. “And when we start expecting technology to solve our problems for us without human wisdom and responsibility guiding it, our problems actually tend to get worse.”
Where there is technology, there will be a person or party capable and willing to misuse it, Vallor said. Realistically, AI is not something that can be taken out of the hands of bad actors. A better solution would be to identify and mitigate the incentives for misuse. One such incentive could be the tech industry’s large-scale data mining to target digital ads and products. This is just one driver of many adverse effects on users and society, according to the witnesses.
Sen. Laphonza Butler (D-CA) raised concerns that AI would be used to target disadvantaged communities in America and increase inequity. “The reality is that this technology is already widening preexisting inequities by empowering systems that have long histories of racist and anti-activist surveillance,” said Sen. Butler.
“I think we have seen plenty of evidence that if we do nothing, the use of AI technologies will continue to disproportionately harm the most vulnerable and marginalized communities here in this country and also in other countries,” said Vallor. “When you allow large companies and investors to reap the benefits of the innovation in ways that push all the risk and cost of that process onto society, and particularly onto the most vulnerable members of society as we’re seeing today, you produce an engine of accelerating inequality and accelerating injustice.”
Another inequity gap could come between blue- and white-collar workers. Sen. Johnson said he was worried about how college-educated individuals could suffer from AI tools taking their jobs.
Acemoglu pushed back, arguing that AI tools are more likely to displace lower-paid white-collar jobs than managerial work, with the burden of that competition ultimately falling on less-educated workers. For AI innovation to be used for good, he suggested using it to provide support and training to blue-collar and trades workers, specifically, instead of replacing their tasks altogether.
High stakes, potentially irreversible decisions
The witnesses urged the government to get involved in overseeing how AI is developed and used in the private sector. With regard to data privacy, Hu described the issue as a “triangle of negotiation” between the technology companies who create tools, citizens who use them, and the government that regulates the market. She raised the prospect that a Constitutional amendment may be necessary to “enshrine privacy as a fundamental right.”
Acemoglu stressed that the government needs to direct the market to put the focus back on useful innovation and away from profit maximization. A solution could involve taxing digital advertising, which would shift tech business models away from personal data collection. In a similar fashion, policy could direct the type of AI products being created. Acemoglu said OpenAI’s product, ChatGPT, learned from speech patterns on social media to mimic how humans talk.
“The amount of information was not important, it was sounding like humans,” he said. “This is why government regulation is important.”
Sen. Jon Ossoff, D-GA, asked what types of policy could avoid quick fixes, where “whack-a-mole” regulation only patches problems as they arise, and instead lay a solid foundation of basic law and fundamentals for a more comprehensive regime.
In addition to fundamental privacy rights, Hu said another avenue could be amending the Civil Rights Act of 1964 to address possible discrimination in automated systems.
Sen. Ossoff also asked Acemoglu for input on a fiduciary approach to data protection.
“I think we just don’t know which one of these different models is going to be most appropriate for the emerging data age,” said Acemoglu.
The fiduciary model has positives, according to Acemoglu, but there is also room for failure. He said Europe’s General Data Protection Regulation (GDPR) had backfired and may have ultimately handed an advantage to large companies who can bear the burden of compliance.
“If we want to have a fighting chance for an alternative, the government may need to invest in another government agency,” he said.
Acemoglu stressed that government should not stand by and treat privacy issues as an afterthought.
“We really need to institute the right sort of regulations and legislation about what rights people have to different types of data that they have created, and whether those rights are going to be exercised individually or collectively,” he said.
The Biden Administration issued an executive order on AI policy last week, which could be the start of more significant federal AI regulation. Vice President Kamala Harris said during the signing of the executive order that America is a leader on the AI front and can catalyze global action and consensus unlike any other country.
Vallor pointed to the executive order in a response to Sen. Butler.
“You see also in the executive order recently released many moves to empower and instruct federal agencies to begin to take action so that we can in a way start by making sure that government uses of AI are appropriately governed, audited, monitored, and that the powers that government has to use AI are used to increase the opportunity and equity and justice in society rather than decrease it,” she said.
– – –
What follows is a lightly edited transcript of the discussion.
Sen. Gary Peters (D-MI):
The committee will come to order. Well, we are living through perhaps one of the most exciting times in human history as artificial intelligence becomes more advanced each and every day. AI tools have the capacity to revolutionize medicine, expand the frontiers of scientific research, ease the burdens of physical work, and create new instruments of art and culture. AI has the potential to transform our world for the better, but these technologies also bring new risks to our democracy, to civil liberties and even our human agency. As we shape and regulate AI, we cannot be blinded by its potential for good. We must also understand how it will shape us and be prepared for the challenges that these tools will also bring. Some of that work will be accomplished with innovative policy, and I’m proud to have passed numerous bills to improve the government’s use of AI through increased transparency, responsible procurement, workforce training and more.
I’ve convened hearings that explore AI safety, risks, the procurement of these tools, and how to prepare our federal workforce to properly utilize them. But as policymakers, we also have to explore the broader context surrounding this technology. We have to examine the historical, the ethical and philosophical questions that it raises. Today’s hearing and our panel of witnesses give us the opportunity to do just that. This is not the first time that humans have developed staggering new innovations. Such moments in history have not just made our technologies more advanced, they’ve affected our politics, influenced our culture, and changed the fabric of our society. The industrial revolution is one useful example of that phenomenon. During that era, humans invented new tools that drastically changed our capacity to make things. The means of mass production spread around the world and allowed us to usher in a modern manufacturing economy. But that era brought with it new challenges.
It led to concerns about monopolies, worker safety, unfair wages and child labor. It produced the weapons that were used to fight two world wars. In short, it wasn’t just about technology that could be used for good, and I’m grateful our first witness, Daron Acemoglu, has studied this phenomenon. He has not only examined the history of technological change, but also the democratic institutions that are needed in response. In the 20th century, we had trade unions to protect workers’ rights and effective government regulation to keep those industries in check. What tools do we need to meet this moment and what else should we learn from that history? Artificial intelligence also brings unique challenges. The history of technological change has largely centered on human strength and how we can augment it through the use of new machines. AI will affect physical work, but unlike other technologies, it is more directly tied to our intellectual and cultural capacities.
It has already introduced new ways to ask and answer questions, synthesize information, conduct research, and even make art. These qualities, the ability to understand ideas and create culture, are the very foundation of our humanity. We must work to preserve them as they become influenced by artificial tools. Perhaps most importantly, AI’s influence on these capacities is not neutral. These tools, like the humans who make them, are biased. We must define what values lie at the core of our human experience and create technological tools that support them. Our second witness, Shannon Vallor, will be a helpful resource in understanding these ethical questions. She studies the way that new technologies reshape our habits, our practices and moral character. With her help, we can understand the values embedded in these technologies and the effect that they’ll have on our human character. And finally, we’ll explore AI through a constitutional law framework.
AI poses risks to our civil liberties. New surveillance tools can be used to target vulnerable communities. Biometric systems, like facial recognition, can endanger a citizen’s right to due process. Advanced technology brings renewed questions about our privacy and our personal information. If we do not understand how AI can be used in ways that erode our constitutional rights, it can pose a grave danger to our democracy and civic institutions. Our third witness, Margaret Hu, will help us understand these intersections. She researches the risks that AI poses to constitutional rights, due process, and civil liberties. Artificial intelligence has already begun to shape the fabric of our society. Our response cannot come through piecemeal policy alone or isolated technological fixes. It must include a deeper examination of our history, our democracy, and our values, and how we want this technology to shape our future. We must look to the past and learn the lessons of previous technological revolutions. We must answer the ethical questions that AI poses and use these new technologies to build a world where all humans can thrive, and we must protect our civil liberties and democratic institutions against the risks that these tools can pose. This hearing provides an excellent opportunity to focus on this work, and I’d like to
thank our witnesses for joining us today, and we certainly look forward to your testimony. It is the practice of this committee to swear in witnesses, so if each of you would please rise and raise your right hands. Do you swear that the testimony you give before this committee will be the truth, the whole truth, and nothing but the truth, so help you God? Thank you. You may be seated.
Our first witness is Daron Acemoglu. Professor Acemoglu is an economist at the Massachusetts Institute of Technology. His work focuses on the intersection of technological change with economic growth, prosperity, and inequality. Professor, welcome to the committee. You are recognized for your opening comments.
Daron Acemoglu (written statement):
Thank you for inviting me to testify on this important topic. I will argue that there is a pro-human, meaning pro-worker and pro-citizen, direction of artificial intelligence that would be much better for democracy and shared prosperity.
Unfortunately, we are currently on a very different and worrying trajectory. Digital technologies have already transformed our lives and AI has further amplified these trends, but all has not been good. US inequality has surged since 1980. Many workers, especially men without a high school degree or with just a high school degree, have seen very significant declines in their real earnings, and inequality has multiplied in other dimensions as well. My research indicates that the most important cause of these trends is automation, meaning the substitution of machines and algorithms for tasks previously performed by workers. Automation accounts for more than half of the increase in US inequality. Other trends such as offshoring and imports from China have played a somewhat smaller role. Technological change is a force for good, but we need to use it the right way. During the mechanization of agriculture and the three decades following World War II, automation was rapid, but the US economy created millions of good jobs and built shared prosperity.
The main difference from the digital age was that the new technologies not only automated some tasks but also created new ones for workers. Henry Ford’s factories used new electrical machinery that automated some work, but at the same time, they also introduced many new technical tasks for blue-collar workers. Simultaneously, manufacturing became much more intensive in information activities and created a lot of jobs through these channels as well, such as in design, planning, inspection, quality control and accounting. Overall, new tasks were critical for employment and wage growth during these eras. Unfortunately, my research shows that emerging AI technologies are today predominantly targeting automation and surveillance. The emphasis on surveillance is of course much more intense in China. We are already seeing the social and democratic implications of the rising inequality in the US. Areas that have been most hit by Chinese competition or the introduction of robots show much greater degrees of polarization, and inequality undermines support for democracy, and this lack of support makes democracies more unstable and less capable of dealing with challenges.
This path is not inevitable. To improve human performance, we need to think beyond creating AI systems that seek to achieve artificial general intelligence or human parity. The emphasis on general intelligence is not just a chimera, but distracts from the more beneficial uses of digital technologies to expand human capabilities. Making machines useful to humans is not a new aspiration. Many people were working on this agenda as early as 1949, and many technologies that have been foundational to our lives today, including the computer mouse, hyperlinks and menu-driven computer systems, came out of this vision. Machine usefulness is more promising today than in the past. An irony of our current age is that information is abundant but useful information is scarce. AI can help humans become better problem solvers and decision makers by presenting useful information. For example, an electrician can diagnose rare problems and accomplish more complex tasks when presented useful information by AI systems.
The analog to the pro-worker agenda in communication is a pro-citizen perspective to provide better information to individuals and enable them to participate in deliberations without manipulation or undue bias. The opposite approach is one that focuses on surveillance, manipulation and manufacturing false conformity. The evolution of social media illustrates this manipulative path, with algorithms used for creating echo chambers and extremism. The survival of any political regime depends on how information is controlled and presented. Authoritarian rulers have understood this for ages. The Qin rulers in China 2,200 years ago reputedly burned books and executed people who could rewrite them to control information. The anti-democratic use of computers is clearly visible in Russia, Iran and China. Who controls information matters no less for democratic regimes. Digital platforms’ monopoly over information today is completely unprecedented. Their business model is based on monetizing data via digital ads, and much work in social psychology documents, and in fact unfortunately teaches platforms, how to increase engagement by manipulating user perceptions and presenting them with varying stimuli and emotional triggers.
AI is a new technology, but as you pointed out, history offers important clues about how best to manage it. The British Industrial Revolution is today remembered as the origin of our prosperity. This is true, but only part of the story. The first 100 years of the British Industrial Revolution were simply awful for working people. Real incomes stagnated, working hours increased, working conditions deteriorated, and health and life expectancy became much worse in the face of uncontrolled epidemics and intensifying pollution. The more positive developments after 1850 were due to a major redirection of technology away from just automation and towards pro-human goals, and this was embedded in fundamental political and social changes, including democratization and new laws to protect worker voice and worker rights. Just like during the industrial revolution, we have widely different paths ahead of us. A pro-human direction of AI would be much better for prosperity, democracy and national security.
Yet that’s not where we are heading. My five minutes is up, but I will be happy to discuss later policy proposals for redirecting AI towards a more beneficial trajectory. Thank you.
Sen. Gary Peters (D-MI):
Thank you, Professor. Professor Hu is a professor of law and director of the Digital Democracy Lab at William & Mary Law School. She is a constitutional law expert and her research focuses on the intersection of technology, civil rights, and national security. Professor Hu previously served in the Civil Rights Division at the US Department of Justice. Professor Hu, welcome to the committee. You’re recognized for your opening comments.
Margaret Hu (written statement):
Thank you. Good morning. It’s an honor to be a part of this critically important dialogue on the philosophical and historical dimensions of the future of AI governance. The reason we must consider the philosophy of AI is because we are at a critical juncture in history where we’re faced with a decision: either the law governs AI, or AI governs the law. Today I would like to place AI side by side with constitutional law. Doing so allows us to visualize how both function on a philosophical level. It also provides us with a window into how they are philosophically in conversation with one another and gives us a method on how we must best respond when we see that they are philosophically in conflict with one another. Constitutional law is more than just the text of the Constitution and cases. Similarly, AI is more than its technological components.
AI can be understood as more of a philosophy than a technology, like constitutional law. They are both highly philosophical in nature. Specifically, AI is animated by multiple sciences and philosophies, including epistemology, a science and philosophy concerning the structure of knowledge, and ontology, the philosophy of existence. AI and the law is highly complex and requires grappling with these interdisciplinary consequences, just as constitutional law is highly nuanced and contextualized. In the past year, we have entered a new phase of large, commercially driven AI investments. This new phase brings into sharp relief the need for a dialogue on rights-based AI governance. The creators of generative AI have shared that their ambition is to advance artificial general intelligence, which aims to surpass human capacities. Generative AI and AGI ambitions force us to confront these epistemological and ontological questions head on and with some urgency. In a constitutional democracy, AI is already being deployed as a governing tool in multiple contexts.
AI, particularly due to its combined ontological and epistemological powers as well as its combined economic, political and social power, has the potential to evolve into a governance philosophy as well as potentially a governance ideology. AI is constitutive of not only a knowledge structure, but also a market structure in an information society and a governing structure. In a digital political economy, the incentives of AI privatization and the exponential growth of datafication can operate as an invisible governing superstructure under an invisible and potentially unaccountable hand. Additionally, AI can execute both private and public ordering functions, sometimes without authorization, rapidly shifting power towards centralized, privatized, and often automated and semi-automated methods of governing. The Constitution is inspired by a philosophy of how to guarantee rights and how to constrain power. Constitutional law is animated by a commitment to a governing philosophy surrounding self-governance through a republican form of government.
In theory and philosophy, it separates and decentralizes power, installs checks and balances to prevent or mitigate power abuses, and supports a government that is representative of the people, by the people, for the people. An important question at this critical juncture is how to ensure that AI, as it potentially evolves into a governing philosophy, will not compete with and rival constitutional law as a governing philosophy in a way that sacrifices our philosophical commitments to fundamental rights and to constraints on power, including separations of power. The Constitution is more than a text, it is a philosophy. AI is more than a technology, it is a philosophy. So I’d like to return to my opening question, which is, will AI govern the law or will law govern AI? In order to preserve our democracy and reinforce it, there can only be one answer. The law must govern AI. Thank you.
Sen. Gary Peters (D-MI):
Thank you, professor. Finally, Shannon Vallor is a professor at the University of Edinburgh. She is appointed to the University’s Department of Philosophy as well as the Edinburgh Futures Institute. Her research centers on the ethical challenges of AI and how these new technologies reshape human character and habits. Professor Vallor, wonderful to have you here all the way from Edinburgh. You may proceed with your opening comments.
Shannon Vallor (written statement):
Thank you, Chairman Peters and distinguished members of the committee, for this opportunity to testify today. It’s a profound honor to address you on a matter of such vital importance to the nation and the human family. I direct the Centre for Technomoral Futures at the University of Edinburgh, which integrates technical and moral knowledge in new models of responsible innovation and technology governance. My research has focused on the ethical and political implications of AI for over a decade. It is deeply informed by our philosophical and historical perspectives on AI’s role in shaping human character and capabilities. The most vital of these capabilities is self-governance. This capability to reason, think and judge for one’s self how best to live underpins the civil and political liberties guaranteed by the US Constitution and by international law. It also underpins democratic life. My written testimony explores the deep tension between AI and our capacity for democratic self-governance and some important and powerful lessons from history for resolving it. The power of AI is one we must govern. In modern democracies, free peoples may not justifiably be subjected to social and political powers which determine their basic liberties and opportunities, but over which they have no say, which they cannot see and freely endorse, and which are in no way constrained by or answerable to them.
Many of the greatest risks of AI technologies have arrived before the promised social benefits, which prove harder to deliver at scale. Yet the gap between AI’s social power and our democratic will to govern it remains vast. As a result, public attitudes toward AI are souring. This is a grave warning for those of us who want AI technologies to mature and succeed for human benefit. GMOs and nuclear power also suffered public backlash in ways that greatly limited their beneficial use and advancement, and AI may become a similar target. Yet we do know how to govern AI technologies, and responsible AI researchers have given us plenty of tools to get started. The US has a long and proud history of regulatory ambition in making powerful and risky technologies safer, more trustworthy and more effective, all while fueling innovation and enabling wider adoption. It was done first in the 19th century with steamboat regulation, then automobiles, aviation, pharmaceuticals and medical devices, to name just a few.
This required the courage to give manufacturers, operators and users irresistible incentives to cooperate. It required the capacity to keep learning and innovating and adjusting our regulatory systems to accommodate technological change. It also required persistence of shared governance aims in the public interest across changes in political administration. This was all within our democratic capacity and still is, but the political will to use that capacity is now damaged for many reasons. The mischaracterization and misuse of AI technologies makes this problem worse by undermining our confidence in our own capabilities to reason and govern ourselves. This was predicted by early AI pioneers. In 1976, Joseph Weizenbaum lamented that intelligent automation was emerging just when humans had ceased to believe in, let alone to trust, our own autonomy. Norbert Wiener, who developed the first theories of machine learning and intelligent automation, warned in 1954 that for humans to surrender moral and political decision making to machines is to cast our responsibility to the winds and to find it coming back seated on the whirlwind.
Yet many of today’s powerful AI scientists and business leaders claim that the truly important decisions will soon be out of our hands. As just one example, OpenAI’s Sam Altman has suggested that we are merely the biological bootloader for a form of machine intelligence that will dwarf ours not just in computing power, but in wisdom and fairness. These careless and unscientific AI narratives are pressing on democratic cultures already riddled with stress fractures. If we don’t assert and wisely exercise our shared capacity for democratic governance of AI, it might be the last chance at democratic governance we get. Had AI arrived in a period of democratic health, none of its risks would be unmanageable. But we are in a weakened political condition and dangerously susceptible to manipulation by AI evangelists who now routinely ask, what if the future is about humans writing down the questions and machines coming up with the answers?
That future is an authoritarian’s paradise. The question upon which the future of democracy hangs, and with it our fundamental liberties and capacity to live together, is not, what will AI become and where is it taking us? That question is asked by someone who wants you to believe that you are already out of the driver’s seat. The real question is, what kind of future with AI will democracies choose to preserve and sustain with the power we still hold? One where human judgment and decisions matter, or one where they don’t. Thank you to the committee.
Sen. Gary Peters (D-MI):
Thank you, professor. You can see we have a lot going on today, so we’ve got people coming and going, but Senator Johnson has to leave shortly. Senator Johnson, a moment for your question or two. You’re recognized.
Sen. Ron Johnson (R-WI):
First of all, I appreciate this hearing. I really do thank the witnesses. I like the hearing title, the philosophy of AI, because I think it’s crucial. I’ve been interested in science fiction all my life, and now we’ve been holding these seminars or hearings here in the Senate trying to understand what this is, but I’ve also been reading some pretty interesting science fiction books. Science fiction writers are unbelievably prescient, and these things are researched pretty well, and so these things go off in different directions and some pretty troubling directions, which is I think what the chairman was talking about in his opening remarks, as well as what you’re talking about, Professor Vallor. You’re talking about our capacity to regulate this and our ability to do so.
President Eisenhower in his farewell address not only talked about the military industrial complex, he also warned us that government funding of science and research would lead to scientists more concerned about obtaining a government grant than really addressing pure scientific knowledge. It could lead to a scientific and technological elite that drove public policy. I think that’s the concern with AI. He was concerned about human beings that were technologically and scientifically elite. Now we’ve got computer capability that’s going to vastly outpace our ability in terms of volumes and speed of calculations, and it is highly concerning. I would argue just the latest pandemic, scientific research was certainly looking at how to take a virus with gain of function, make it more dangerous, and then come up with a countermeasure anticipating biological warfare. I would argue that obviously got out of hand. We don’t know the exact origin, but I guess I’m less convinced that we’re going to really be able to control this and that we’ve got the governing capacity to do so. It’s hard to really put a question in on this, but this is an incredibly important issue and question, and I just really don’t know whether this dysfunctional place is going to come up with the right answers. Think of the Hippocratic oath: first, do no harm. Again, I’m not a computer scientist. I can’t even begin to grapple with how they create these algorithms, and we’ve got a few smart people that know this that are warning us about the possibility of AI destroying this country, destroying humanity. I guess just speak to that.
Shannon Vallor:
Happy to. Thank you very much. So I think the important point in what you’re saying is that AI today is an accelerant of many of the dynamics that are currently present in our society, in our economy, and among those, for example, is rising economic inequality and declining social and economic mobility, which has been an issue now in this country for decades. And one of the greatest worries about AI is that it will accelerate those trends unless we actively govern AI in ways that ensure that its benefits are realized by everyone who has a right to have the infrastructure that AI will build serve them. So I’ll just say that I think the fact that we have done this in the past with other technologies that were at the time equally unprecedented, equally powerful, equally challenging to regulate actually, if we have the political will, leaves us in a better place than we have ever been to govern a complex technology like AI, because we have 200 years of experience in regulatory innovation in adjusting the incentives for powerful actors. It has been effective before. That is why airplanes now are safer than driving. That would not have happened if we had stepped back and let airlines operate without any kind of regulatory oversight, accountability or constraint.
Sen. Ron Johnson (R-WI):
But regulating transportation devices is completely different, and I would even say nuclear power versus nuclear weapons is completely different than this, which we can’t even begin to grapple with. Once this is unleashed and it’s starting to learn and maybe becoming self-aware, what is that going to actually result in? Professor Acemoglu … pretty close. We’ve all talked about growing inequality. I would argue that we’ve put ourselves in a huge pickle over the last couple generations: we’ve told all of our children, you have to get a four-year degree. To me, the greatest threat AI represents in terms of loss of jobs is to those college-educated kids, whose jobs machines can learn a lot quicker. And you’re seeing what is happening with ChatGPT. Certainly we’re seeing a real shortage of workers in manufacturing and in the trades. We’re always going to need those folks. And unfortunately, certainly in Wisconsin, those employers are trying to hire people, but our kids aren’t doing it. We sent ’em all to college and they think that kind of work is beneath them. So they’re screaming for legal immigration reform, which I’m all for, but here’s an instance where we didn’t really regulate, but society en masse told all of our kids, you have to go to college, thereby implying that being in construction or being a tradesperson was somehow a lesser occupation, a second-class citizen. I think all work has value. Why don’t you speak to that in my remaining time?
Daron Acemoglu:
Well, that’s a very, very important point. I have also come to believe, exactly like you said, that we have undervalued a lot of important work, but we haven’t just undervalued it philosophically. We have also failed to create the right training environment and the right technologies for these workers. It is a tragedy in this country that we have a tremendous shortage of skilled craftspeople and electricians, but they’re not even paid that well.
Sen. Ron Johnson (R-WI):
They’re starting to get paid well.
Daron Acemoglu:
Sorry?
Sen. Ron Johnson (R-WI):
They’re starting to get paid well.
Daron Acemoglu:
Well, they’re going to get paid somewhat better because of scarcity, but they could be paid even more if we create the right environment for them to acquire more expertise and provide the tools for them. And the promise of AI, if we strip away all of the hype, is really in doing that, because what is generative AI good at? It’s taking a vast body of knowledge and some specific context and finding what’s the relevant bit of that vast body for that specific context. So that’s a great technology for training. That’s a great technology for boosting expertise, and that’s the way to actually use AI in a way that’s not inequality inducing. Now you’ve raised another very important point which many economists also make, which is, well, these ChatGPT-like technologies are going to go after college-educated jobs. I’m not actually sure. This is not the first technology that has been promised to automate white-collar work and therefore reduce inequality that way.
My work finds that many of these technologies end up actually going after the lower-skill jobs. You’re not going to automate managerial jobs or the jobs of people who have power, but you’re going to go after the sort of IT security-type jobs which are not very well paid anyway. And moreover, that’s not a very effective way of reducing inequality, because what happens to people who, say, used to do IT security or advertisement writing, et cetera? They go and compete for other white-collar jobs that were lower paid, and the burden, again, falls on lower-educated workers. So you are 100% right. Four-year college for everybody is not the solution, but skills for everybody, building expertise for everybody, is the right solution. Sorry,
Sen. Ron Johnson (R-WI):
I can’t stay around. This really is an important subject. Thank you.
Sen. Gary Peters (D-MI):
Thank you, Senator Johnson. I want to have a little bit of a dialogue, perhaps ask a broad question and then we’ll go through. And so it’s a little different than hearings where there’s questions and answers. If you want to chat among each other too, that would be very much appreciated, because you bring some different perspectives here. So I’m going to ask a very broad question. And first, Professor, I hope you can answer based on your historical research, your understanding of economics, and framing that, and all of you have some specific examples that would be helpful for us to have in the record. Professor Hu, I’d obviously like to hear your perspective based on your understanding of constitutional law, and Professor Vallor, I hope you can do it based on your study of a future worth wanting. What should humans want and how do we achieve that? The first kind of question, open question, is that there’s a popular line of thought out there touted by many influential people that unfettered technological innovation will solve all of our problems and it’s going to lead to increased wellbeing for all. Just let it go and we should all be very happy about the end result. So do each of you agree with that line of reasoning? And if not, why, and what should we be thinking about? We’ll start with you.
Daron Acemoglu:
I completely disagree. First of all, we as humans decide how to use technology. Technology is never a solution to problems. It could be a helper or it could be a distractor, exactly like you said in your opening remarks. Moreover, unfettered competition is not always the vehicle for finding the right direction of technology. There are so many different things we can do with technology, and profit incentives sometimes align with the social good and sometimes do not. I am certainly not arguing that government bureaucrats or central planning could be a rival to the market process for creating entrepreneurial energy or innovative energy. I don’t think there’s any evidence that anything better than the market process for innovation has been invented by humans, but that does not mean that the market process is going to get the direction of technology right, and it doesn’t mean that, without regulation, we’re going to use these technologies the right way. So that’s why, exactly like Professors Hu and Vallor also pointed out, we need the right regulatory framework, and the right regulatory framework has to be broadly construed. It’s not just that we create the technologies and then we put some regulations on how they can be used. I think we need to create the right ecosystem where social input, democratic input and government expertise are part of setting the agenda for innovation. In the past, the US government played a very important leadership role in many of the transformative technologies, from antibiotics, computers, sensors, aerospace and nanotechnology to, of course, the internet. I think setting the right priorities for redirecting technological change in a socially beneficial direction is very, very important, and that’s the way to make use of these technological innovations. But if I could add one other thing, which is a reaction to the question that Senator Johnson raised, which is, is it possible to regulate AI? I certainly believe it is possible to regulate AI, but I agree with Senator Johnson that it’s much harder than previous technologies.
But the reason for that is not just the nature of the technology; it is because we have become completely mesmerized with AI in the wrong way. So both Professors Hu and Vallor emphasized AI is a philosophy. You could say AI is also an ideology, and we’ve chosen one specific way of perceiving that AI ideology, as this general intelligence that’s going to solve our problems. It’s going to take away human agency, and it’s not only dangerous, it’s also really making it much harder for us both to find the right technologies to solve social problems and to regulate it. So I think we need a general change in perspective to help with the regulation of AI. Thank you.
Sen. Gary Peters (D-MI):
Thank you, Professor. Professor Hu, and we can go beyond this clock. You’re all professors. I know you usually like to expand on your answers, and you’re free to do that.
Margaret Hu:
Thank you for that very important question. I think this type of techno-utopianism is something that we really need to look at with an eye of skepticism, and especially in a constitutional democracy. We need to ask the question whether or not we have the proper means in order to achieve those ends. And with that type of invitation to see technology as something that can solve all problems and needs to be unfettered, I think that it poses the problem that the ends may not be justifying the means in a constitutional democracy. We must always consider the means. And I think that especially when we are faced with very compelling ends that are being presented before us, that AI can resolve pressing issues in national security or in health, for example, then it seems even more compelling. But I think that this really also opens the door to the conversation of whether or not, when we’re thinking about AI regulation, we really need to think about an ex ante approach and not just an ex post approach. The law oftentimes is highly reactionary. We look at the harms and then we try to find some type of structure to deal with those harms. But with AI, I think that this is now a moment for us to ask what type of laws and regulations and rules do we need in order to anticipate the harms before they occur and address them.
Sen. Gary Peters (D-MI):
Thank you, professor. Professor Vallor.
Shannon Vallor:
Thank you for this important question. I’ll echo some things that my fellow witnesses have said. First of all, technology is a tool. We solve problems often with the help of technology, but technology doesn’t solve problems. And when we start expecting technology to solve our problems for us without human wisdom and responsibility guiding it, our problems actually tend to get worse. And that’s what a lot of people are seeing with AI, that we have something that’s not being used wisely and responsibly as a tool to solve problems, but something that we’re increasingly expecting to solve our problems for us. And to Professor Acemoglu’s point, that is really undermining some of the confidence and ambition that we need in order to govern AI responsibly and wisely. In response to Senator Johnson’s earlier question, we certainly can’t cut and paste, from aviation or any other sector, a regulatory model that is going to work for AI, but we don’t need to. We can do what we’ve done every time before, which is innovate in the sphere of governance and adjust different incentives and powers within the AI ecosystem until we have the results that we want.
But this brings me to my second point. You talked about the idea of unfettered technological innovation and this ideology that that kind of unfettered innovation leads us to human wellbeing for all. But notice that we always hear this promise now made with the word innovation, almost never the word progress. And there’s a reason for that. Technology, not just the machines we build but the techniques and systems that we create, has for all of history been an essential driver of human progress. But that means meaningful, measurable improvements in things like life expectancy, infant mortality, sanitation, literacy, political equity, justice and participation, economic opportunity and mobility, and protections of fundamental rights and liberties. Today, there’s more advanced technology in the United States than anywhere else, but we’ve actually started seeing measurable declines in many of those metrics that I just mentioned. So what does that tell us about the connection between technology and progress?
It suggests that it’s breaking down, because we’ve substituted the concept of innovation, where we don’t need to prove that a new technology actually meets a human need, only that we can invent a market for it, often by changing our social infrastructure so that we can’t opt out of it. We need to go back to the heart of technology, which is the ambition to improve the human condition. And you asked about this work that I’ve done on building a future worth wanting. The Spanish philosopher José Ortega y Gasset said in 1939 that technology is, strictly speaking, not the beginning of things. He says it will, within certain limits of course, succeed in realizing the human project, but it does not draw up that project. The final aims it has to pursue come from elsewhere. Those aims come from us. We have to identify what we want and need technology to help us achieve. AI can be a powerful tool in helping us do that, but not if we treat innovation as an end in itself.
Sen. Gary Peters (D-MI):
Thank you. Thank you, Professor. Senator Hassan, you’re recognized for your questions.
Sen. Maggie Hassan (D-NH):
Well, thank you very much, Chair Peters, and I want to thank you and the ranking member for holding this hearing, and thank you to the witnesses for being here. We really, really appreciate it. Professor Vallor, I want to start with a question to you. Your testimony discusses the possibility that bad actors can use AI in ways that threaten national security, such as in the bioengineering and nuclear fields. What would you recommend Congress do to minimize national security risks posed by the misuse of AI?
Shannon Vallor:
Well, I think one point is that we have to be realistic and recognize that AI is not a technology that we will always and everywhere be able to keep out of the hands of bad actors. So one of the important things to recognize is that this is part of the risk profile of AI that needs to be managed, in the same way that we manage many other inherently risky technologies that can also be abused and exploited. One of the most important things is to identify what are the powerful incentives that bad actors have to abuse this technology, and where can we remove those incentives or increase the costs for bad actors of using AI in harmful ways. So for example, the use of AI to produce disinformation is a worry for a lot of researchers, but actually there are cheaper ways to produce disinformation that a lot of bad actors have been relying on.
So it’s not clear, for example, that AI will be the most attractive path for people who want to do harm through that pathway. So I think we, from a national security perspective, obviously need to have close monitoring of AI developments. And we need, and this is something that we need in the commercial ecosystem as well, forms of early warning systems, as I would say, where we see incidents being reported back to us that we can then chase back, and many platform companies can be incentivized to do that kind of incident reporting, so that if we see signs of bad actors exploiting their tools, we have some advance warning and ability to act.
Sen. Maggie Hassan (D-NH):
Thank you very much. Professor Acemoglu, as we discuss AI and how it can magnify threats to democracy, I’m particularly concerned about Chinese AI tools that are used for surveillance and censorship and how these tools may undermine democracy and freedom around the world. So based on your research, how are Chinese surveillance and censorship tools spreading throughout the world, and what is the effect of these tools on democracy and free speech?
Daron Acemoglu:
Thank you, Senator Hassan, and you are absolutely right to be concerned. China is not at the frontier of most AI areas, but facial recognition, censorship and other control tools have received the most investment in China. And that is one area in which China is on a par with the United States and other nations that are leading in AI knowledge. And those tools are not just being developed intensively in China, but they’re also being used very much, both at the local level and the national level, for controlling the population. And there is evidence suggesting that they’re not completely ineffective. In fact, one of the things that’s quite surprising in China is that the middle class has multiplied and there are a lot of aspirations, but those aspirations are not reflecting themselves in the political domain. And a lot of that is because of this very intense use of data collection and control.
And you are also absolutely right that those technologies are not just staying in China. China is actively exporting them to many other nations. The Huawei company alone has exported surveillance technologies to more than 60 other countries, most of them non-democratic, and those countries are also using them for surveillance. This is part of the reason why AI leadership coming from the United States is so important, because the US has the resources, scientific resources and corporate resources, to set the direction of research, and it can choose a very different one from China. And if the US makes those choices, other countries will follow, because the advances in the United States are going to provide profit opportunities for companies. And this is part of the reason why setting the right priorities, with government support but also with shifting priorities in the corporate and the tech world, is so important. Thank you.
Sen. Maggie Hassan (D-NH):
Thank you. Another question for you. In today’s political climate, extremism can sometimes boil over into acts of violence. Just last week, the committee heard from FBI director Christopher Wray, that the most persistent terrorist threats to the United States are small groups or lone actors who sometimes commit acts of violence after watching or reading online content that glorifies extreme or hateful views. Professor, what lessons can we learn from history about how major technology advancements can contribute to a climate of extremism? And what recommendations do you have for Congress to mitigate how AI systems may contribute to extremism in the United States?
Daron Acemoglu:
Thank you for this important question as well. I think it is inevitable that digital tools are going to be used for spreading misinformation and disinformation. It cannot be stopped, but then again, the printing press was used for the same thing. The radio was used for the same thing, and lots of other vehicles were available to actors for fomenting extremism. The issue is that AI and digital platforms in general increase the capabilities for bad actors to use these tools, and this is an obvious area for regulation. But more importantly, I think we have to ask questions about how the business models of the leading tech companies are playing out in this domain. Part of the reason for the phenomena that you are pointing out is that many of these digital platforms are actually not just displaying misinformation, but they’re actively promoting it. I think displaying misinformation is very difficult to solve, but promoting is a choice, and it’s a choice that they make because of their business model, which is based on monetizing information via digital ads.
And this is something that provides a lot of alternative directions for us. It is possible to use AI technologies in a way that’s much more reliable, in a way that does not create the most pernicious echo chambers, in a way that does not promote misinformation and disinformation. I think three types of policies are particularly important. Government regulation of where extremism is taking place, and going after it, is very important. So the government has to invest more in tracking where this is happening, and I think your committee is at the forefront of this. Second, I think digital ad-based business models are creating a lot of negative social effects. My proposal has been for a while that we should consider digital ad taxes, meaning taxes on advertisement that uses personalized data collected from digital platforms. And I think when we do that, we’re both going to discourage to some extent the most pernicious uses of these digital ads.
But secondly, it would open up the market for alternative models. The marriage of data collection and venture capital and other kinds of funding has created a business environment in the tech world where the most successful companies are those that try to collect as much data as possible and try to get as much market share as possible. For sometimes a decade or more they don’t even make money, but they can get funding because this is the way of the future as viewed by venture capitalists. But that also means that alternative business models cannot enter, because the market is being captured by these things. So a meaningful digital ad tax would actually be a pro-competitive tool. And then the final policy is data markets. Right now, a lot of this is also completely entangled with digital platforms being able to take data as they wish. So I think we need to have better regulation about who has rights to data, and also perhaps start building legislation to create data markets in which, for example, creators of data, such as artists or writers, have collective data ownership rights. And this way there will be a fairer division of the gains from digital technologies and AI, but also it could encourage better use of data and better ways of developing new monetization models in the digital age. Thank you.
Sen. Maggie Hassan (D-NH):
Thank you very much. And thank you, Mr. Chair, for your indulgence.
Sen. Gary Peters (D-MI):
Thank you, Senator Hassan. Senator Butler, you're recognized for your questions.
Sen. Laphonza Butler (D-CA):
Thank you so much, Chair and colleagues, for helping us have a deeper discussion about AI. This has been a topic that all of us have been talking about, it feels like, for a good deal of my short time here. And so I appreciate all of you for your work and your leadership on the topic. Dr. Hu, I think I'll start with you, if that's okay. You have been doing an incredible amount of academic examination in this area, and I understand that AI could become a critical asset to stakeholders in the criminal justice system. However, we've already begun to see cutting-edge artificial intelligence-based technology like facial recognition systems drive wrongful arrests of innocent people. The reality is that this technology is already widening preexisting inequities by empowering systems that have long histories of racist and anti-activist surveillance. Here's my question. I'm curious to hear your thoughts on how we can best navigate this tension, because without action, we know that communities of color will continue to disproportionately face the harmful consequences of this technology. And so I'd love to hear your thoughts on how we can best respond, acknowledging that this is a technology that is going to exist and that we have these sorts of built-in inequities in our current system.
Margaret Hu:
Yes, thank you so much for that important question. I think this is one of the critical inquiries we face when we're talking about AI: in the way in which it absorbs vast oceans of data, it can also absorb historically problematic information and then translate and interpret it in ways that are not consistent with our constitutional values or principles or civil rights. And this is particularly troubling in the ways these technologies are being enlisted in criminal justice and criminal procedure, because of our deep commitments to fairness in criminal justice and the ways in which we have those protections embedded in the Fourth, Fifth, and Sixth Amendments, for example, of the Bill of Rights. We are now faced with these evolutions in AI, and in algorithmic decision-making in particular, in ways that make it hard to preserve those fundamental constitutional rights. And I think this is an opportunity for us to think through exactly what types of new jurisprudential methods and interpretations we need in order to, for example, expand our interpretation of the Fourth Amendment in a way that encompasses these challenges, so that we can stay true to our first principles of protection.
Sen. Laphonza Butler (D-CA):
Thank you so much for that. Dr. Vallor, if I could turn to you quickly: I think the three practical recommendations in your written testimony to the committee are very compelling. I was struck by the idea that, just like the Dutch childcare benefits scandal, it's inevitable that we'll get some of this wrong even despite our best efforts. Can you talk a little bit about why you think it's so important to create new systems of liability, contestability, and redress for impacted groups, which, adjacent to my first question, often include the most vulnerable communities?
Shannon Vallor:
Absolutely. Thank you for that important question. I think we have seen plenty of evidence that if we do nothing, the use of AI technologies will continue to disproportionately harm the most vulnerable and marginalized communities here in this country and also in other countries. As you noted, this dynamic has been seen in multiple places in the world. A researcher in our field, Abeba Birhane, has described these technologies as conservative, not in the political sense, but in the sense that they take patterns of the past and literally conserve them, pushing them into the present and the future. So they make it harder for us to overcome some of the patterns of the past that we are rightly committed to addressing.
So we have to direct AI, then, as a tool for identifying harmful patterns and harmful biases and mitigating them; AI can be used as a tool for that, and has been in many cases. So it comes down to who absorbs the risks that new technologies inevitably introduce. No technology can be completely safe or risk-free, but it's about who absorbs those risks and who reaps the benefits. When you allow large companies and wealthy investors to reap the benefits of innovation in ways that push all the risk and cost of that process onto society, and particularly onto the most vulnerable members of society, as we're seeing today, you produce an engine of accelerating inequality and accelerating injustice. So what Congress needs to do is to ensure that those who stand to profit the most from innovation are asked to take on most of those costs and risks. We've done this before in areas like environmental regulation with the polluter-pays principle. When it's implemented correctly, it actually incentivizes for-profit companies to build safety and responsibility into their operations, so that instead of spending money to clean up pollution, they can spend money to make their operations cleaner and safer in the first place. I would love to see that dynamic pursued in AI regulation as well, where we think about how we can incentivize companies to build more responsibly in the first place. And I think we can obviously begin where we already have some power, and that's something that has come out of bills in this committee: addressing the uses of AI in the public sector, the uses by federal agencies.
You also see in the recently released executive order many moves to empower and instruct federal agencies to begin to take action, so that we can start by making sure that government uses of AI are appropriately governed, audited, and monitored, and that the powers the government has to use AI are used to increase opportunity, equity, and justice in society rather than decrease them, which can happen even when we don't intend it, if we are not actually implementing many of the measures that I and the other panelists here have described in the regulatory environment.
Sen. Laphonza Butler (D-CA):
Thank you. Thank you, Mr. Chair.
Sen. Gary Peters (D-MI):
Thank you, Senator Butler. Senator Hawley, you’re recognized for your questions.
Sen. Josh Hawley (R-MO):
Thank you very much, Mr. Chairman. Thanks to the witnesses for being here. I want to start my time with a piece of oversight business, if I could. Last week, when the Secretary of Homeland Security was here, Secretary Mayorkas, I asked him about a whistleblower claim, a whistleblower who had come forward to my office and alleged that as many as 600 special agents from Homeland Security Investigations had been removed from felony investigations, including particularly child exploitation investigations, and sent to the southern border to do things like make sandwiches for illegal immigrants. That's a quote from the whistleblower, not from me. Here's what she said: we're being told to shut down investigations to go hand out sandwiches and escort migrants to the shower and sit with them while they're in the hospital and those types of tasks. Now, Secretary Mayorkas did not deny this.
He did say that, well, they may be working on fentanyl cases while they are at the border. After that testimony, multiple additional whistleblowers came forward to my office from across the country, different whistleblowers unrelated to each other from different offices, and directly contradicted Secretary Mayorkas's testimony. One whistleblower said Secretary Mayorkas was, and I'm going to quote him now, absolutely lying, and that agents were not in fact being reassigned to investigate fentanyl cases. Another whistleblower claimed that he was reassigned to the border to, in his words, babysit illegal immigrants. A fourth whistleblower confirmed that special agents had been pulled off child exploitation investigations. And all of these whistleblowers provided documentation about being asked to drop felony investigations and move to the southern border to conduct essentially ministerial tasks along the lines the first whistleblower alleged. So, Mr. Chairman, of course, I don't personally know whether this is accurate or not.
I know now that we have multiple whistleblowers who are all alleging the same thing, and these whistleblowers have also pointed out to me that there may be violations of the law. In fact, the whistleblowers alleged that these practices violate 31 U.S.C. 1301, that they violate Office of Management and Budget Circular A-76, and that they violate internal ICE travel policies. So what I have done, Mr. Chairman, per my normal practice and the practice I think all of us follow on this committee, is collect this information and write a letter to the Inspector General of DHS asking his office to investigate these claims, which I'm sharing with the committee today, and I've asked him to report back to me and to the committee so that we can see what he says. I'd like to submit this for the record if I could, Mr. Chairman. I want to thank you for your work always with whistleblowers and for those who come forward to my office and other offices. And so I'm putting this on the record, and we'll see what he says, and I hope that he'll look into this and get back to us so we can evaluate these claims. Objection. Thank you.
Now, turning, if I could, to you, Professor Acemoglu, let me ask you a little bit about AI in the recruiting context, the recruiting and hiring context. My understanding is that increasingly companies are using AI recruiting tools to, as they would say, enhance efficiency in the hiring process. This is especially true among large, established companies. My concern is this: AI's application to recruitment is often controversial because hiring is an inherently subjective process. And we were just discussing, in fact, some of the issues when you use AI to make what we might call people decisions and some of the biases that AI tends to scoop up and replicate. One example of this is Amazon in 2018, where it was reported that AI software used in the hiring process systematically discriminated against women. My question to you is this: where would you draw the line on AI decision-making in hiring practices? What should we be aware of or concerned about there?
Daron Acemoglu:
Excellent question. Thank you very much, Senator Hawley. I think that's a very difficult question. I am very concerned about all uses of AI that take away human judgment, especially when human judgment is very important. And this becomes particularly concerning when AI practices then legitimize things that were not previously accepted. Let me give you an example. Imagine that we have an AI system that puts a lot of weight on somebody having completed a four-year college for an essentially semi-manual task. It's quite easy to see how that might come about: four-year college workers are doing much better in the labor market, but for many semi-manual tasks, those college skills are not that important. But if the AI system starts putting that emphasis, it's going to start turning a lot of good candidates down. The more it does that, the more it becomes accepted that you should really have a four-year college degree to become an electrician.
And then our social norms and our expectations completely shift, even though the original decision to turn down people who had just a high school degree was not right. So this is not a hypothetical situation, because we are seeing a lot of similar cases happen when AI systems are engaged in decision-making, especially when people don't know how to evaluate them. There's a lot of evidence, for example, that doctors who get AI recommendations don't know how to evaluate them; they sometimes get confused about where the AI recommendation comes from, and they may put too much weight on things they shouldn't, because they don't understand the black-box nature of these systems. So I think human judgment, and the expert opinion of people who have accumulated expertise, is going to be very important. This is particularly true when we start using AI not just for recruitment but for lots of other human resource tasks, for example, promotion, or deciding who's doing well, or how to assign workers to different shifts. I think we're going to do all of these much better if we do something broadly consistent with what I tried to emphasize in my written testimony: choose a pro-human direction of AI, which means that we try to choose the trajectory of AI technologies in a way that empowers humans, and then we train the decision makers so that they have the right expertise to work with AI. And that includes valuing their own judgment, not becoming slaves to the AI system. Thank you for that question.
Sen. Josh Hawley (R-MO):
No, very good. And your answer touches on something that I think is so important that we can't lose sight of: who has control of the AI and who the AI is benefiting. And I've said over and over, I'm sure that these giant corporations who are developing AI will make lots of money on it; I have no doubt about that. Will it be good for the people that they employ, and in particular, will it be good for working people in this country? I'm less certain about that. And I see my friend Senator Blumenthal across the dais. In a hearing that we had recently, I still remember the testimony of a large corporate executive who just remarked offhand that it was wonderful that AI was doing things like replacing people who work at fast food restaurants. And he said, and I think he just expected everyone to agree, that of course those are not creative tasks, so it's good we can do without them. I thought, wait a minute, wait a minute. It's easy for you to say as you sit in your position in the C-suite, maybe not so much for the person for whom that's a first job, a foothold in the labor market from which she can advance to something else. So I think who controls the AI, and what the biases are in it, in the way that you point out, is very, very important. Thank you, Mr. Chairman.
Sen. Gary Peters (D-MI):
Thank you, Senator Hawley. Senator Blumenthal, you're recognized for your questions.
Sen. Richard Blumenthal (D-CT):
Thank you. And I'll just expand on the line of questioning that Senator Hawley was pursuing, because he and I actually have been having hearings on the Judiciary Subcommittee on Privacy, Technology, and the Law, and the labor aspects have been perhaps less important for us than preventing the deepfakes and impersonations. We've developed a framework for legislation including a licensing regime with an oversight entity and testing before products are released, so as to prevent the kind of deepfakes and impersonations that so scare people while at the same time preserving the promise of AI, which I think all of you, and we too, agree is very, very important. But the impact on the labor market in terms of aggravating inequality and eliminating tasks without creating new tasks is a very important point that you make, Professor. And I know that you cite in footnote five a lot of studies that have been done on electricians and plumbers. Could you make it real for us? How can AI enhance the work done by electricians and plumbers? And then also, can you give us an example of how AI could create new tasks, so that it can be pro-worker, as you say, pro-citizen, pro-human?
Daron Acemoglu:
Thank you very much, Senator Blumenthal. So let me actually start with a different example than the electrician; I'll come to the electrician. Educators, teachers. A lot of the emphasis today is on using AI in classrooms for automated grading, automated teaching, and also on having large language models take the place of experts in informing students. But actually, one of the problems in many US schools is that a lot of students are falling behind, and there is quite a bit of evidence in the education science literature showing that personalized teaching is tremendously useful for these students. If we had the resources to have a teacher work with one or two students, identifying their weaknesses and how the material could be presented to them so that they could understand it, they could have a chance to catch up. But they don't have those resources; they don't have those opportunities.
So those students fall behind, and that's part of our educational crisis right now. So one quite feasible direction of AI, and it's actually well within the technological frontier, it doesn't even require any advances, is to use existing AI tools to identify, in real time, which students are having trouble with which part of the curriculum. You can do that as the class progresses, and then provide suggestions to teachers. We would need more and better-trained teachers to do that. But you provide suggestions to these teachers to say, let's take these two or three students and present the material differently, spend a little bit more time, give some remedial help. That's the kind of recommendation an AI system can easily make. And you can see here that the tasks the teachers will start performing will be new tasks. Current educators teach to a class of 30 people or something.
They don't have this aspect of identifying and working one-on-one in a systematic way. So those would be examples of new tasks. Having given that example for the educators, I come to the electricians; it's exactly the same issue. Electricians are going to face more and more problems as, for example, the grid is electrified or new electrical machinery comes in. There are going to be more and more rare problems, more and more troubleshooting problems. Right now, even for the very regular issues that I have in my house, an electrician will come, a semi-skilled electrician, not the very, very best, and they look at the problem and they can't solve it, and they have to go and do some research, and then another expert comes and tries to deal with these issues. So one way that you would make them much better, and this would help a lot of semi-skilled craftspeople in general, is that real-time AI tools would draw from the expertise of many more electricians who faced similar problems and would make recommendations, so they can do on-the-spot troubleshooting and problem solving and deal with the more complex and new tasks that are going to emerge with the changing environment.
And the benefit of that is not just that it's going to help with our shortage of electricians and increase the earning capacity of electricians and the economy; it's actually going to be an equalizing tool. Because who's going to benefit most from this? It's not going to be the very best electrician, because he or she would have been able to solve these problems. It's going to be those with middle expertise, who are good enough to do certain tasks but who need help, training, and additional recommendations to deal with the more complex problems. So that's where the promise lies. Thank you for your question.
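To make the classroom flag-and-recommend loop that Acemoglu describes concrete, the following is a minimal sketch only; it is not anything proposed at the hearing, and the per-topic quiz scores, thresholds, and function names are hypothetical.

```python
from collections import defaultdict

def flag_struggling_students(quiz_scores, threshold=0.6, min_attempts=2):
    """quiz_scores maps (student, topic) -> list of scores in [0, 1]."""
    flagged = defaultdict(list)
    for (student, topic), scores in quiz_scores.items():
        if len(scores) < min_attempts:
            continue  # not enough signal on this topic yet
        average = sum(scores) / len(scores)
        if average < threshold:
            flagged[topic].append((student, round(average, 2)))
    return dict(flagged)

def teacher_suggestions(flags, group_size=3):
    """Turn flagged (student, score) pairs into small-group suggestions for the teacher."""
    suggestions = []
    for topic, students in flags.items():
        students.sort(key=lambda pair: pair[1])  # weakest first
        for i in range(0, len(students), group_size):
            group = [name for name, _ in students[i:i + group_size]]
            suggestions.append(
                f"Re-teach '{topic}' with a different approach for: {', '.join(group)}"
            )
    return suggestions

if __name__ == "__main__":
    # Invented example data for illustration only.
    scores = {
        ("Ana", "fractions"): [0.4, 0.5],
        ("Ben", "fractions"): [0.9, 0.8],
        ("Cruz", "fractions"): [0.3, 0.55],
        ("Ana", "decimals"): [0.8, 0.9],
    }
    for suggestion in teacher_suggestions(flag_struggling_students(scores)):
        print(suggestion)
```

The point of the sketch is the division of labor Acemoglu emphasizes: the system only surfaces suggestions, while the teacher keeps the judgment about how to re-teach the material.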
Sen. Richard Blumenthal (D-CT):
Thank you. That’s a really helpful answer. And will that in turn address the phenomenon of growing inequality in our system?
Daron Acemoglu:
I think it has a real chance of being an important contributing factor. It's not going to be sufficient by itself, but one of the major reasons why we have so much inequality is that we have not helped low-education workers. We have replaced their jobs, exactly as Senator Hawley pointed out, and we haven't given them new opportunities and we haven't given them new tools. Those workers can become much more productive if we give them better technologies and better training opportunities. Again, AI has that capacity. AI, especially generative AI, and forget the hype, I really think the hype is misleading, has some very impressive aspects, and the most impressive one is that you can load in a tremendous amount of information, give it some clues about a context, and it finds from that vast amount of information which bits are relevant for that context. If we use that, we can really deploy it to make more helpful technologies for low-education workers, for skilled craftsmen, for semi-skilled craftspeople, for service workers, for healthcare workers, for educators.
Sen. Richard Blumenthal (D-CT):
That is a very exciting prospect. At the same time, with AI, I guess there's good AI and less effective AI, and I read an article recently about hallucination that said there's a variation from 3% hallucination to 27% hallucination depending on the system. So I hope the plumber or electrician gets the more accurate version rather than the 27% one, because they'll be fired.
Daron Acemoglu:
That's actually a very, very important point. Right now you couldn't use ChatGPT or similar models to do that, exactly because they're not providing reliable information. And this goes back to Senator Hawley's comment: these technologies are developed in a way that's good for the bottom line of the large companies, but not good for the workers or for the people. And that's actually very easy to deal with. If instead of training these models on the vast amount of unreliable information on Reddit and lots of other places you give them reliable information, so that the training set of these models is much more reliable, then the information they will give is much more reliable. So why are we training these models on the entire internet and the speech patterns that you see on Twitter, Facebook, Reddit, and so on? Because the agenda of many of these companies was to create the hype that these are general intelligence-like technologies, and to do that, they wanted to mimic the way that humans talk, and the amount of information was not important; it was just important to get human-like speech out of this. So there are different agendas: one is good for the corporations, the other one is going to be good for the workers, and I think this is where government leadership is going to be important.
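To give a concrete picture of the data-curation idea in this exchange, here is a minimal, hypothetical sketch of filtering a training corpus by source reliability before any training happens; the source labels and allowlist are invented for illustration and are not drawn from the testimony.

```python
# Hypothetical illustration of curating a training corpus by source reliability,
# in the spirit of the point that more reliable training data tends to yield
# more reliable model outputs. Source names are invented.

RELIABLE_SOURCES = {"peer_reviewed_journal", "technical_manual", "textbook"}

def filter_corpus(documents, reliable_sources=RELIABLE_SOURCES):
    """documents: iterable of dicts with 'text' and 'source' keys.
    Keeps only documents whose source is on the (hypothetical) reliable list."""
    return [doc for doc in documents if doc.get("source") in reliable_sources]

if __name__ == "__main__":
    corpus = [
        {"text": "Grounding and bonding requirements for residential panels...",
         "source": "technical_manual"},
        {"text": "Hot take thread about electricians...",
         "source": "social_media"},
    ]
    kept = filter_corpus(corpus)
    print(f"Kept {len(kept)} of {len(corpus)} documents for training")
```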
Sen. Richard Blumenthal (D-CT):
Thank you very much. Fascinating topic, and my time has expired, but there's a lot more to discuss, and I appreciate all your work. Thanks, Mr. Chairman.
Sen. Gary Peters (D-MI):
Thank you, Senator Blumenthal. Senator Ossoff, you're recognized for your questions.
Sen. Jon Ossoff (D-GA):
Thank you, Mr. Chairman, and thank you to our panelists for your testimony, your expertise, and your work. Obviously there are and will be intersections between privacy law and privacy policy and any regulatory regime established that touches on or manages the development and deployment of artificial intelligence. Dr. Acemoglu, you mentioned in your statement suggestions about a property rights model for data. Professor Hu, you cited some of the work of Jack Balkin in your opening statement, who, as I understand it, has suggested a fiduciary model for data whereby custodians and recipients of data from persons would have inherent duties of care and confidentiality to the individuals whose data they've collected and which they're storing or using.
When I think about the failure of Congress to make effective privacy law, one of the things we see is an effort to imitate the EU's regime. Another thing we see is a sort of whack-a-mole regulatory approach that looks at current problems faced by consumers and individuals and tries to isolate and target them with certain specific regulatory prohibitions, but doesn't seem to propose any kind of more basic law upon which fundamental obligations could be established, which judges then over time could evolve into a more comprehensive regime protecting the privacy of individuals. So, Professor Hu, if you could just opine for a moment on your thoughts on the notion of a fiduciary model as a means for establishing some fundamental obligations for software companies, internet platforms, and others across the private sector and the public sector who will receive data from private individuals.
Margaret Hu:
Yes, thank you so much for that excellent question. I do think that we are seeing a renegotiation of the social contract, and this is where Jack Balkin's theory of the fiduciary model for privacy and of information fiduciaries comes in, in which you have, especially around First Amendment rights, an acknowledgement of a triangle of negotiation of constitutional First Amendment rights among the tech companies, the citizen, and the government. And that renegotiation of rights and obligations is representative of our modern digital economy. But I want to go back to your question about whether we need something more fundamental, and I think this is where we're opening the dialogue to potentially needing a constitutional amendment that enshrines privacy as a fundamental right, and, using that as a launching pad, to empower Congress through some type of constitutional amendment to enact legislation to try to ensure that fundamental privacy rights are extended to all citizens. Then I think we do not need to see it as so much of a negotiation in a triangle with the companies. As we've heard from the other witnesses, we always have to ask who is benefiting, and how our data, for example, is being monetized in a way that is adverse to the best interests of the citizenry.
Sen. Jon Ossoff (D-GA):
Thank you, Professor Hu. It's an intriguing proposition. Of course, procedurally, such an amendment would require a tremendous amount of effort. That's not to say it may not be worth the effort, but a statute is another option, although the record of Congress thus far in enacting any kind of meaningful privacy statute is one of failure. I think there is interest on both sides of the aisle in privacy law. Dr. Acemoglu, could you comment, please, on your reaction to this proposal of a fiduciary model for the protection of data, and how it contrasts with a property rights regime of the sort you suggested in your opening remarks?
Daron Acemoglu:
I think we just don't know which one of these different models is going to be most appropriate for the emerging data age. The fiduciary model has a lot of positives; so did the European Union's GDPR regulation. It was motivated by the right philosophy, but in the end we are seeing that it has backfired: it's not very effective, and it may have actually advantaged some of the large companies, because they're better able to bear the costs of complying with the regulation.
I think the general point that we should bear in mind is that data, and who controls data, is going to become more and more important, and it has become one of the major reasons why the tech sector has become more oligopolistic, because a few companies have a big advantage in controlling data. So privacy issues are very important, and as Professor Hu also mentioned, privacy is a right, but I think right now these issues completely intersect with who controls data, and that's the reason why I am tempted to favor models in which we try to systematize data markets. At the end of the day, if data is going to become the lifeblood of the new AI economy, it's not going to be okay to treat data as an afterthought to solve privacy issues. We really need to institute the right sort of regulations and legislation about what rights people have to different types of data they have created, and whether those rights are going to be exercised individually or collectively.
That's actually a very tricky new issue. The most natural thing for economists, and I think for policymakers, is to say, okay, we're going to create property rights in data, so you own your data. That may not turn out to be a very workable model, both because it would be very expensive for individuals to track whether their data is being used, and because there are also lots of market-driven reasons why individual data rights may not work. After all, if my data is about identifying cats, other people can provide that as well as I can, so that creates a race to the bottom. So you may need some sort of collective ownership of data.
Sen. Jon Ossoff (D-GA):
With my remaining time, Dr. Acemoglu: you've also talked a lot about centralization, and the development of these frontier models is very energy-intensive and technology-intensive. This is IP produced at great cost, and there are few entities in the world with access to the processing power to do it. Just comment, if you could, on the risks of centralization of ownership of such models, and what kind of policy remedies might be available to Congress, if they're necessary at all, in order to prevent some of the negative consequences of such centralization and market concentration.
Daron Acemoglu:
Again, this is an area we just don't know enough about, because there are some people who think open source is going to be a sufficiently powerful check on the power of the largest tech companies. On the other hand, there's a lot of doubt about whether open source is going to work. I think the most important issue is that, exactly as you have pointed out, there are two resources that are very centralized at the moment. One is compute power, which is becoming more and more expensive because there's a shortage of it at the moment. The second is data. Both of these are going to create a potentially much more monopolized system, which is not good for innovation, and it's not good for the direction of innovation, because then it's going to be just a few companies that set the agenda. I think antitrust tools are very effective ones, and here I'm talking as much about stopping mergers and acquisitions.
If you look over the last 10 years, the largest tech companies have acquired dozens of rivals, and often they actually sideline the technologies of those rivals because they just don't want the competition. The second thing is to create the data infrastructure that I was talking about. That's going to be a channel to create more competition. Then the final one that I think we should think about is whether there are reasons for the government to get involved in the allocation of compute power. If it becomes more and more scarce, and it is a critical resource, especially from a national security point of view, I think the government may need to worry about where that compute power is going. Thank you.
Sen. Gary Peters (D-MI):
Thank you, Senator Ossoff. I'm going to ask a question for all three of you. We'll start with Professor Vallor and work this way, just to change things up a little bit here.
Another really broad question. We have talked about a variety of issues here today at the committee, and there's a huge conversation going on across the world right now, but what do you think is missing from the AI conversation right now? And I want to be specific to governments and lawmakers, to those of us sitting up here who are thinking about how we deal with these issues. Is there something missing from the conversation that you really think we should be thinking about? Professor Vallor, I'll give you the first shot at that, and then we'll work down the dais.
Shannon Vallor:
Thanks for that question. I think one of the things that is not entirely missing, but is underemphasized in the current regulatory conversation, is the ability to see these systems as governable, as opposed to things that are being thrust upon us as if they're natural forces that have arrived. Every AI technology has been shaped by human decisions to serve particular ends and driven by particular incentives that our systems have set. I don't think we talk enough about the incentives that we have created for some of the harms that we're seeing perpetuated and accelerated across the AI ecosystem, both profit incentives and power incentives, and where those can be changed. I also think we still aren't talking enough about ensuring that when we use AI, we're using it to amplify human intelligence rather than letting it become a cheap replacement for it. AI tools, as has been mentioned, are over-hyped, not because they're not powerful, but because their powers are of a different sort than the people who market them want us to believe. These tools are not intelligent in the way that we are. They don't have the capacity for good judgment and common sense. They're very powerful pattern amplifiers, and that's a tool that we can use in our decisions, but many people are still talking about AI as if it's going to be able to make the hard decisions for us. These systems don't make decisions; they make calculations. Even if we automate a process by letting a computer determine the outcome, that is a human decision, and I don't think we are going to be served well if we forget how to make decisions well, or even lose sight of the fact that we are still the ones making the decisions no matter how much we're relying on the technology, because in that case we're making decisions in the dark, and that's a terrible strategy for human progress.
Sen. Gary Peters (D-MI):
Thank you, Professor. Professor Hu.
Margaret Hu:
Thank you so much, Chairman, for that question. I think part of what's missing from the discussion is whether or not a fundamental assumption is being challenged by AI, and that assumption is that the rule of law can take precedence over all other forms of power and that the law can govern effectively, especially if you have these tech companies and AI seeing themselves as co-equals, able to speak with the law as an equal. Then you might have difficulty, with AI apparently in some instances being presented as something that can now precede the law. And so if we are going to really address how the law will govern AI, I think we need to understand that that is the fundamental question under our constitutional democracy. Article One of the Constitution gave Congress the power to legislate, but how is AI trying to challenge that power?
Sen. Gary Peters (D-MI):
Thank you, professor.
Daron Acemoglu:
I think I've been emphasizing this for a while, but it's still, I believe, underappreciated that there are different directions in which we can develop AI tools, and that making the right choices requires understanding what these different directions are. I'm an economist, and as I responded to Senator Hawley, of course the profit motive and the benefits to corporations matter; those are very, very important. But I think we are underestimating how much the founding ideology or vision of AI has influenced the way that the industry has developed. That founding vision, as I have argued in a number of different contexts, is that there is a desire for, or a social benefit from, creating autonomous machine intelligence, meaning machines that are as intelligent as humans and are autonomous. And once we do that, a number of conclusions follow.
One is a lot of automation, because if machines are really intelligent and autonomous, then they should do a lot of the tasks that we do, since they can perform them as well as we can. Second, much less need for human judgment, because they're autonomous and intelligent. Third, much less emphasis on humans actually controlling them. This vision has become completely foundational to the tech industry, and a lot of the emphasis on general intelligence follows from it. And I think it's very difficult to change the direction of the tech industry with regulation alone, unless we also cultivate different types of priorities among tech leaders and the leading engineers and computer scientists. And that's why I have emphasized, not just in my work but as many important people have, and Professor Vallor also emphasized this, that, for example, Norbert Wiener and many other inspiring scientists as early as 1949 and the 1950s came up with different visions, but they have been completely overshadowed by the artificial general intelligence, or autonomous machine intelligence, vision. Putting that on the table, encouraging a broader perspective on AI, and articulating the idea that pro-human AI is both feasible and desirable is both missing and, I think, quite important for the future of this industry. Thank you.
Sen. Gary Peters (D-MI):
Thank you. Another question for all three of you. This time I'll start with Professor Hu, so we mix it up here, then Professor Vallor and Professor Acemoglu. This is a tough question, but given how complex this issue is and all of the issues that we've talked about, as lawmakers we have to distill things down to concrete actions that we need to take. And so this question is: if the United States government can do just one thing, one thing to increase the chances that AI is going to increase wellbeing for everyone, not just a few, but for everyone, what would that one thing be, given your areas of expertise? Professor Hu.
Margaret Hu:
I think the one thing that I would prioritize is an amendment to the Civil Rights Act of 1964, so that we incorporate, and then try to anticipate and address, the types of AI-driven civil rights concerns that we've seen over the last decade. And I think we can see, across the spectrum, the ways in which AI and automated systems and algorithmic decision-making can cut across discrimination in the criminal justice context, in housing, in mortgage financing, and in employment. So that would be the one thing that I would emphasize.
Sen. Gary Peters (D-MI):
Thank you, Professor Vallor.
Shannon Vallor:
I think I would emphasize examining the misaligned incentives that we've permitted in the AI ecosystem, particularly with the largest and most powerful players, and learning the lessons from the past where we have had success realigning the incentives of innovation with the public interest, so that we can create clear and compelling penalties for companies who innovate irresponsibly, for companies that get it wrong because they haven't put in the work to get it right, while at the same time perhaps capping the liabilities or reducing the risk for innovators who do invest in innovating safely and responsibly and then want to find new ways of using those tools to benefit humans. Because we often see that some of the good actors are hearing about the risks of AI systems, the ways they might fabricate falsehoods or amplify bias, and that can actually reduce innovation and narrow it to only those powerful actors who can afford to get it wrong. So I think if we adjust those incentives so that the best and most innovative actors in the ecosystem are rewarded for innovating responsibly, and the most powerful ones are held liable for producing harms at scale, then I think we can see a way forward that looks much more positive for AI.
Sen. Gary Peters (D-MI):
Thank you, professor Acemoglu.
Daron Acemoglu:
Thank you for this question. There is no silver bullet, but I think one of the first steps is to redress the fact that the current vision of AI is pushing us more and more towards automation, surveillance, and monitoring. And this is really an ecosystem. Senator Johnson pointed out the Eisenhower quote that government support for university scientists could have negative consequences because it makes scientists cater to the government's needs. Well, right now it's actually much worse when it comes to AI. All leading computer scientists and AI scientists in leading universities are funded by, and get generous support from, AI companies and the leading digital platforms. So it really creates an ecosystem, in academia as well as in the industry, where incentives are very much aligned towards pushing for bigger and bigger models, more and more of this machine intelligence vision, and trying to automate a lot of work.
So I think if we want to have a fighting chance for an alternative, the government may need to invest in a new federal agency tasked with doing the same things that the US government used to do, for example, with DARPA and other agencies playing a leadership role for new technologies, but in this instance with a more pro-worker, pro-citizen agenda. I think something along the lines of, for example, the National Institutes of Health, which has both expertise and funding for new research, could be very necessary for the field of AI, with an explicit aim of investing in the things that are falling by the wayside, the more pro-human, pro-worker processes and directions. Thank you.
Sen. Gary Peters (D-MI):
Thank you. Professor Vallor, you have written quite a bit about the connection between technology and human values. Would you share some concrete examples of this connection and, in particular, talk about, based on your research, how you see AI changing our basic societal values?
Shannon Vallor:
Sure. Thank you. So first of all, I think it's important to recognize, and you mentioned this in your opening remarks, that AI is not neutral. In fact, no technology is neutral. All technologies are mirrors of human values. Every technology that human beings have ever created has been a reflection of what humans at particular times and places thought was worth doing or enabling or building or trying. But technologies also change what we value. So if we think of the kinds of AI systems we are building today, trained on human-generated data that reflects the patterns of our own behaviors and past judgments, we're using AI much like a mirror, and we're increasingly looking to AI to tell us what we value, to reflect back our patterns and preferences, and to instruct us on the patterns and preferences that we and others hold. And this, to me, creates a very perverse relationship between technology and values, because our most fundamental human values, the things connected most deeply to our need for shared human flourishing, are not necessarily what is driving the tools we build. And this is to Professor Acemoglu's point: there is so much untapped opportunity to direct AI to address unmet needs in areas from health to infrastructure to the environment.
But that's only if the values connected to shared human flourishing are what drive those decisions. Instead, what's happening, and I mentioned this earlier in my testimony, is that we are looking at a mirror of ourselves in these systems, one that actually reflects very old patterns of historical valuation, very old prejudices, very old biases and priorities that do not in fact reflect the values of the human family as a whole. And so I think it's partly about being able to recognize what our values are without having to find them in the AI mirror, and in that way we can ensure that the technology continues to be shaped by the values that we hold most deeply.
Sen. Gary Peters (D-MI):
Thank you, professor. Professor Acemoglu, do you believe or would you argue that AI is either causing or going to cause increased dysfunction in government? And if so, how would we manage it? I think you’ve written on some of these areas.
Daron Acemoglu:
I don't think AI is causing increased dysfunction in the government yet, except that I think we are falling behind on the necessary regulation and on building the necessary AI expertise in the government. So it's wonderful to see this and several other Senate committees deal with the issues of AI, because I think lawmakers need to be at the forefront of it. But as we move forward and AI systems become more widely used, exactly as my fellow witnesses have pointed out, we need to introduce the right safeguards to make sure that individual rights, including privacy rights, but more importantly human and civil rights, are correctly recognized and protected. I don't see huge issues there in the United States at the moment, but there are a few local law enforcement agencies that have started using systems that are not very reliable for law enforcement, and that needs to be brought under control. And you can see from China and other countries how the emphasis on surveillance and monitoring is already having a tremendous effect. It is particularly important for democratic countries to set the right legislation to ensure that both companies and government agencies are not tempted to follow China in the use of AI in the next decade.
Shannon Vallor:
Can I just briefly add to that?
Sen. Gary Peters (D-MI):
Please, please.
Shannon Vallor:
Just pointing out that that dynamic is an excellent example of how AI can in fact warp our human values, because it can cause us to become increasingly resigned to control and efficiency as values that become more accepted and important than particular liberties and considerations of justice that are inscribed in our Constitution and in international law. So I think it's also important to remain anchored in the value commitments that are written in those documents for a reason, and ensure that we aren't letting the direction of a technology that is currently ungoverned undermine those very commitments.
Sen. Gary Peters (D-MI):
Thank you. Professor Hu, you’ve talked about changing the social contract. Do you want to talk about that in relation to AI?
Margaret Hu:
Yeah, absolutely. I think what we're seeing is really a quadrilateral situation with our social contract, where rights are being mediated and negotiated not just between the government and citizens, as it was when we first established the social contract, but across the government, citizens and civil society, the tech companies, and then AI as the fourth vertex. And so I think we need to think through whether that type of negotiation and mediation is consistent with our constitutional democracy at all in the first instance.
Sen. Gary Peters (D-MI):
Thank you. Senator Rosen, you are recognized for your questions.
Sen. Jacky Rosen (D-NV):
Thank you, Chair Peters. It's really an important hearing, and thank you all for being here today. And lo and behold, you set me up perfectly for my question. We're going to follow up about prioritizing values in AI, because I'm a former computer programmer and systems analyst, and so I understand how evolving technology can revolutionize how Americans work. But as you've already been talking about, in all technology there are traces of human values, whether in how it's used or in the math behind it. In many large language models, LLMs, human bias and human values are baked into the system in some form or fashion, and some LLMs, we know, perform better under pressure. For example, when a user tells a model that their job is at risk or people will be hurt if a certain decision is made, that's one thing. But those same human values can make large language models more fallible and easier to manipulate. So I'm going to go to Dr. Vallor, because in one recent study it was found to be easier to evade an AI system's safety mechanisms when the system thought the stakes were higher. So can you talk about what Congress, and you've touched on this with Senator Peters's question, should consider when balancing values like efficiency versus accuracy, and in what contexts more accuracy should be required from the model than efficiency, and vice versa?
Shannon Vallor:
That's a great question, and the answer is one that I think highlights the need to invest more in the kind of interdisciplinary expertise around these systems that's needed to make these kinds of decisions wisely. Because whether, for example, efficiency or accuracy matters more depends entirely on the sociotechnical context that you are designing and deploying the system in. If you are using an AI system to make high-stakes, irreversible decisions, where it's life or death and there's no undoing it if you get it wrong, then very clearly accuracy becomes a far more vital priority than efficiency. In a lower-stakes environment, where what you're trying to do is simply automate a process in a way that uses resources in the most efficient way so that you don't have a lot of waste, which is something that from an environmental standpoint is of great urgency, accuracy perhaps matters less than the efficiency with which the system can drive the operation. But one of the things that we haven't talked about, although I think it's in the background, is that AI is not just large language models. AI is not even just machine learning. AI is a host of many different kinds of technologies that are suited for different purposes and that work well in some environments and not in others. So I think what we really need to see more investment in is the kind of combined technical, social, political, and legal expertise such that people understand the capabilities of the systems and their effects at the same time. Right now, we have a lot of very smart technologists who understand very well the emerging capabilities of the systems they're building, but have a very limited view of the contexts in which they're being deployed and the likely consequences. On the other hand, you have a lot of social scientists, humanities researchers, and law scholars who understand those contexts deeply, but are often not given the opportunity, or haven't invested enough time themselves, to understand the technical capabilities of the systems they are proposing to regulate or govern. But at the University of Edinburgh and lots of other programs around the world, we're seeing more of an investment in the kind of interdisciplinary expertise that will be needed to govern these systems well, and perhaps this is something that Congress can accelerate as well.
Sen. Jacky Rosen (D-NV):
Well, it's not enough to have a brain; maybe you have to have a heart as well, and that's when you marry both together, to put it in the easiest of ways. It's much more complicated than that, for sure. I have a few minutes left, and I want to move on from this; we could talk about it all day. I really think this is the fear and the future of AI, and this is the fear that we have: will it have a heart? It'll be plenty smart, but we need more. But I want to move on to international cooperation, because that's also really important. So last week, multiple countries, including China, India, and Japan, signed their first international agreement regarding AI, committing to deploy and develop AI in a safe and responsible way. But this agreement, of course, is very broad and only a first step. So, Dr. Acemoglu, in the US we hold ideals like equality, democracy, and freedom, values that are not always shared by our foreign adversaries. That's why safe and responsible is in the eye of the beholder, perhaps. So how do we ensure that international standards uphold these key values and, again talking about values as in the last question, prioritize them when we set standards for allowable AI uses?
Daron Acemoglu:
That's a very, very important issue, and I don't think we can control what China will do directly, but we can form alliances with our allies and neutral countries to develop the right ethical standards. Both for this type of relationship with friendly countries and even for the direction innovation will take in China and Russia, I think US scientific leadership is really important. The same sort of concerns that people are voicing about AI right now were also voiced when it came to mitigating climate change. The worry was that we could try to fight climate change in Europe and the US, but China and India wouldn't. What we have seen over the last decade is that when the world, led by Western countries but with some others as well, invests in renewables, then renewables become attractive in China and India too. I think it's the same thing here: if we let China take the leadership, AI is going to go more and more in a direction that takes control and agency from humans, does not value equality, and emphasizes surveillance, monitoring, and censorship. But if we can take that leadership and push it in a direction that's much more pro-human, supportive of democracy, supportive of equality, I think it will even have beneficial effects on China.
Sen. Jacky Rosen (D-NV):
Mr. Chair, can I ask one last thing to follow up on this? Because we talk about the human and the values, and there's a workforce transition that we're grappling with, and, Doctor, you've been talking about worker use of AI. Could you explain to us here in Congress what you mean by that? Because all of this doesn't happen on its own, and we have to prepare our workforce all across the spectrum. So could you, just to finish up, explain that to us?
Daron Acemoglu:
Absolutely. I think it's very, very important, and if I may, I'll go back to the values discussion that Professor Vallor and Senator Peters had, and that you contributed to as well. The values we are going to have for AI really depend on the direction in which we use AI and on who controls AI. If AI is in the hands of a few elite entrepreneurs and tech barons, that's going to be a very different direction for AI. We won't put in the resources to help low-skill workers, and the more we fail to put in the resources to train them, to provide them with the technologies and the knowledge, the more it will appear to us that they are useless, and that will centralize AI, that will centralize all of the resources. The pro-worker agenda is using these AI tools to help the workers, to provide them with better information, and I think with even the existing knowledge in AI research we can provide much better information so that educators become better decision makers, skilled craftspeople like electricians and plumbers become better decision makers, healthcare workers and nurses become better decision makers. Blue-collar workers can do much better; we have seen in a few companies how the right type of augmented reality and AI tools, used to train them in precision work, can increase their productivity. There's a lot we can do for the workers, and there's a lot we can do for democracy.
Sen. Jacky Rosen (D-NV):
Thank you. That’s a great way to finish. I appreciate it.
Sen. Gary Peters (D-MI):
Thank you, Senator Rosen. And I'd like to take this opportunity to thank our witnesses. Thank you for being here. We're certainly all grateful for the work you do and for your contribution to this very important conversation. Today's hearing offered a new perspective on artificial intelligence. Our witnesses helped us step back from some of the exciting developments and the hype of AI and consider the historical, ethical, and philosophical questions that this technology poses. In order to ensure that AI truly works on behalf of the American people, we can't stop here. We must continue this deeper examination of our values, our agency, and the future that we want from this technology as we build it in the years ahead. The record for this hearing will remain open for 15 days, until 5:00 PM on November 23, 2023, for the submission of statements and questions for the record. This hearing is now adjourned.
Kelly Adkins is a business and finance reporter for the Medill News Service on Capitol Hill. She covers topics like technology and money and how they intersect with government policy at the national level. She is pursuing a Master of Science in Journalism at Northwestern University's Medill School of Journalism.