Brad Smith, the president and vice chair of Microsoft Corporation, said in an interview that aired Sunday on “Face the Nation” that he expects the U.S. government to regulate artificial intelligence in the year ahead.
The European Union and China have already crafted national A.I. strategies, but the U.S. has yet to do so.
“I was in Japan just three weeks ago, and they have a national A.I. strategy. The government has adopted it,” Smith said. “The world is moving forward. Let’s make sure that the United States at least keeps pace with the rest of the world.”
“Artificial intelligence” is an umbrella term for computer systems that can perform tasks requiring human intelligence, and it includes technology used in familiar products such as Siri and the Roomba. Recently, A.I. systems capable of creating text, audio, and images have made headlines with the debut of chatbots like Google’s Bard and OpenAI’s ChatGPT, and image generators like DALL-E.
Smith said he believes the country needs standards on how A.I.-generated content is regulated, especially content that mimics human beings.
Last week, a deepfake image of an explosion near the Pentagon, potentially created at least in part with A.I., circulated online. Although the image was quickly debunked, it briefly moved markets, “Face the Nation” moderator Margaret Brennan noted. Smith said “we’ll need a system that we and so many others have been working to develop that protects content, that puts a watermark on it so that if somebody alters it, if somebody removes the watermark, if they do that to try to deceive or defraud someone, first of all, they’re doing something that the law makes unlawful.”
But as Brennan noted, Washington is heading into a presidential election year, and these deepfake images could affect the election. A recent political attack ad used A.I.-generated images to depict an imagined dystopian future. The ad, released by the Republican National Committee, mimics a news report from 2024, after the presidential election. It shows A.I.-generated images of China invading Taiwan, businesses boarded up, and President Joe Biden and Vice President Kamala Harris celebrating their reelection.
“Well, I think there is an opportunity to take real steps in 2023, so that we have guardrails in place for 2024,” Smith said. “So that we are identifying in my view, especially when we’re seeing foreign cyber influence operations from a Russia or China or Iran, that is pumping out information that they know is false and is designed to deceive, including using artificial intelligence. And that will require the tech sector coming together with government and it really will require more than one government.”
In Congress, Democratic Sens. Michael Bennet of Colorado and Peter Welch of Vermont have proposed legislation to create a commission tasked with regulating the artificial intelligence industry and ensuring it is safe and accessible to American citizens. Earlier this month, the White House announced new initiatives to promote responsible innovation in A.I.
Smith said that Microsoft is specifically focusing on how news organizations can protect their content, and how candidates and campaigns can protect the cybersecurity of their operations. He also told Brennan that Microsoft has been working with the White House to answer its questions.
“They, and really people across Washington D.C. fundamentally in both political parties, are asking the same questions,” Smith said. “What does this mean for the future of my job? What does it mean for the future of school for my kids? Fundamentally, we’re all asking ourselves, how do we get the good out of this and put in place the kinds of guardrails to protect against the risks that [it] may be creating.”
Smith said that while existing laws need to be applied to A.I., he believes the country would benefit from a new framework to regulate artificial intelligence specifically.
“When it comes to the protection of the nation’s security, I do think we would benefit from a new agency, a new licensing system, something that would ensure not only that these models are developed safely, but they’re deployed in, say, large data centers, where they can be protected from cybersecurity, physical security and national security threats,” Smith said.
Brennan noted that Stability AI’s CEO has said A.I. is going to be a “bigger disruption than the pandemic,” and that the head of one of the largest teachers unions in the country has asked what it means for education. Smith has suggested math exams could be graded by A.I., which, as Brennan noted, will cost jobs.
“Well, actually think about the shortage of teachers we have, and the shortage of time for the teachers we have,” Smith said. “What would be better? To have a teacher sitting and grading a math exam, comparing the numbers with the table of the right answers, or freeing that teacher up so they can spend more time with kids? So they can think about what they want to teach the next day. So they can use this technology to prepare more quickly and effectively for that class the next day.”
In creative industries, A.I. can build upon work that has already been done, so Brennan asked how compensation would be worked out.
Smith said there are two different aspects to compensating people in creative industries. First, “will we live in [a world] where people who create things of value continue to get compensated for it?” He said the answer “is and should be yes” and that “we’ll have copyright and other intellectual property laws that continue to apply and make that a reality.”
But, he said, there is a “broader aspect” to the question of compensation, which is that AI will make “good” employees better, while “weaker” employees could be challenged.
“What should excite us is the opportunity to use it to get better,” Smith said. “Frankly, to eliminate things that are sort of drudgery. And yes, it will raise the bar. Life happens in that way. So let’s all seize the moment, let’s make the skilling opportunities broadly available. Let’s make it easy. Let’s even make it fun for people to learn.”
Smith said that A.I. will create and displace jobs over the next few years.
“I think we’ll see it unfold over years, not months,” Smith said. “But it will be years, not decades, although things will progress over decades as well. There will be some new jobs that will be created. There are jobs that exist today that didn’t exist a year ago in this field. And there will be some jobs that are displaced. There always are. But I think for most of us, the way we work will change. This will be a new skill set we’ll need to, frankly, develop and acquire.”
Smith advised against a six-month pause on A.I. experimentation, something tech leaders Elon Musk and Apple co-founder Steve Wozniak proposed in an open letter several months ago.
“I think the more important question is, look, what’s going to happen in six months, that’s different from today? How would we use the six months to put in place the guardrails that would protect safety and the like? Well, let’s go do that,” Smith said. “Rather than slow down the pace of technology, which I think is extraordinarily difficult, I don’t think China’s going to jump on that bandwagon. Let’s use six months to go faster.”