
Can policy get smart enough for artificial intelligence? – ASU News Now


August 23, 2023

3 ASU experts give their take on AI policy solutions

Editor’s note: This article is the second of a two-part series about the ways that AI, including large language models like ChatGPT, impacts society and how ASU researchers are addressing its opportunities and challenges. Read more: Arizona State University experts explore national security risks of ChatGPT

Artificial intelligence offers enormous potential for good. For example, the Beatles used AI technology to extract the musicians’ voices from an old demo cassette and produce a new song. And a new AI tool called AlphaFold boosted medical science by predicting the structures of nearly 200 million proteins.

But AI is also prone to bias, misrepresenting existing data and giving answers out of context. Chatbots built on large language models have put these issues in the spotlight. Users of ChatGPT, Google’s Bard and similar programs have encountered AI “hallucinations,” in which the chatbot invents a convincing but fake source to answer a question. One investigative journalist found that, when pressed, Microsoft Bing’s AI chatbot created a devious alter ego.

RELATED: What are the differences between all the chatbots on the market today?

Faced with AI’s immense power for good or bad, lawmakers and even AI developers are scrambling for policy solutions. Tech leaders across the nation wrote an open letter urging a six-month pause in AI development — a period ending next month — to establish safety standards. The Federal Trade Commission began investigating OpenAI, the maker of ChatGPT, to assess consumer harm.

Additionally, the European Union is seeking to pass the Artificial Intelligence Act, which could pave the way for global AI regulations. On the home front, the White House published a “Blueprint for an AI Bill of Rights” that protects U.S. citizens from harm caused by AI, including privacy breaches, discrimination and unauthorized use of original works.

But is traditional policy the best way to handle AI’s capabilities? How do different work industries incorporate AI fairly? What does an ethical AI environment even look like?

To find out, we spoke to three experts from Arizona State University’s Global Security Initiative: Jamie Winterton, senior director of research strategy; Raghu Santanam, affiliate researcher and a professor and McCord Chair of Business in the W. P. Carey School of Business; and Mickey Mancenido, affiliate researcher and assistant professor in the School of Mathematical and Natural Sciences.

Note: Answers have been edited for length and clarity.

“I think, if we’re going to be pragmatic, the thing to do is figure out a reasonably global, agile, soft law approach.”

— Jamie Winterton, Global Security Initiative senior director of research strategy

Question: Is it worth it to stop AI development as some have suggested and set up policy, or should we work on both at the same time?

Winterton: I think we will have to figure out how to do them at the same time, because these technologies are always moving forward, and the ways that people use these technologies are always changing as well. So now we’re in the position of having to come up with flexible and fast-moving policy. Policy is neither.

I think, if we’re going to be pragmatic, the thing to do is figure out a reasonably global, agile, soft law approach. Instead of saying, “Let’s put together these laws about it,” let’s work with international organizations and gather coalitions to create standards. If enough big companies come together and agree to work by a particular bill of rights or framework, it adds pressure for others to get on board. Let’s use that collaborative agreement and social pressure to govern the development and use of some of these technologies.

Gary Marchant in the Sandra Day O’Connor College of Law has said that our law is struggling to keep up with the last iteration of technological developments, let alone where tech is right now. We should be thinking about more creative ways to get the societal outcomes that we’re hoping for, and not just say, “Congress will pass a law.”

Q: What are some top-priority concerns to address through regulation and guidance?

Winterton: I think the biggest one is, what can it be used for, and what should or shouldn’t it be used for? My colleague, Joshua Garland, recently testified to the New Mexico state legislature, and he recommended that AI-generated imagery and text shouldn’t be used in political ads. I think that’s a great specific recommendation.

It’s very hard to tell what’s been faked and what is legitimate, so watermarking, or some other way to distinguish genuine content from fake, is maybe the biggest issue we’re facing, with implications for mis- and disinformation and for democracy generally.

My other concern is literacy around these models. What are they good for and not good for? AI and large language models are very confident. They’ll give you an answer immediately that sounds great. But ChatGPT can be very misleading. Working with a large language model can be very useful and effective. But there are certain things it will not be able to do well. We need general literacy for people in different sectors to know how to use it so it will help them and not lead them astray.


There’s also a big cybersecurity aspect that runs through the center of this. We’re having large language models recommend things to us. How do we make sure they haven’t been tampered with by some adversary? That’s one area that would be good to keep in our research and development.

RELATED: ASU researcher bridges security and AI

Q: How can we advocate for good AI policy or soft law approaches?

Winterton: We often think of tech development as being its own thing, but technology is fundamentally social and political. If we’re talking about how these technologies and algorithms might impact different groups of people, it brings the conversation back to what it’s being used for. Academia is a great way to drive it forward because we have the neutral middle ground. We can bring this perspective to our research design, to the conversations that we have with our transition partners in government and industry.

“Any new technology, especially general-purpose technology like AI, does disrupt existing roles. But it also creates new roles.”

— Raghu Santanam, affiliate researcher, Global Security Initiative; professor and McCord Chair of Business, W. P. Carey School of Business

Q: Considering the Hollywood writer strike and other debates around how AI could automate some job tasks, should different industries create policies for AI use and job security?

Santanam: As much as it is a job security question, it is also about having just compensation for intellectual contributions. When you start using these large language models, you’re essentially leveraging all the past creative works to make something new, or at least something that appears new. The debate is about whether creative content usage in large language models is ethical and legal. And if content that’s not open source is used, how do we create a just compensation model for the original content creators?

If you think about media that streams on Netflix or Amazon, royalty payments indirectly do accrue to original creators in most cases. So some type of a royalty model may eventually be possible for generative AI usage. At least going forward, compensation structures may get defined in the original agreements with content creators for generative AI usage.

Beyond that, whether AI replaces you and whether it makes you more productive are two separate questions. It’s happened before in society. Any new technology, especially general-purpose technology like AI, does disrupt existing roles. But it also creates new roles. How quickly that adjustment happens determines how disruptive the technology is for specific occupations.

RELATED: Duplicating talent: The threat of AI in Hollywood

Q: There’s a lot of nervousness surrounding this. Is there a better framework to use when we think about AI and how it will affect our jobs or society in general?

Santanam: A lot of freelance content creators are already seeing business dwindle. If you’re asking what might happen in the future, I would guess that it is going to make most knowledge workers more productive. This has happened before with e-commerce and Web 2.0: those technologies made service and knowledge work more productive, but they also created new job roles. A role like search engine optimization did not exist before Google came out with its search engine. But at the same time, the internet disrupted the newspaper industry beyond anyone’s expectations.

Although many content creators may see their opportunities diluted, some professionals will be able to seize new ones. For instance, those who are really good at prompt engineering will be sought after. That’s also an upskilling opportunity. If you build this new skill into your role, and you’re able to use ChatGPT-like services to your benefit, you’re going to be a more valued knowledge worker in your organization.

RELATED: New ChatGPT course at ASU gives students a competitive edge

Q: Since people want fair compensation for their work, how do we determine whether something is truly someone’s work, versus purely AI-generated content or a blend of the two?

Santanam: That value determination happens in the marketplace. It is likely that end consumers will value original human work and therefore continue to drive higher value for human-generated content. But if consumers perceive high value in hybrid content, then it might actually create a lot of difficulty for those who say, “I’m only going to do original work.”

We can see the market’s role in value determination in several established markets, where products that are advertised as handmade or one of a kind have high valuations. We could also see such differentiation with Broadway shows versus movies in theaters. It will be interesting to watch how markets determine the relative values placed on consuming content that’s original and human made, content that’s hybrid, and content that is actually purely AI generated.


Now my hypothesis is that AI-generated content, simply because of its scalability, is going to be priced lower. At the same time, exquisite human creations could command an even bigger premium over AI-generated works.

RELATED: The state of artificial creativity

Q: When deciding things like what AI should be used for, or what data should be used to train AI, are these questions that are better answered through the marketplace and company to company? Or is AI so transformative that we should have policy addressing it in some way?

Santanam: I think it should happen at both levels. In one sense, the scope of AI technology’s impact could be similar to nuclear technology. In most cases, you don’t know who’s had a hand in developing generative AI systems. But if the system gets implemented widely, one small defect could cause humongous damage across the globe. That’s where I think there is need for regulation. When you’re using algorithms to make decisions for autonomous vehicles, facial recognition, housing and business loans, or health care, we need clear regulatory frameworks that govern their usage.  

For corporate or marketplace policy, you could start thinking about the ethical frameworks that are important to avoid disparities in outcomes and make sure that the benefits accrue equitably to all stakeholders. It is also important to ensure that ethical boundaries for content usage are as clearly outlined as possible. When there is a broad agreement on the ethical framework for creation and usage of generative AI, it becomes easier for pricing, distribution and business model innovations to take place.

Q: What industries could really benefit from using AI as a tool, and what should they consider before doing so?

Santanam: I think a lot about health care and how generative AI might or might not disrupt the health care industry, or for that matter any industry that directly impacts human life. What are the policies and governance mechanisms that we’ll need to put in place so that we do not end up worsening disparities or putting human lives at risk?

The financial industry already uses AI extensively, but it definitely is a context where there is opportunity for biases to creep into business practices. For example, when you make decisions on loans, there are concerns about what data you’re utilizing and how decision-making is delegated between humans and AI. More importantly, deep learning networks are black boxes, so you need to start thinking about how to build more transparency in data usage, decision processes and delegation mechanisms.
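To make that transparency point concrete, here is a minimal, hypothetical sketch (Python with scikit-learn assumed; the feature names and data are invented, not drawn from any lender): an interpretable model whose loan decisions can be traced back to individual inputs, in contrast to a black-box network that needs additional tooling to explain itself.

```python
# Hypothetical sketch of an interpretable loan model; feature names and data
# are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(500, 3))                             # hypothetical applicants
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # hypothetical approvals

model = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushes the approval odds up or down,
# the kind of per-decision accountability a black-box network lacks by default.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```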

Finally, AI also provides a way to scale education. There’s already some work going on in terms of providing office hour mentoring and tutoring. Grading also is ripe for automation, at least partially.

In all these industries, common concerns include the underlying data and the biases inherent in them, the inadequate organizational control and process oversight to ensure fairness and equity, and the lack of frameworks for ethical decision-making and policies for due process.

“As human beings, we have our baseline definition of ethics and morality, but an organization’s mandate defines what it perceives as ethical in a specific context.”

— Mickey Mancenido, affiliate researcher, Global Security Initiative; assistant professor, School of Mathematical and Natural Sciences

Q: Large language models are fed lots of data, which affects how they answer questions. Who decides where that training data comes from, and is there a way to limit AI to ethically sourced data?

Mancenido: That’s highly dependent on the developers of the algorithms. In the case of ChatGPT, that would be OpenAI. My understanding of how they acquire large amounts of training data is that they scrape it from the internet, including news websites, blogs, Reddit and Twitter. Do they have ethical practices in place to ensure that the data is representative and that the civil liberties of private citizens are not being infringed upon? I have no way of knowing that. Most AI companies do not share their training data. And if you ask ChatGPT to give you a description of the data it was trained on, it’s just going to say, “I am an AI language model; I don’t have access to my training information.”
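Readers with an API key can observe this behavior directly. The following is a minimal sketch, assuming the OpenAI Python client is installed and an API key is set in the environment; the model name is an assumption, and the reply is typically a general statement rather than a list of sources.

```python
# Minimal sketch, assuming the OpenAI Python client and an OPENAI_API_KEY
# environment variable; asks the model to describe its own training data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice is an assumption
    messages=[{"role": "user", "content": "Describe the data you were trained on."}],
)
# Typically prints a generic statement rather than an enumeration of sources.
print(response.choices[0].message.content)
```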

Q: Do you think there will be a move to have federal oversight of that in the future?

Mancenido: It’s a little different from other technologies. Let’s say back in the 1950s, they were regulating the safety of vehicles. In that case, your technology has discrete components, and you know exactly how each affects the overall system with respect to safety and reliability. But in the case of AI, there are too many moving parts. I believe that even the developers most of the time have no idea how every single component and parameter affects the prediction or the interface. So, I don’t know what that regulation will look like. I can imagine that it’s going to be a very difficult thing to police, just because anyone with a computer can scrape data from the internet and train a model.


Q: There are many conversations around ensuring that an AI model’s data source is ethical. But how would we define ethical data?

Mancenido: Ethics really is context- and organization-dependent. For example, I work a lot in the public service domain, such as the Department of Homeland Security. Every organization has a mandate. The mandate of the TSA is to ensure that passengers are safe during air travel. That’s a different mandate from, let’s say, the Department of Commerce.

In the context of the TSA mandate, if you’re deploying AI for facial recognition at airport security, would you err on the side of a false match or a false mismatch? Which error is going to be less impactful for your organization and for the traveling public? AI systems are very good at situations they’ve been trained on, but not so much on edge cases, which is where I think humans still have superiority. As human beings, we have our baseline definition of ethics and morality, but an organization’s mandate defines what it perceives as ethical in a specific context.
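That trade-off between a false match and a false mismatch can be made concrete with a small numerical sketch (hypothetical similarity scores, not TSA data): raising the decision threshold admits fewer impostors but turns away more legitimate travelers, and the organization’s mandate decides which error it can better tolerate.

```python
# Hypothetical similarity scores illustrating the face-matching trade-off:
# a higher threshold lowers the false match rate (FMR) but raises the
# false non-match rate (FNMR).
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(0.80, 0.08, 10_000)    # scores for true identity matches
impostor = rng.normal(0.45, 0.10, 10_000)   # scores for non-matching faces

for threshold in (0.5, 0.6, 0.7):
    fmr = np.mean(impostor >= threshold)    # impostor wrongly accepted
    fnmr = np.mean(genuine < threshold)     # traveler wrongly rejected
    print(f"threshold={threshold:.1f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")
```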

Q: When thinking about facial recognition, is that biometric data protected by law?

Mancenido: It depends on which country you’re in. The USA puts a lot of importance on the rights and civil liberties of private citizens. If you go to a security checkpoint, and they have this technology, you will see a display that says that they only keep the images for 12 hours, after which they’re deleted from the system. That’s one of the safeguards that they have. And they don’t use your photos in training their models. Among public-serving organizations, that’s pretty much the status quo.

Now, when you think about other countries where the privacy of ordinary citizens is considered very lightly or not at all, then I would say that yes, they do use private citizen data to train their models. That’s why some countries have AI models that are being deployed in spaces where it would be unthinkable to do so in the U.S. Here in this country, there are a lot more regulations when it comes to privacy, and that should make us sleep better at night. I prefer that.

Q: How can we safeguard society while allowing for innovation that benefits society?

Mancenido: That’s a very difficult question. In this country, I believe that we need better privacy-preserving laws. I think the top research funding priority should be privacy preservation, because that’s where we balance optimal AI performance against the risk of infringing on citizens’ rights by feeding AI systems as much data as possible. If you ask me how to balance safety and innovation, I think that’s the key.

I’m also really an advocate for public education. With AI, we want people to know enough that they can make informed decisions. But at the same time, we don’t want them to be too overwhelmed, because it can get very overwhelming very fast. Educating our members of Congress and the other people who make these decisions, I think that’s the tricky part here. Can we make them understand enough to make sound decisions about the extent of AI regulation? Most researchers, in my opinion, need to keep in mind, “How do I make my research more accessible, in such a way that a regular layman can understand it?”

RELATED: ASU Lincoln Center to launch course on human impacts of AI

ASU’s Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund (TRIF). TRIF investment has enabled hands-on training for tens of thousands of students across Arizona’s universities, thousands of scientific discoveries and patented technologies, and hundreds of new start-up companies. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.

Top graphic by Alec Lund/ASU

Mikala Kass

Communications Specialist, ASU Knowledge Enterprise


480-727-5616



