
ASU experts explore national security risks of ChatGPT – ASU News Now


August 15, 2023

When the editor in chief of a Georgia daily newswire turned to ChatGPT to research a Second Amendment lawsuit pending in Washington, D.C., the chatbot delivered what looked like a newsworthy nugget.

ChatGPT, a natural language processing tool driven by AI technology, stated that the legal complaint accused a syndicated radio talk show and podcast host of defrauding and embezzling funds from the Second Amendment Foundation. It seemed scandalous, but ChatGPT was simply making up facts that sounded convincing, a phenomenon known as “hallucination.” The editor in chief never published the fabricated information but did share it with the talk show host, who filed a defamation lawsuit against ChatGPT’s developer, OpenAI.

In New York, a federal judge imposed fines on two attorneys and a law firm for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. The judge said the lawyers and their firm “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

These are just two recent examples of how ChatGPT, the internet’s favorite plaything since its debut in November 2022, poses a threat to our reputations, jobs, privacy — and even truth itself. Arizona State University experts say another threat looms large but often escapes our notice: how ChatGPT and other chatbots pose a risk to national security.

A real game changer

“ChatGPT is a real game changer. This is the first time in human history that we are facing a democratic process with the 2024 election with an incredibly powerful artificial intelligence potentially being used to undermine the process in ways that we have never ever seen before,” says Andrew Maynard, a scientist, author and professor in ASU’s School for the Future of Innovation in Society, a unit of the College of Global Futures.

Maynard’s work focuses on how society can successfully transition to a future in which transformative technologies have the power to fundamentally change every aspect of life. He writes about emerging technologies and responsible innovation in “The Future of Being Human” on Substack.

“People can use generative AI to create content that looks legitimate and human sourced. That’s one side of things,” he says. “On the other side of things is social or human hacking. How can you use persuasive content to get underneath people’s defense mechanisms and critical thinking and nudge them in certain ways?

“This is what skilled manipulators and skilled politicians do: They use rhetoric in a way that makes it far harder for us to engage in critical thinking, and far easier to just go with the flow of their ideas. They do that through an incredibly clever use of language that plays to our internal biases. And now we’ve taught machines how to manipulate how we think, feel and behave in a way that has never been done before.”


With hundreds, if not thousands, of chatbots feeding content into social media and news outlets, that adds up to a lot of political persuasion in the run-up to the 2024 election.

The holy grail of disinformation research

“The holy grail of disinformation research is to not only detect manipulation, but also intent. It’s at the heart of a lot of national security questions,” says Joshua Garland, associate research professor and interim director at ASU’s Center on Narrative, Disinformation and Strategic Influence (NDSI), part of the Global Security Initiative.

NDSI conducts research on strategic communication, influence, data analytics and more to generate actionable insights, tools and methodologies for security practitioners to help them navigate today’s (dis)information age. One example is the Semantic Forensics (SemaFor) program, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), which aims to create innovative technologies to detect, attribute and characterize disinformation that can threaten our national security and daily lives.

ASU is participating in the SemaFor program as part of a federal contract with Kitware Inc., an international software research and development company. Their project, Semantic Information Defender (SID), aims to produce new falsified-media detection technology. The multi-algorithm system will ingest significant amounts of media data, detect falsified media, attribute where it came from and characterize malicious disinformation.
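The article does not describe how SID is built, but the pipeline it outlines, ingesting media, detecting falsified content, attributing its origin and characterizing intent, can be sketched in broad strokes. The Python below is a purely hypothetical illustration; the class names, toy detectors and score fusion are assumptions made for explanation only, not Kitware’s or ASU’s actual design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MediaItem:
    source: str
    content: str

@dataclass
class Assessment:
    falsified_score: float   # combined likelihood the item is falsified
    attribution: str         # best guess at where the item came from
    characterization: str    # stand-in for an intent/maliciousness label

def run_pipeline(item: MediaItem,
                 detectors: List[Callable[[MediaItem], float]]) -> Assessment:
    """Hypothetical sketch: fuse several detector scores, then attribute and label."""
    # 1. Detect: combine scores from multiple independent algorithms.
    scores = [detect(item) for detect in detectors]
    fused = sum(scores) / len(scores)
    # 2. Attribute: placeholder; a real system would use provenance analysis.
    attribution = item.source or "unknown"
    # 3. Characterize: placeholder threshold standing in for intent analysis.
    label = "flag for analyst review" if fused > 0.5 else "no action"
    return Assessment(fused, attribution, label)

# Toy detectors standing in for real falsified-media algorithms
detectors = [lambda m: 0.8 if "too good to be true" in m.content else 0.1,
             lambda m: 0.6 if m.source == "unverified" else 0.2]
print(run_pipeline(MediaItem("unverified", "a claim too good to be true"), detectors))
```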

“Disinformation is a direct threat to U.S. democracy because it creates polarization and a lack of shared reality between citizens. This will most likely be severely exacerbated by generative AI technologies like large language models,” says Garland.

The disinformation and polarization surrounding the topic of climate change could also worsen.

“The Department of Defense has recognized climate change as a national security threat,” he says. “So, if you have AI producing false information and exacerbating misperceptions about climate policy, that’s a threat to national security.”

Garland adds that the technology’s climate impact goes beyond disinformation.

“It’s really interesting to look at the actual climate impact of these popular large language models,” he says.

Programs like OpenAI’s ChatGPT and Google’s Bard are energy intensive, requiring massive server farms to supply the computing power needed to train them. Cooling those data centers consumes vast amounts of water as well.

Researchers from the University of California, Riverside and the University of Texas at Arlington published AI water-consumption estimates in a preprint titled “Making AI Less ‘Thirsty.’” The authors reported that training GPT-3 alone required an estimated 185,000 gallons of water, or about a third of the water needed to fill an Olympic-sized swimming pool. Using these numbers, they estimated that ChatGPT consumes a standard 16.9-ounce bottle of water for every 20 to 50 questions it answers. Given the chatbot’s unprecedented popularity, researchers like Garland fear it could take a troubling toll on water supplies amid historic droughts in the U.S.
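As a rough illustration of the scale behind those figures, the back-of-the-envelope arithmetic below reproduces the comparisons quoted above. The Olympic-pool volume of roughly 660,000 gallons is an assumed typical value, not a number from the paper.

```python
# Back-of-envelope check of the water figures quoted above (illustrative only)
training_water_gallons = 185_000   # reported estimate for training GPT-3
olympic_pool_gallons = 660_000     # typical Olympic pool, ~2.5 million liters (assumption)
bottle_ml = 500                    # a 16.9-ounce bottle is roughly 500 mL

share_of_pool = training_water_gallons / olympic_pool_gallons
print(f"Training water as a share of one Olympic pool: {share_of_pool:.0%}")  # ~28%

# Implied water use per answered question if one bottle covers 20 to 50 questions
for questions_per_bottle in (20, 50):
    print(f"~{bottle_ml / questions_per_bottle:.0f} mL per question "
          f"({questions_per_bottle} questions per bottle)")
```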


The promise (and pitfalls) of rapid adoption

“Right now, we are seeing rapid adoption of an incredibly sophisticated technology, and there’s a significant disconnect between the people who have developed this technology and the people who are using it. Whenever this sort of thing happens, there are usually substantial security implications,” says Nadya Bliss, executive director of ASU’s Global Security Initiative, who also serves as chair of the DARPA Information Science and Technology Study Group.

She says ChatGPT could be exploited to craft phishing emails and messages that target unsuspecting victims and trick them into revealing sensitive information or installing malware. The technology can also produce these messages at a volume, and with a level of polish, that makes them harder to detect.

“There’s the potential to accelerate and at the same time reduce the cost of rather sophisticated phishing attacks,” Bliss says.

ChatGPT also poses a cybersecurity threat through its ability to rapidly generate malicious code that could enable attackers to create and deploy new threats faster, outpacing the development of security countermeasures. The generated code could be rapidly updated to dodge detection by traditional antivirus software and signature-based detection mechanisms.

Turning ChatGPT into a force for good

At GSI’s Center for Human, Artificial Intelligence, and Robot Teaming (CHART), Director Nancy Cooke and her team explore the potential legal and ethical issues that arise as robots and AI are assigned increasing autonomy. They also study how teams of humans and synthetic agents can work together effectively, from communicating verbally and nonverbally to engendering the appropriate level of human trust in AI.

“In my experiments where I bring participants into the lab, train them on a task and tell them they will be working with AI, in many cases the participants trust the AI too much,” she says. “When the AI starts making mistakes, participants often think, ‘I just don’t know what I’m doing because I’m new to this task. AI must be better than I am.’”

If people suspect AI can perform tasks better than they can, it’s only natural for professionals such as computer programmers, financial advisers, writers and others to fear for their job security.

While it is likely — if not inevitable — that ChatGPT will wipe out jobs, Cooke believes it is possible for humans to be empowered by AI, not threatened by it. She gives the example of playing chess.

“Creating teams that are half human and half machine (known as “centaurs”) happens in chess, where you have a pretty good chess player matched with a pretty good chess program, and together they beat the most famous grandmaster, Garry Kasparov, as well as the very best chess program. By using two different kinds of intelligences, you can take the best of what AI has to offer and team it with the best of what humans have to offer. At CHART, we call it making humans with superhuman capabilities,” says Cooke.


For technology to transform us into humans with superhuman capabilities, Cooke says we first need to build guardrails to ensure it works on our side.

“What if we were to regulate it such that developers of AI would need to produce a report card telling us in what ways the technology would be good and bad for human well-being?” she asks.

Developing ChatGPT literacy

The phrase made popular by Ronald Reagan during his presidency, “Trust but verify,” rings especially true today with the growing popularity of ChatGPT. After reaching more than 100 million active users within two months of its launch, ChatGPT is the fastest-growing consumer application in history, according to a UBS study.

Bliss recommends maintaining a healthy dose of skepticism when using an application that can be more glib than accurate. Triggering emotion is at the heart of successful disinformation campaigns, and Bliss suggests pausing if you read something that provokes a strong reaction.

“If I read something that makes me feel sad, happy or angry, I will usually go back and research to see if there is a reliable source that has a story on a similar topic,” she says. “I’m a big fan of checking sources and making sure those sources are reliable.”

To help people get better results from chatbots, Maynard teaches a new ASU Online course called “Basic Prompt Engineering with ChatGPT: Introduction.” The course is open to students in any major and, despite its name, is not really about engineering. Maynard says it is like driver’s ed for ChatGPT users.

“Having a car is great, but having people driving them without knowing the rules of the road or basic driving skills doesn’t lead to safe roads,” he says. “It’s the same with ChatGPT. The more people understand how to use it in safe and responsible ways, the more likely we’ll see the benefits of it.”
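The article does not detail the course content, but one habit basic prompt engineering teaches, spelling out the task, the context and explicit guardrails rather than firing off a bare question, can be sketched with a simple template. The function and field names below are illustrative assumptions, not material from Maynard’s course.

```python
from typing import List

def build_prompt(task: str, context: str, constraints: List[str]) -> str:
    """Illustrative prompt template: state the task, supply context, set guardrails."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Rules:\n{rules}\n"
        "If you are unsure about a fact, say so instead of guessing."
    )

print(build_prompt(
    task="Summarize the status of a pending lawsuit",
    context="I am a journalist verifying claims before publication.",
    constraints=["Cite only sources you can name", "Do not invent case details"],
))
```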

ASU’s Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund (TRIF). TRIF investment has enabled hands-on training for tens of thousands of students across Arizona’s universities, thousands of scientific discoveries and patented technologies, and hundreds of new startup companies. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.

Written by Lori Baker


