In early 2022, the City of Syracuse’s Surveillance Technology Working Group met to discuss a proposal from the police department to install street cameras that automatically scan license plates as an aid for investigating crime. While the technology had the potential to help detectives identify and apprehend suspects, it also raised real concerns about privacy and oversight. How would data be used and stored, and who would have access to it?
The Surveillance Technology Working Group—a mix of city employees and community members—was created by Syracuse Mayor Ben Walsh ’05 M.P.A. to address these kinds of policy questions and give the public a voice in the process. In the end, the group voted to approve the license plate scanners with important stipulations. The data can only be used for identifying vehicles and occupants that are part of an active criminal investigation or have been reported missing. It cannot be used for immigration enforcement, and it must be purged after a set time.
Johannes Himmelreich, a member of the working group, has significant experience in weighing issues of technology and policy. An assistant professor of public administration and international affairs at the Maxwell School, Himmelreich focuses his research on the ethics and governance of technologies such as self-driving cars, autonomous weapons and machine learning in the public sector. “I think it’s really important work that the mayor has started,” says Himmelreich. “Sometimes, the work of the group is understanding: What is the technology? Is it a surveillance technology? What do we use it for, and what are the risks and trade-offs? This is a way of collaborative, participatory policymaking that has been very successful.”
As autonomous systems and artificial intelligence (AI) continue to advance, the need to understand these new technologies and ensure they are used safely and ethically is more acute than ever.
Himmelreich is among numerous scholars at Maxwell who are rising to this challenge—applying the tools of the social sciences to emerging technologies. He and colleagues across disciplines are conducting important research on everything from drones to robotics to generative AI tools like ChatGPT, helping produce data and shape policy that impacts the American public and beyond. Their scholarship also enriches the education of students, who will no doubt navigate these technologies in their future careers.
CREATING A HUB
Syracuse University has been ahead of the curve in focusing on the policy and social impact of emerging technologies, and much of that is centered on an institute housed in the Maxwell School.
About five years ago—amid increasing conversations about autonomous vehicles, robotics and AI—a number of academic leaders began laying the groundwork for a Universitywide initiative focused on the intersection of technology, policy and society. To gauge interest on and off campus, Jamie Winders, a professor of geography and the environment at the Maxwell School and Syracuse University’s associate provost for faculty affairs, met with a wide range of University scholars and outside experts.
“What became immediately clear was that not only did we have a critical mass of faculty interest across all schools and colleges, but also that we as a University had an opportunity to approach this area in ways that were different from what we were seeing elsewhere,” Winders recalls. “I spoke with about 100 industry leaders, advocates and policymakers, and when they talked about how they saw these fields developing, they kept pointing to the absence of work where technology meets policy, and on wider societal impacts and public perception.”
To address that gap, in 2019, the University launched the Autonomous Systems Policy Institute (ASPI). J. Michael Haynie, vice chancellor for strategic initiatives and innovation, was a driving force, along with Maxwell Dean David M. Van Slyke, who named Winders its founding director. “We had the opportunity to position ASPI as interdisciplinary at its core,” Winders says. “We thought of our existing interests in autonomous systems as three circles on a Venn diagram. We have faculty who are really interested in the technology and design aspects; we have folks interested in the policy, law and governance; and we have many interested in the societal impacts. From the beginning, ASPI sat at the center of that Venn diagram.”
The institute now has more than 60 affiliated faculty researchers, connecting scholars from the social sciences, humanities, computer science and engineering, information studies, law, and communications.
Among the Maxwell faculty who serve as senior research associates with ASPI are Himmelreich, geographer Jane Read, Austin Zwick in policy studies, sociologist Aaron Benanav and political scientist Baobao Zhang. Zhang and Himmelreich, along with colleagues from other universities, are editors of the forthcoming “Oxford Handbook of AI Governance” (Oxford University Press).
One core function of ASPI is to foster collaboration among scholars in different fields. Its Artificial Intelligence Research Working Group, for instance, meets monthly for faculty to share ideas and projects. “I work with computer scientists, experimental psychologists, philosophers and communication studies scholars,” Zhang says, “and it’s great that ASPI exists as a hub to support us.”
In early April, ASPI coordinated a panel of faculty from across the University to tackle a tech issue that has garnered much media attention in recent months: ChatGPT.
CROSSING DISCIPLINES
The April ChatGPT panel included a newcomer, Hamid Ekbia, whose arrival at Maxwell complements the University’s efforts to harness the social sciences in shaping decisions about technology.
Ekbia, a University Professor, takes the helm as director of ASPI this July. In keeping with the collaborative nature of ASPI, Ekbia has long worked across disciplinary lines. Initially trained as an engineer in his native Iran and at UCLA, he was drawn by advances in artificial intelligence and went on to earn a Ph.D. in computer science and cognitive science from Indiana University Bloomington.
Ekbia considers himself a humanist and describes himself as a “poet of technology” in multiple senses—including as an acronym for the policy and ethics of technology, a formulation that is at the heart of ASPI. “I see these as closely intertwined,” he writes on his website For a Better Future, “with ethics guiding our thinking about the potential harms and benefits of technology, and policy giving the thinking teeth and legs.”
A key goal of ASPI, in Ekbia’s view, is to bring the broader population into the conversation about how emerging technologies are used and regulated. “The average user, as they say, does not have much of a voice so far,” he says. “Nobody comes and asks us what technology we’d like to have in our homes and offices and working spaces.”
GOVERNING AI
Bringing the public into the policymaking process is the focus of a major new research project by Baobao Zhang, who holds an M.A. in statistics and a Ph.D. in political science from Yale. Zhang is one of 15 scholars from across the U.S. chosen by the philanthropic organization Schmidt Futures to serve in the inaugural cohort of AI2050 Early Career Fellows.
The fellowship provides Zhang with up to $200,000 over two years for multidisciplinary research in artificial intelligence. For the project, Zhang is creating a mini-public of regular citizens to learn about a topic and make policy recommendations. She is working with the nonpartisan Center for New Democratic Processes to recruit a group of 40 participants, randomly selected from the U.S. adult population. Through a 40-hour process planned for this summer, this group will learn about AI systems from computer scientists, ethicists and social scientists and deliberate on how to classify risk from AI systems.
Navigating between the marketing hype about new technology and skepticism or alarm about it can be difficult for citizens and policymakers alike, Zhang says. She cites the example of large language models such as ChatGPT, which can generate remarkably cogent writing from a prompt but also false information—like providing a citation from a book that doesn’t exist.
“The question is, should we classify these large language models as high risk?” Zhang says. “A general-purpose AI system like ChatGPT can do many things; it can play chess with you or write a joke. But it can also generate spear phishing emails. There are also researchers trying to fine-tune it to give medical diagnoses, which is pretty high risk. So as more and more of these general-purpose AI systems come online, we need to think about risk differently. The technology can be used in many sectors where it’s not very risky, but in some cases, it can really cause a lot of harm if not used correctly.”
EXPANDING CURRICULUM
Along with fostering collaborative research, ASPI supports opportunities for undergraduate students to delve into the field through courses such as Using Robots to Understand the Mind, Introduction to Unmanned Aerial Vehicles, and Ethics of Emerging Technology. “It is important to shape the research agenda,” Winders notes. “But it’s as important to help produce the next generation of thought leaders in this area, who are excited about issues and also committed to the public good—who want to think about how to innovate in an equitable manner.”
Course offerings continue to grow. In the fall semester, Ekbia will introduce a course called AI and Humanity: Charting Possible Futures, designed as an introduction to the field for undergraduates with varying backgrounds—from the arts, engineering, and natural and social sciences to humanities, law and media.
A group of faculty connected with ASPI, led by Zhang, is also working toward introducing an undergraduate minor in artificial intelligence and public policy. The proposed minor would expand the curriculum with courses on topics such as governance and ethics of AI and the responsible design and auditing of algorithms, with the goal of equipping students with the technical and ethical skills to responsibly develop and deploy AI systems.
POLICY IMPACT
The growing body of work on emerging technologies by Maxwell scholars is helping frame issues and shaping policy beyond campus. For instance, Winders was invited to present at a White House summit on developing advanced air mobility systems that rely on automated or autonomous technologies.
Himmelreich, meanwhile, is studying the use of automated risk-scoring tools in unemployment insurance—where determinations about eligibility have a huge impact on individuals’ lives. And Benanav, assistant professor of sociology at Maxwell, argues that the mass job displacement by robots forecast a decade ago has not materialized, that there are good reasons to doubt the same predictions about AI chatbots, and that attention is better spent on using these tools equitably and ethically.
Alumni are applying their University instruction and experience to careers centered on emerging technology and its implications. Scott Renda ’05 M.A. (IR) held a series of technology advisory and policy development roles at the U.S. Department of Commerce, the U.S. Department of the Treasury and the Executive Office of the President at the White House. Kerstin Vignard ’96 M.A. (IR) is an international security policy professional whose 26-year career at the United Nations Institute for Disarmament Research included leading efforts to help governments develop international normative and regulatory frameworks for increasingly autonomous weapon systems. And Travis Mason ’06 B.A. (PSc), a member of the Maxwell Advisory Board, works in the field of autonomous aviation systems as the first-ever chief policy officer for Merlin Labs.
This story was edited for length and originally appeared in the Spring 2023 Maxwell Perspective magazine. The full version is available on the Perspective’s online companion.