Few early observers of the Cold War could have imagined that the worst nuclear catastrophe of the era would occur at an obscure power facility in Ukraine. The fact that the world's superpowers were spiraling into an arms race of potentially world-ending magnitude tended to eclipse the less obvious dangers of what was, at the time, an experimental new technology. And yet despite hair-raising episodes such as the Cuban missile crisis of 1962, it was a failure of simple safety measures at Chernobyl in 1986, rooted in a flawed reactor design and a series of operator mistakes and exacerbated by authoritarian crisis bungling, that resulted in the uncontrolled release of 400 times the radiation emitted by the U.S. nuclear bomb dropped on Hiroshima in 1945. Estimates of the devastation from Chernobyl range from hundreds to tens of thousands of premature deaths from radiation, not to mention an "exclusion zone" that is twice the size of London and remains largely abandoned to this day.
As the world settles into a new era of rivalry—this time between China and the United States—competition over another revolutionary technology, artificial intelligence, has sparked a flurry of military and ethical concerns parallel to those raised by the nuclear race. Those concerns are well worth the attention they are receiving, and more: a world of autonomous weapons and machine-speed war could have devastating consequences for humanity. Beijing's use of AI tools to help perpetrate crimes against humanity targeting the Uyghur people in Xinjiang already amounts to a catastrophe.
But of equal concern should be the likelihood that AI engineers will inadvertently cause accidents with tragic consequences. Although AI systems do not explode like nuclear reactors, their far-reaching potential for destruction includes everything from the development of deadly new pathogens to the hacking of critical systems such as electrical grids and oil pipelines. Because of Beijing's lax approach to technological hazards and its chronic mismanagement of crises, the danger of AI accidents is most severe in China. A clear-eyed assessment of these risks—and the potential for spillover well beyond China's borders—should reshape how the AI sector considers the hazards of its work.
A DIFFERENT SENSE OF DANGER
Characterizing AI risk has been a matter of public debate in recent months, with some experts claiming that superhuman intelligence will someday pose an existential threat to humanity and others lambasting “AI doomers” for catastrophizing. But even putting aside the most extreme fears of an AI dystopia, previous incidents have provided plenty of reasons to worry about unintended large-scale calamities in the near term.
For instance, compounding machine-speed interactions between AI systems in finance could inadvertently crash markets, as algorithmic trading did in the 2010 "flash crash," which temporarily wiped out a trillion dollars' worth of stock value in minutes. When drug researchers repurposed an AI system to generate 40,000 candidate chemical weapons in less than six hours last year, they demonstrated how relatively simple AI systems can easily be adjusted to devastating effect. Sophisticated AI-powered cyberattacks could likewise go haywire, indiscriminately derailing critical systems that societies depend on, not unlike the infamous NotPetya attack, which Russia launched against Ukraine in 2017 but which eventually infected computers across the globe. Despite these warning signs, AI technology continues to advance at breakneck speed, allowing safety risks to multiply faster than safeguards can be devised.
Most Americans may not be well versed in the specifics of these risks but nonetheless recognize the dangers of building powerful new technologies into complex, consequential systems. According to an Ipsos survey published in 2022, only 35 percent of Americans believe that AI's benefits outweigh its risks, making the United States among the most pessimistic countries in the world about the technology's promise. Surveys of engineers in American AI labs suggest that they may be, if anything, even more safety-conscious than the broader public. Geoffrey Hinton, known as the "godfather of AI" and until recently a vice president at Google, quit the industry to urge that scientists refrain from scaling up AI technology "until they have understood whether they can control it."
China, by contrast, ranks as the most optimistic country in the world when it comes to AI, with nearly four out of five Chinese nationals professing faith in its benefits over its risks. Whereas the United States government and Silicon Valley are many years into a backlash against a “move fast and break things” mentality, China’s tech companies and government still pride themselves on embracing that ethos. Chinese technology leaders are enthusiastic about their government’s willingness to live with AI risks that, in the words of veteran AI expert and Chinese technology executive Kai-Fu Lee, would “scare away risk-sensitive American politicians.”
DISASTER AMNESIA
The disparity between Chinese and American perceptions of the hazards of AI—and their respective tech sectors' willingness to take risks—is no accident. It is the result of Chinese policies that systematically suppress public knowledge of disasters to shield the government from criticism.
In the United States, disasters tend to heighten public awareness and prompt stronger safety measures as their heart-rending consequences ripple through the media and society, whether in machinery-intensive industries such as oil drilling, in everyday food and drug production, or in the processing of dangerous chemicals. Even now, legislators in Ohio are making progress on new safety regulations in the wake of a fiery train derailment in February that sent a plume of toxic chemicals over the town of East Palestine.
But in China, these types of accidents rarely reverberate through the media as the state maintains a chokehold on information to promote a constant atmosphere of stability. The Chinese Communist Party smothers information when disaster responses are mismanaged and routinely falsifies death tolls. The government sometimes refuses to acknowledge, let alone report on, vast tragedies such as the mass radiation poisoning that resulted from at least 40 nuclear tests conducted between 1964 and 1996, which led to the premature deaths of nearly 200,000 citizens.
The result is a culture of disaster amnesia in which it is often impossible for the public to demand change or for the government to be forced to learn from costly accidents. With little accountability for mistakes, business owners tend to play fast and loose with safety, as evidenced by China's grisly history of industrial accidents. Even the rare instances in which mishaps are publicly exposed lack the staying power that might result in serious reform. For example, the public outcry over mass-produced toxic toothpaste in 2007, poisoned infant milk formula in 2008, and the collision of high-speed trains near Wenzhou in 2011 prompted well-publicized displays of scapegoating and loudly proclaimed government reform plans but had little lasting impact on public safety. The Chinese government often projects a facade of responsiveness but then buries information about the events, quite literally in the case of the now-underground remains of the Wenzhou train wreckage. Given that China's media ecosystem is far more restrictive under Xi Jinping than it was when these incidents occurred, public exposure is even less likely today.
With the worst run-ins with emerging technologies routinely excised from public consciousness, Chinese society exhibits a seemingly boundless sense of techno-optimism, especially toward new technologies such as AI. Because China's historic ascent from poverty went hand in hand with high-speed technological advancement, accelerated scientific research is practically synonymous with national progress in the Chinese zeitgeist and is viewed as having few, if any, downsides.
To see this full-steam-ahead approach in action, look no further than He Jiankui, the Chinese scientist who shocked the world in 2018 by genetically modifying human embryos in secret to produce the world's first gene-edited babies. The scientist expected, and initially received, high praise in China for his feat, but the government clumsily pulled an about-face in response to international outrage over his unilateral decision to push humanity into uncharted territory. Unsurprisingly, further examination showed that He had irreversibly botched his experiment, in what one geneticist called "a graphic demonstration of attempted gene editing gone awry." He not only likely failed to make the modified babies (and their potential offspring) HIV-resistant as intended but also potentially increased their susceptibility to influenza, cancer, and other diseases. After a stint in prison, He was released and has continued his research, even as new Chinese legislation provides loopholes for similarly ethically fraught and potentially lucrative genetic experimentation.
UNBRIDLED AMBITION
Not only are experimental technologies seen as largely risk-free in China, but the country has also committed itself to a feverish sprint to become “the world’s premier artificial intelligence innovation center” by 2030.
China's efforts to overtake the United States in AI have been a priority for the Communist Party since at least 2015, when Beijing unveiled its "Made in China 2025" strategy, and the emphasis on AI has since been reiterated in national planning documents and speeches. AI has become a linchpin of China's military modernization strategy and is increasingly integral to the country's system of state surveillance, repression, and control. With so much at stake, it is no surprise that China's government has been investing tens of billions of dollars annually in its AI sector and leveraging its vast espionage network to try to steal foreign corporate technology secrets.
China's AI frenzy is paying off. The country produces more top-tier AI engineers than any other nation—around 45 percent more than the United States, its closest competitor. It has also overtaken the United States in publishing high-quality AI research, accounting for nearly 30 percent of citations in AI journals globally in 2021, compared with 15 percent for the United States. This year, China is projected to overtake the United States in its share of the top one percent of the world's most-cited AI papers. As the U.S. National Security Commission on Artificial Intelligence warned, "China possesses the might, talent, and ambition to surpass the United States as the world's leader in AI in the next decade if current trends do not change."
Theorists have long worried that AI competition might initiate a race to the bottom on safety. But in competitions between major powers, established incumbents and ambitious challengers usually have vastly different levels of risk aversion, with the latter often demonstrating far more appetite for risk in a quest to rebalance perceived asymmetries.
Today’s AI sprint would not be the first time Beijing’s desire to hasten progress invited disaster. The Chinese leader Mao Zedong’s attempt to collectivize farms, melt down agricultural tools to feed industrial development, and turbocharge steel production in his so-called Great Leap Forward plunged China into the worst famine in human history, with an estimated 30 million people starving to death between 1959 and 1961. Later, China’s attempt to slam the brakes on population growth through its 1979 one-child policy—adjusted to a two-child policy only in 2015—led to widespread forced abortions and infanticide, a population imbalance of roughly 33 million more males than females, and a severe demographic aging crisis across the country.
Less encompassing acceleration efforts also catalyzed tragedy, as when China's 1990s rush to cash in on the commercial satellite launch industry ended with a rocket veering off course and crashing into a Chinese town in 1996, killing an unknown number of victims. Today, according to investigative reporting by The Wall Street Journal, hydroelectric plants, social housing complexes, and schools built around the world as part of China's ambitious Belt and Road Initiative are falling apart, imposing vast costs on already impoverished countries.
A TRAGIC TRACK RECORD
China’s drive to outdo the United States in AI capabilities has not yet produced any crises. But history suggests that if one occurred, Beijing’s response would be calamitous. Authoritarian states routinely mismanage emergencies, turning accidents into full-blown tragedies. Averting the worst outcomes depends on recognizing anomalies early on, especially those that might suggest bad news. But autocracies struggle to do that. There is no reason to expect anything different as the perils linked to AI take shape.
When worrisome developments arise in China, party officials are incentivized to suppress troubling information rather than risk their positions by reporting bad news to their superiors, beginning a vicious cycle that tends toward catastrophe. The famine caused by Mao's Great Leap Forward is a case in point: if farm collectivization and melting down tools set the stage for a crisis, it was the official cover-up of the early signs of danger that turned a bad harvest into mass starvation.
The same pattern recurs in contemporary China with unnerving frequency. Consider, for example, the layers of government obfuscation that led at least one million Chinese men, women, and children to contract HIV in the 1990s by selling blood or receiving transfusions of contaminated blood. Despite repeated early reports of the unfolding disaster, local officials aggressively suppressed evidence for years to protect their careers. Many of them were promoted even after the suppression and its effects became known.
The highly lethal 2002 SARS outbreak in China was likewise covered up by the Chinese government for about four months, even as the deadly virus infected more than 8,000 people around the world and killed 774. And despite investing $850 million in public health mechanisms specifically designed to ensure that a SARS-like cover-up did not happen again, the government followed a similar path in its response to the COVID-19 pandemic. Critical weeks elapsed between the first recorded COVID case in Wuhan in December 2019 and China’s acknowledgment of the risk of human-to-human transmission on January 20, 2020. By that time, seven million individuals had freely traveled from Wuhan to other locales, spreading the virus across the country and beyond.
During and after the COVID-19 outbreak, the Chinese government harassed and detained doctors and journalists who tried to bring life-saving information about the virus to light. At the same time, it lied to the World Health Organization, causing the WHO to fatally misadvise the rest of the world on the risks of COVID in the early weeks of the pandemic. To this day, the Chinese government refuses to offer the transparency needed to determine the origins of the virus, which may well have been the result of yet another Chinese high-tech accident.
BE AWARE, TAKE CARE
Those skeptical of the risks of a Chinese AI catastrophe might point to the government’s comparative willingness to regulate AI, or the fact that China still lags behind the United States in building the most sophisticated and therefore risky capabilities. But as with the rules governing the Chinese Internet, much of Beijing’s AI legislation aims only to ensure that companies remain subservient to the government and able to compete with their Western counterparts. These laws are also designed to suppress any information that threatens the regime.
And although it may be true that the United States leads in cutting-edge AI technology, it is also clear that clever tinkering with others’ systems can bring out some of their most dangerous and unanticipated capabilities, such as combining models for enhanced capacity to plan and execute chemical experiments. It is all too easy to steal, copy, or clone advanced AI models and tweak them, potentially stripping them of safety features. And if China’s technology sector is good at anything, it is quickly adapting others’ creations for maximum impact, a strategy that has been the engine of its growth for decades.
The potential for AI tragedy is hardly unique to China. As with any powerful new technology, disasters could strike anywhere, and smaller-scale incidents of accident and misuse are already occurring regularly around the world: self-driving car crashes, AI voice-cloning scams, and the misidentification of people by law enforcement agencies using facial recognition, to name a few. Given its leading role in pioneering new AI capabilities, the United States in particular confronts a high risk of costly failures. American companies and the government would be wise to bolster their own AI safety efforts, as many lawmakers are now realizing.
But from Chernobyl to COVID, history shows that the most acute risks of catastrophe stem from authoritarian states, which are far more prone to systemic missteps that exacerbate an initial mistake or accident. China’s blithe attitude toward technological risk, the government’s reckless ambition, and Beijing’s crisis mismanagement are all on a collision course with the escalating dangers of AI.
A variety of U.S. policy measures could help mitigate these risks. In recognition of the serious threat posed by easily weaponized AI research, industry and government could double down on restricting its commercial flow to China, including the transfers that occur through joint ventures and Chinese investments. American diplomacy could champion the establishment of global AI safety standards. The United States, in coordination with the international community, could also monitor potential safety concerns in advanced AI labs around the world with an eye toward crafting contingency plans in case of failures with spillover effects. There is precedent for doing so: the United States has been known to monitor the risks of accidents arising from dangerous behavior in Chinese biolabs, nuclear reactors, and space operations.
But the first step is rightly prioritizing the threat. As in the Cold War, weapons races and technological competition may attract a great deal of attention, but safety risks are equally worthy of concern, especially in authoritarian states. To avoid another Chernobyl-like calamity, policymakers should put the risk of an AI catastrophe in China at the top of the agenda.