
As AI weaponry enters the arms race, America is feeling very, very afraid


Opinion

Will technological advantages be enough for China to replace the US as the world’s AI superpower?

The Bible maintains that “the race is not to the swift, nor the battle to the strong”, but, as Damon Runyon used to say, “that is the way to bet”. As a species, we take the same view, which is why we are obsessed with “races”. Political journalism, for example, is mostly horserace coverage – runners and riders, favourites, outsiders, each-way bets, etc. And when we get into geopolitics and international relations we find a field obsessed with arms “races”.

In recent times, a new kind of weaponry – loosely called “AI” – has entered the race. In 2021, we belatedly discovered how worried the US government was about it. A National Security Commission on Artificial Intelligence was convened under the chairmanship of Eric Schmidt, the former chair of Google. In its report, issued in March of that year, the commission warned: that China could soon replace the US as the world’s “AI superpower”; that AI systems will be used (surprise, surprise!) in the “pursuit of power”; and that “AI will not stay in the domain of superpowers or the realm of science fiction”. It also urged President Biden to reject calls for a global ban on highly controversial AI-powered autonomous weapons, saying that China and Russia were unlikely to keep to any treaty they signed.

It was the strongest indication to date of the hegemonic anxiety gripping the US in the face of growing Chinese assertiveness on the global stage. It also explains why an open letter signed by many researchers calling on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4 (and adding that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium”) fell on deaf ears in Washington and Silicon Valley.


For a glimpse of the anxieties that grip the US, the first chapter of 2034: A Novel of the Next World War, co-authored by a thriller writer and a former US admiral, might be illuminating. An American carrier group in the South China Sea goes to the assistance of a Chinese fishing boat that is on fire. The boat turns out to have interesting electronic kit aboard. The Chinese demand the instant release of the vessel, at which point the Americans, who are not disposed to comply, discover that all of their electronic systems have gone blank and that they are surrounded by a group of Chinese warships of whose proximity they had been entirely unaware. This is what technological inferiority feels like if you’re a superpower.

The well-meaning but futile “pause” letter was motivated by fears that machine-learning technology had crossed a significant threshold on the path to AGI (artificial general intelligence), ie, superintelligent machines. This is only plausible if you believe – as some in the machine-learning world do – that massive expansion of LLMs (large language models) will eventually get us to AGI. And if that were to happen (so the panicky reasoning goes), it might be bad news for humanity, unless the machines were content to keep humans as pets.

For the foreign-policy establishment in Washington, though, the prospect that China might get to AGI before the US looks like an existential threat to American hegemony. The local tech giants who dominate the technology assiduously fan these existential fears. And so the world could be faced with a new “arms race” fuelled by future generations of the technology that brought us ChatGPT, with all the waste and corruption that such spending sprees bring in their wake.


This line of thinking is based on two pillars that look pretty shaky. The first is an article of faith; the second is a misconception about the nature of technological competition. The article of faith is a belief that accelerated expansion of machine-learning technology will eventually produce AGI. This looks like a pretty heroic assumption. As the philosopher Kenneth A Taylor pointed out before his untimely death, artificial intelligence research comes in two flavours: AI as engineering and AI as cognitive science. The emergence of LLMs and chatbots shows that significant progress has been made on the engineering side, but in the cognitive area we are still nowhere near equivalent breakthroughs. Yet that is where spectacular advances are needed if reasoning machines are to be a viable proposition.

The misconception is that there are clear winners in arms races. As Scott Alexander noted the other day, victories in such races tend to be fleeting, though sometimes a technological advantage may be enough to tip the balance in a conflict – as nuclear weapons were in 1946. But that was a binary situation, where one either had nukes or one didn’t. That wasn’t the case with other technologies – electricity, cars or even computers. Nor would it be the case with AGI, if we ever get to it. And at the moment we have enough trouble trying to manage the tech we have without obsessing about a speculative and distant future.

What I’ve been reading

Back to the future
Philip K Dick and the Fake Humans is a lovely Boston Review essay by Henry Farrell, arguing that we live in Philip K Dick’s future, not George Orwell’s or Aldous Huxley’s.


Image conscious
How Will AI Transform Photography? is a thought-provoking Aperture essay by Charlotte Kent.

The transformers
Nick St Pierre has posted a fascinating Twitter thread about how prompts change generative AI outputs.
