
Chatbots are social media on steroids – trapping us in an even more tangled web | John Naughton


In the beginning was the internet, which was first switched on in January 1983 and designed from the outset as a platform for what became known as “permissionless innovation”. If you had a good idea that could be implemented using the network – and were smart enough to write the software to make it work – then the internet would do it for you, no questions asked.

In the early 1990s, the physicist Tim Berners-Lee used it as the foundation on which to build a new platform for permissionless innovation called the “world wide web”. The non-technical world discovered this new platform in 1993 and spent the next 30 years using it as the foundation on which to build lots of new things – online shopping, social media, Amazon, Google, blogging etc, etc. The web also enabled Wikipedia, an improbable project to create an encyclopedia that anyone, but anyone, could contribute to and edit, and which is now one of the wonders of the networked world.

In 2023, that same world woke up to discover that the web (particularly the unimaginable amounts of data it had produced) had enabled the creation of another new kind of platform for permissionless innovation. The posh name for it is “generative AI”, a portmanteau term that takes in large language models (LLMs) and their associated chatbots, plus other systems that can generate plausible images, videos and other creative outputs in response to textual prompts.

The non-technical world is now in the early stages of hysterical excitement, angst and irrational exuberance about this new platform. There’s a lot of philosophising about whether the technology poses an existential risk to humanity. It probably doesn’t; but it could be a serious threat to democracy, for reasons we will come to presently. The tech industry, as ever unconcerned about democracy, is now riven by different concerns, in particular acute Fomo. This has triggered a “Cambrian explosion” in which thousands of startups (egged on by their venture-capital funders) are racing to build new things on the foundations provided by generative AI models – models that the big tech companies have created at significant cost to the environment.


Given that existential risks are above the pay grade of newspaper columnists, let us focus instead on more immediate problems posed by LLMs. Chief among these is the fact that they sometimes make things up. Worse still, they always sound authoritative, even when they are flat-out wrong. For bad actors interested in misleading people online, this is a real bonus. Although many people are nowadays accustomed to being bombarded with propaganda on social media, at least some of it gets through whatever filters they employ to sort out truth from fiction. But what chatbots offer is not bombardment but one-to-one interaction with an apparently engaged and knowledgeable bot. In other words, social media on steroids. And with elections in at least two polarised democracies next year, this could be significant.

The other thing about chatbots is that they enable the effortless creation of “content” on an extraordinary scale. As James Vincent of the Verge puts it, “Given money and compute, AI systems – particularly the generative models currently in vogue – scale effortlessly. They produce text and images in abundance, and soon, music and video, too. Their output can potentially overrun or outcompete the platforms we rely on for news, information and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitical on the web today. These models are trained on strata of data laid down during the last web age, which they recreate imperfectly.”


Soon, though, the web might consist not only of what was there in the pre-AI era, but also of all the stuff created by current and future chatbots. Which raises the intriguing possibility of an online world populated by bots inhaling the textual exhaust of their mechanical peers, and a consequent spiral into the kind of runaway recursion that leaves programmers with a “stack overflow”!

In such circumstances, what should truth-seeking institutions do? Answer: look at what they are doing at Wikipedia. One of the most amazing things about that project is how far-sighted its community has been about the task of sorting cognitive wheat from chaff. In its early days, observers wondered why Wikipedia was building such an elaborate set of processes and tools for evaluating the quality of submissions. Now we know: they saw what was coming down the track. We could learn a thing or two from them now.

What I’ve been reading

AI saves the world
Marc Andreessen’s paean to “progress” is on his Substack – think of it as Dr Pangloss’s take on AI.

Breast cancer optimism
There is a gratifying paper in Nature about the “Huge leap in breast cancer survival rate”.

Brought to book
The Casual Ignominy of the Book Tours of Yore is a wonderful memoir by John Banville in Esquire magazine.


