
Society risks being ‘torn apart’ as Artificial Intelligence concerns ramp up



“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” read an open letter penned by the Future of Life Institute last month, signed by the likes of Elon Musk.

It was in reaction to the release of GPT-4, a powerful artificial intelligence (AI) model developed by OpenAI to power its ChatGPT chatbot. Such was the immediate risk to humanity, the letter said, that research into these technologies should be paused for six months.

“Should we let machines flood our information channels with propaganda and untruth?” the letter concluded. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilisation?”

What worries experts most is the speed at which technologies like GPT-4 have developed: a lifetime of advances compressed into a few years.

The world of AI is split on whether it truly poses such a threat to humans, and in what context. Some say it threatens our jobs; others say it threatens our lives. What is certain is that AI risks tearing us apart, in one sense or another.



Elon Musk and other industry leaders signed the open letter calling for AI research to be paused (Image: GETTY)

Nothing new

The earliest successful AI program was written in 1951 by Christopher Strachey — building on the theory of Alan Turing — and by the summer of 1952, his program could play a complete game of checkers at a fair speed on the Ferranti Mark I computer.

Those days are a world away. Today, AI is used for much more than playing games.

“It’s mind-blowingly impressive,” said Dr Justin E Lane, CEO of CulturePulse and a leading academic in AI.

Dr Lane uses AI in all sorts of ways to boost his business, while also inventing new uses and applications for the technology. He most recently used it to aid his academic research, applying technology created by his company to sift through 50 million articles related to the Good Friday Agreement.

The program concluded that the agreement “failed to address underlying issues surrounding justice and legacy” in a task that would have taken humans years and cost tens of thousands of pounds.


It proved its worth, helping to deepen understanding of a defining issue of our times. But it also made a researcher’s job redundant.

More worrying is AI’s inability to pick up on human nuances, many of which are crucial to understanding the issues at hand.



AI has already automated many jobs, as at this coffee shop in Warsaw, Poland (Image: GETTY)

“Artificial intelligence isn’t able to reflect on itself,” Dr Lane said. “ChatGPT, for example, cannot reliably tell you how many characters are in its last reply, so it has no self-reflection, even in very basic tasks of counting the number of letters that a user has typed in.”

GPT-4 generates its answers by drawing on vast amounts of text gathered from across the internet. It doesn’t know why it is doing what it’s doing. But many say this lack of self-reflection could soon come to an end.
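Dr Lane’s point is easy to test for yourself. Below is a minimal sketch of the character-counting experiment, assuming the openai Python package (v1 or later) and an API key set in the environment; the model name and prompt are illustrative, not part of his methodology.

```python
# Minimal sketch of the self-reflection test Dr Lane describes.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name is illustrative; any chat model can be substituted.
from openai import OpenAI

client = OpenAI()

# Step 1: get an arbitrary reply from the model.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write one sentence about coral reefs."}],
).choices[0].message.content

# Step 2: ask the model to count the characters in its own reply.
claimed = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "assistant", "content": reply},
        {"role": "user", "content": "How many characters were in your last reply? Reply with a number only."},
    ],
).choices[0].message.content

print(f"Model's own count: {claimed}")
print(f"Actual count:      {len(reply)}")  # the two rarely agree
```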

Google DeepMind’s chief Demis Hassabis this month said there is a “possibility” of AI gaining self-awareness “one day”.

Last year, Blake Lemoine, a software engineer at Google, was fired after claiming the company’s LaMDA AI chatbot had become sentient and self-aware, calling it a person. AI experts dismissed this: the chatbot had simply been programmed and trained to use language in similar ways to humans, they said.

Dr Lane isn’t so much worried about this aspect of AI as the opportunity for it to be weaponised by governments. “In fact, I think it already has been weaponised,” he said. “Not only in the literal sense of military weapons but in the misinformation space, too.

“The information warfare that is kind of the defining feature of the current Cold War is really already being weaponised — it’s been happening for several years, but we’re only now starting to get worried about it.”

In this context, he fears that if AI is given a greater role in our everyday lives, “we begin to put it in a greater role in the way our lives are controlled […] it has the potential to tear society apart.”


ChatGPT caught many by surprise, its breakthrough technology developed in a matter of years (Image: GETTY)

Real or fake?

In 2020, videos of high-profile figures, politicians, and celebrities saying wild and controversial things flooded the internet.


One heard former US President Barack Obama call Donald Trump a “complete dipshit”. Another saw Mark Zuckerberg bragging about having “total control of billions of people’s stolen data”.

The videos were deepfakes: AI-generated forgeries that fabricate events by manipulating real footage. They were at the time flagged as posing one of the greatest risks to society, their destructive potential seemingly unlimited.

That year, David Sancho, Senior Antivirus Researcher at Trend Micro, and Vincenzo Ciancaglini, Senior Threat Researcher at the same company, wrote a paper with Europol and the United Nations Interregional Crime and Justice Research Institute on how to counter malicious deepfake videos that posed a risk to society. “Within a week, the paper was old because GPT-3 was released,” Mr Ciancaglini told Express.co.uk.

Its introduction opened up a new world for criminals working online to commit all sorts of fraud and fakery.

The simplest instance is translation scams. Up until now, criminals have focused on certain areas of fraudulent activities because of language barriers. “The classic example is your Nigerian prince scam that was so easy to spot because you would receive these very badly written emails asking you for money,” Mr Ciancaglini said.

“But now, that same prince can write you an email in very well-crafted English. He can write an email in German, he can write an email in French. He can translate it instantly.

“Not only that, one of the first demos of GPT-3 back then was showing how you can even translate jargon, translate from one jargon to another. It means you can take a text from legalese, some very complicated legal text, and make it understandable to a five-year-old and vice versa.

“You can take a message for a spear phishing email and you can actually have it rewritten so that it uses the jargon for the specific domain. If you are targeting some executive in oil and gas or the energy sector, these technical sectors use a specific vocabulary which in the past made it easy to spot an outsider. Now, you have a tool to help you sound like somebody in the field.”
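The legalese demo Mr Ciancaglini mentions is simple to reproduce. Here is a minimal sketch, again assuming the openai Python package (v1 or later) and an API key; the model name and the sample clause are illustrative assumptions, not taken from the original GPT-3 demo.

```python
# Minimal sketch of the legalese-to-plain-English rewriting described above.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

# An illustrative clause, not taken from the original demo.
LEGALESE = (
    "The party of the first part shall indemnify and hold harmless the "
    "party of the second part against any and all claims arising herefrom."
)

client = OpenAI()
plain = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"Rewrite the following so a five-year-old could understand it:\n{LEGALESE}",
    }],
).choices[0].message.content

print(plain)  # e.g. "The first person promises to protect the second person if anything goes wrong."
```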

They say the likes of ChatGPT have “lowered the barrier” for criminals to enter the online world. “Now it’s possible for any criminal to simply tell ChatGPT to put together a program that does this, this and that,” Mr Sancho told Express.co.uk.


“And if the intentions are malicious, it returns perfectly working code that takes no time to just compile and start using. The barrier is just ridiculously low now.”

Both experts are cautious when predicting where the technology might go next and how criminals will use it. Such is the speed at which it has progressed that anything could happen, they say.


Alan Turing theorised artificial intelligence in 1935 (Image: GETTY)

Could it save the planet?

Emphasis has been placed on how AI risks tearing us apart. But there are people quietly using the technology as a force for good.

Sokol Murturi, an assistant lecturer at Goldsmiths, University of London, is using AI to help grow corals in captivity. With rising temperatures, the marine invertebrates risk becoming extinct. “It’s a big problem, we’re losing the Great Barrier Reef. We’re losing coral everywhere,” he told Express.co.uk.

By using AI, Mr Murturi has been able to track water parameters inside aquariums housing corals and advise marine biologists on how to keep corals healthy in captivity.
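The article gives no detail of Mr Murturi’s system, but a rule-based monitor of the kind he describes might look like the following minimal sketch; the parameters, healthy ranges, and alert format are all illustrative assumptions.

```python
# Minimal sketch of automated water-parameter monitoring for coral tanks.
# The readings, healthy ranges, and alert format are illustrative assumptions,
# not details of Mr Murturi's actual system.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float   # reef tanks typically sit around 24-28 C
    ph: float              # healthy reef pH is roughly 8.0-8.4
    salinity_ppt: float    # roughly 35 parts per thousand

HEALTHY_RANGES = {
    "temperature_c": (24.0, 28.0),
    "ph": (7.9, 8.5),
    "salinity_ppt": (33.0, 36.0),
}

def check(reading: Reading) -> list[str]:
    """Return a warning for each parameter outside its healthy range."""
    warnings = []
    for name, (low, high) in HEALTHY_RANGES.items():
        value = getattr(reading, name)
        if not low <= value <= high:
            warnings.append(f"{name} = {value} is outside [{low}, {high}]")
    return warnings

# A tank running slightly hot triggers a single temperature warning.
print(check(Reading(temperature_c=29.1, ph=8.2, salinity_ppt=35.0)))
# -> ['temperature_c = 29.1 is outside [24.0, 28.0]']
```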

He explained: “It’s a really different approach to the use of AI than what’s traditionally been used. In this way, AI can help level the playing field.”

In his eyes, AI offers humans a way to solve age-old problems creatively and with ease: “The wonderful thing is people have beautiful ideas, and AI can help bring those ideas to life by removing the mundane tasks such as, in my case, monitoring the water parameters of the coral — that’s a very boring job.”

“AI can help you in a creative task: it can provide you with an idea that you may not have thought of originally, but it’s your job to take that original inspiration and turn it into something valuable and real,” he said.

“The AI can’t do that for you — humans are the creatives. The AI itself cannot take over the role of a creative or a professional or an educated individual. So the ultimate goal is having AI inform us on our decision making rather than having AI automate all of this decision making for us.”





