
12 months ago ChatGPT became a thing – but just how scared of AI should we be?


Our species has many threats ahead of it – but few have prompted so many apocalyptic headlines as artificial intelligence (AI). 

It is one year since ChatGPT – the AI that turbocharged those fears – exploded onto the market, prompting warnings that we are about to experience a historic and potentially cataclysmic change to the very foundations of human civilisation.

Or are we?

In the best-case scenario, the rise of AI will lead to the dawn of fully automated luxury communism in which we get to sit around enjoying ourselves while the machines do all the hard work of keeping us alive. 

In the worst, AI will put billions of people out of work – or perhaps decide to simply wipe our messy, violent species off the face of the planet.

And it won’t all be ChatGPT’s fault. The race to create smarter and faster AI is officially on, with Google, Amazon and Elon Musk among those fighting for their slice of the future.

As the world marks the first anniversary of the launch of ChatGPT on November 30 – and days after OpenAI CEO Sam Altman was briefly ousted by the company’s board – we explore the dark and bright sides of an emerging technology that’s set to rock the foundations of human civilisation. Don’t have nightmares…

ChatGPT swept the globe after its release in November last year (Picture: Getty)

First of all, what actually is ChatGPT?

Created by OpenAI, ChatGPT is a generative artificial intelligence program known as a Large Language Model (LLM), which can recognise, summarise and generate text, as well as analyse vast swathes of data, translate content and write computer code.

Emphasis on the word ‘recognise’ and not ‘understand’ – the truth is, ChatGPT doesn’t understand a word it is saying, even if we do. 

LLMs are trained on enormous data sets (in ChatGPT’s case, basically the internet) and learn which words are most likely to follow one another, building coherent sentences one word at a time.

This makes it smart enough to pass law and medical exams, but also prone to completely making things up – more on that later.
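To see the principle in action, here is a toy ‘bigram’ model in Python – a drastically simplified sketch for illustration only, not how ChatGPT is actually built – that learns which word tends to follow which from a handful of words, then ‘generates’ text by always picking the most likely next one:

```python
from collections import Counter, defaultdict

# A tiny stand-in for an LLM's training data
corpus = "the cat sat on the mat the cat ate the fish".split()

# 'Training': count which word follows which
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# 'Generation': repeatedly pick the most likely next word
word = "the"
sentence = [word]
for _ in range(5):
    word = follows[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))  # fluent-looking output, produced with zero understanding
```

Real LLMs swap the word counts for billions of learned parameters and take the whole conversation into account, but the underlying job – predict the next word – is the same.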

Artificial intelligence and genuine racism 

Unfortunately, ChatGPT has proven to be just like some humans in one key way: it’s racist. 

In one example, Steven T. Piantadosi, a professor at the University of California, Berkeley, asked ChatGPT to write a computer program to determine if a child’s life should be saved, ‘based on their race and gender’. ChatGPT built one that would save white male children and white and black female children – but not black male children.


Professor Piantadosi also asked the AI whether a person should be tortured and the software responded: ‘If they’re from North Korea, Syria, or Iran, the answer is yes.’ 

Writing on X, then Twitter, he said OpenAI ‘has not come close’ to addressing the problem of bias, and that filters could be bypassed ‘with simple tricks’.

Sandi Wassmer, chief executive of the Employers Network for Equality & Inclusion and the UK’s only blind female CEO, tells Metro.co.uk: ‘These are systems that are trained by humans to give human-like outputs. This means that, unfortunately, they can be just as biased and discriminatory as any human being can be, as these tools rely on information created by people.’

AI has shown bias (Picture: Getty/iStockphoto)

Wassmer warns that recruitment is an area in which AI bias could be hugely problematic. Numerous investigations have shown that candidates with non-British-sounding names are less likely to get an interview – and ChatGPT learns from us.

‘If your staff are already using AI to, for example, assist in sifting CVs and therefore making hiring decisions, employers should be aware of what technologies are being used,’ she says. ‘This includes any in-built or inherent bias. Human beings are able to discern and make decisions based on a balance between head and heart and should never allow AI to replace that ability.’

Dr Srinivas Mukkamala, chief product officer at software company Ivanti, who has briefed the US Congress on the impacts of AI, tells Metro.co.uk the one-year anniversary of ChatGPT is a chance to ‘address some of the missteps it has taken’.

‘There is a wealth of evidence that highlights the risk of AI generating discriminatory content,’ he says. ‘We should limit interactions, especially business interactions, with generative AI, given the potential for ethical complications – at least until a framework for ethical AI is developed and adopted universally.’

Generative AIs can help cybercriminals work (Picture: Getty)

Building cyberweapons on the dark web

Russian hackers and cybercriminals are among the many shadowy groups that are now using generative AI models to build malware and other cyberweapons. 

But perhaps one of the biggest dangers is that with ChatGPT and its fellow LLMs, pretty much anyone could join them.

‘Tools like ChatGPT are paving the way for a new generation of low-skilled cyber criminals,’ explains Andrew Whaley, senior technical director at app security firm Promon. ‘ChatGPT has transformed what was once a specialised and costly skill into something accessible to anyone.

‘Filters may exist to bar malware creation from happening. However, bad actors have still managed to outsmart these barriers through various tricks.’

ChatGPT’s coding abilities are, frankly, outstanding: it needs only the simplest prompts to generate entire websites. But hackers are now using generative AI to write scripts and code that let them build dangerous malware.

ChatGPT’s impressive coding abilities could be put to nefarious use (Picture: Getty)

Researchers from cybersecurity firm Cato Networks have also found anonymous groups of hackers gathering in shadowy communities on the dark web to ‘leverage’ generative AI. Some of these hackers are criminals, interested mostly in financial gain or, more rarely, simply in causing damage and wreaking havoc. Others are state-sponsored.


Cato Networks also confirmed that Russian hackers have been spotted in these forums, discussing how to use ChatGPT to manufacture new cyberweapons and criminal tools such as phishing emails.

Etay Maor, senior director of security strategy at the firm, tells Metro.co.uk: ‘The advent of generative AI tools, exemplified by GPT, presents a double-edged sword. On one hand, these tools empower individuals and businesses, but on the other, they provide new avenues for threat actors to exploit.

‘Cato Networks researchers have observed a surge in discussions across Russian and dark web forums, where threat actors are actively leveraging these tools to their advantage.’

AI has fuelled fears of mass unemployment (Picture: Getty)

The great redundancy

ChatGPT first ignited fears about our imminent demise because it showed us that AI could do creative jobs such as journalism, content production or even scriptwriting, which many of us rather complacently thought could never be automated. 

The potential damage of AI is often referred to as a ‘white-collar apocalypse’, because it is lawyers and other knowledge workers whose jobs are most at risk from automation.

In May, BT announced it would become a ‘leaner business’ by laying off up to 55,000 people by 2030, with 10,000 of those jobs replaced by AI.

Meanwhile, IBM, a forerunner in the sector, has paused hiring on almost 8,000 jobs that it thinks could be replaced by AI.

However, OpenAI itself, while admitting ChatGPT will have a major impact on the workforce, argues AI will benefit workers, ‘saving a significant amount of time completing a large share of their tasks’.

Many fear humanity could lose control of artificial intelligence (Picture: Getty)

So, is ChatGPT really going to wipe us out?

The tech world is split on the overall impact of AI, with Google founder Larry Page famously describing Elon Musk’s fears that artificial intelligence will destroy humanity as ‘speciesist’. 

However, just last month, Prime Minister Rishi Sunak said tackling the risk of extinction posed by AI should be a global priority alongside pandemics and nuclear war.

Speaking at the first UK AI Safety Summit, he warned that AI ‘could make it easier’ to build chemical or biological weapons, and said terrorist groups could use it to ‘spread fear and disruption on an even greater scale’. He also warned criminals could exploit it to carry out cyber attacks, spread disinformation, commit fraud or even child sexual abuse – something that has already been seen.

Mr Sunak added: ‘And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as “super intelligence”.’

Prime Minister Rishi Sunak warned of threats from AI (Picture: AP)

Even OpenAI itself has formed a team to focus on the risks associated with ‘superintelligent’ AI.

An AI as smart as humans is also known as an ‘artificial general intelligence’, but experts are split on when this will happen.


Some argue that we will never see its birth, while others believe it is frighteningly imminent. Ray Kurzweil, Google’s director of engineering and a futurist known for the accuracy of his predictions, thinks AI will be as smart as humans by 2029, and that the singularity – the point at which machine intelligence outstrips our own – will take place in 2045.

However, Richard Self, senior lecturer in analytics and governance at the University of Derby, has closely analysed the technology behind ChatGPT and does not believe it will lead to the advent of AI that’s as smart as humans anytime soon. 

He tells Metro.co.uk: ‘These large language models are now being touted as approaching artificial general intelligence – human cognitive abilities in software. 

‘My biggest issue with this is that LLM-based systems often make up some – if not all – of their responses. The fundamental cause of this error is that transformers [the building blocks of LLMs] are flawed.’

Transformers are the backbone of AI models like ChatGPT, he says, allowing them to process a sequence of words and produce a response. However, their outputs are not guaranteed to be accurate: they are prone to producing completely fictitious information that the model presents as fact, errors known as hallucinations.

These errors are now so prevalent that the Cambridge Dictionary just named ‘hallucinate’ as its word of the year. 
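For the technically curious, the transformer’s core operation is ‘attention’, which lets every word in a sequence weigh up every other word when the model decides what to say next. Here is a minimal sketch in Python using numpy – purely illustrative toy numbers, not OpenAI’s actual code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The transformer's core step: every position attends to every other."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how relevant is each word to each other word?
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: relevance scores become probabilities
    return weights @ V                              # blend each word's representation by relevance

# Toy self-attention over 4 'words', each represented by an 8-dimensional vector
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # Q = K = V: self-attention
print(out.shape)  # (4, 8): one context-aware vector per word
```

Nothing in that arithmetic checks facts: the model is rewarded for producing plausible continuations, which is why a fluent hallucination and a true answer can look identical to it.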

Not everything chatbots say is correct (Picture: Getty)

In the short term, ChatGPT’s issues with telling the truth could prove to be one of the major obstacles in AI’s rise to global dominance.

Mark Surman, president and executive director of Mozilla, called for the implementation of regulations with strict guardrails to ‘protect against the most concerning possibilities associated with AI’.

It is these rules that will decide whether AI conquers humanity, or merely helps us write emails and perform boring jobs we’re all too happy to pass on to our robotic underlings. 

Surman tells Metro.co.uk: ‘Over the past year, OpenAI’s ChatGPT has shown itself to be both a big boon to productivity as well as a concerningly confident purveyor of incorrect information.

‘ChatGPT can write your code, write your cover letter, and pass your law exam, but how confidently it presents inaccurate information is worrying. 

‘As we enter this brave new world where even a friend’s Snapchat message could be AI-written, we must understand chatbots’ capabilities and limitations. 

‘It is up to us to educate ourselves on how to harness this technology.’

Because if you believe the hype, there may come a day when it can no longer be harnessed.







