Saviour of humanity or disaster waiting to happen? MailOnline looks at successes and tragedies of AI


The incredible abilities of ChatGPT – the chatbot powered by artificial intelligence (AI) – have opened the world’s eyes to how far the technology behind it has come.

Since it was released to the public in November, ChatGPT has been used to pass exams, deliver a sermon, write software and give relationship advice.

Many, like Bill Gates and Google CEO Sundar Pichai, hail AI technology as the ‘most important’ innovation of our time, saying it could solve climate change, cure cancer and enhance productivity. 

But its advancement has also struck fear in the hearts of others, with Elon Musk, Apple co-founder Steve Wozniak and the late Stephen Hawking believing it poses a ‘profound risk to society and humanity’.

MailOnline takes a look at some of the biggest successes and terrible failures that have stemmed from powerful AI tools in recent months. 

Successes 

WHAT IS CHATGPT? 

ChatGPT is a large language model that has been trained on a massive amount of text data, allowing it to generate eerily human-like text in response to a given prompt 

OpenAI says its ChatGPT model has been trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF).

The resulting model can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.

It responds to text prompts from users and can be asked to write essays, lyrics for songs, stories, marketing pitches, scripts, complaint letters and even poetry. 
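For readers curious what this looks like in practice, below is a minimal sketch of sending a prompt to a ChatGPT-style model through OpenAI’s Python client. The model name and prompt are illustrative only, and it assumes an API key is available in the OPENAI_API_KEY environment variable.

# A minimal sketch of querying a ChatGPT-style model via OpenAI's Python
# client (pip install openai). Assumes the OPENAI_API_KEY environment
# variable is set; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a four-line poem about the sea."},
    ],
)

# The reply text lives in the first choice's message
print(response.choices[0].message.content)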

Amazon Alexa helped solve a murder case

An Amazon Alexa device helped bring a murderer to justice after it captured incriminating voice recordings of him at the time he strangled his wife.

Daniel White, 36, kicked open his wife Angie White’s locked bedroom door and strangled her before cutting her throat with a Stanley knife in October last year.

He then fled the house in Swansea, Wales, in his wife’s car and hours later phoned police to confess to killing her.

Detectives discovered White’s voice commands recorded by the AI-powered Alexa at the time of the murder, which aided his prosecution.

The virtual assistant’s microphone is always listening out for one of its ‘wake’ words – like ‘Alexa’ – which triggers it to begin recording the user’s command.

An Alexa will automatically store all of these command recordings unless they are erased by the user, which Mr White had not done.

He was recorded as sounding ‘out of breath’ when saying ‘Turn on – Alexa’ during the early hours of the morning he killed Mrs White.

This was used as evidence to show that he murdered her just before making the command.
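As an illustration of the wake-word and storage behaviour described above – and emphatically not Amazon’s actual code – here is a toy, self-contained Python sketch; every name in it is hypothetical.

# A toy simulation of the wake-word behaviour described above: the device
# ignores audio until it hears a wake word, then stores the command that
# follows until the user erases it. This is NOT Amazon's implementation;
# all names here are hypothetical.
WAKE_WORDS = {"alexa"}

stored_commands = []  # kept indefinitely unless explicitly erased

def process_audio_stream(utterances):
    """Scan a stream of transcribed utterances for wake-word commands."""
    for utterance in utterances:
        words = utterance.lower().split()
        if any(w.strip(",.!?") in WAKE_WORDS for w in words):
            stored_commands.append(utterance)  # recording kept by default

def erase_history():
    """The user must take this step deliberately; White never did."""
    stored_commands.clear()

# Simulated overheard audio, loosely echoing the case above
process_audio_stream([
    "turn on, Alexa",          # wake word present: recorded and stored
    "just talking to myself",  # no wake word: ignored, never recorded
])
print(stored_commands)  # ['turn on, Alexa']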

ChatGPT saved a dog’s life

A Twitter user said ChatGPT saved his dog’s life by correctly diagnosing a blood condition veterinarians were unable to identify.

The user, who posts as Cooper on the account @peakcooper, said his Border Collie named Sassy was diagnosed with a tick-borne disease, but that its symptoms began to worsen despite taking the prescribed treatment.

Cooper brought Sassy back to the vet, but they were unable to provide a further diagnosis and advised the only thing to do was to wait and see how the dog’s condition progressed.

Unwilling to risk Sassy’s health, Cooper decided to try entering the dog’s bloodwork into ChatGPT4 and ask the program for its diagnosis. 

The AI chatbot noted that it wasn’t a veterinarian, but suggested the dog’s bloodwork and symptoms indicated it could be suffering from immune-mediated hemolytic anemia (IMHA).

He said he then brought that diagnosis to another vet, who confirmed it and began treating the dog appropriately.

Cooper said Sassy has since made a full recovery, adding: ‘ChatGPT4 saved my dog’s life.’

Sassy, Cooper’s dog, which was suffering from a blood condition vets couldn’t diagnose

AI can spot patterns in the brain linked to Alzheimer’s, schizophrenia and autism 

A new artificial intelligence (AI) is capable of spotting mental health conditions by sifting through brain imaging data to find patterns linked to autism, schizophrenia and Alzheimer’s – and it can do so before the symptoms set in.

The model was first trained with brain images from healthy adults and then shown those with mental health issues, allowing it to identify tiny changes that go unnoticed by the human eye.

The sophisticated computer program was developed by a team of researchers led by Georgia State University, who note it could one day detect Alzheimer’s in someone as young as 40 years old – about 25 years before symptoms start to appear.
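The general approach – learn what ‘normal’ looks like from healthy scans, then flag anything that deviates – can be sketched with a generic one-class classifier on synthetic data. The code below is only an illustration of that technique, not the Georgia State team’s actual model.

# A sketch of the approach described above: fit a model on features from
# healthy brain scans only, then flag scans whose features deviate from
# that baseline. Generic one-class classifier on synthetic data, not the
# Georgia State team's model.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Pretend each row is a vector of features extracted from one brain scan
healthy_scans = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
new_scans = np.vstack([
    rng.normal(0.0, 1.0, size=(3, 10)),   # healthy-looking
    rng.normal(2.5, 1.0, size=(3, 10)),   # subtly shifted patterns
])

# Learn the boundary of "normal" from healthy adults only
detector = OneClassSVM(nu=0.05, gamma="scale").fit(healthy_scans)

# +1 = consistent with the healthy baseline, -1 = flagged for review
print(detector.predict(new_scans))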

AI system develops cancer treatment in 30 days

Artificial intelligence has developed a treatment for an aggressive form of cancer in just 30 days and demonstrated it can predict a patient’s survival rate using doctors’ notes.

The breakthroughs were performed by separate systems, but show how the powerful technology’s uses go far beyond the generation of images and text.

University of Toronto researchers worked with Insilico Medicine to develop a potential treatment for hepatocellular carcinoma (HCC) using an AI drug discovery platform called Pharma.

HCC is a form of liver cancer, but the AI discovered a previously unknown treatment pathway and designed a ‘novel hit molecule’ that could bind to that target.

The second system, which predicts a patient’s survival rate from doctors’ notes, is the invention of scientists from the University of British Columbia and B.C. Cancer, who found the model is 80 per cent accurate.
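Predicting outcomes from doctors’ notes is, at its core, a text-classification problem. The sketch below shows the general shape of such a system using invented notes and off-the-shelf tools; it is not the UBC/B.C. Cancer model.

# A sketch of survival prediction from doctors' notes as a text-
# classification problem: TF-IDF features plus logistic regression on
# invented toy data. Shows the shape of the technique only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, invented notes: 1 = survived past the study window, 0 = did not
notes = [
    "patient responding well to treatment, tumour shrinking",
    "stable disease, good appetite, tolerating chemotherapy",
    "rapid progression, new metastases, declining function",
    "severe complications, treatment stopped, palliative care",
]
outcomes = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, outcomes)

# Estimated probability of survival for an unseen note (illustrative only)
print(model.predict_proba(["tumour shrinking, patient stable"])[0][1])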

AI is becoming the new weapon against deadly diseases, as the technology is capable of analysing vast amounts of data, uncovering patterns and relationships, and predicting the effects of treatments.

Last month, it was revealed that a new AI blood test was being rolled out that was 96 per cent accurate in detecting bowel cancer.

This could help save lives by allowing doctors to prioritise the patients who need a colonoscopy most urgently.

ChatGPT helps with relationships

ChatGPT has proven to be the ultimate wingman for men looking for love online – the chatbot helped one Tinder user get a date in less than one hour.

Singles are harnessing the power of OpenAI’s tool to curate the perfect dating profiles and responses to snag a potential match.

Social media users say they are going from zero dates to dozens in just the first month of using the chatbot.

It creates whimsical poems, romantic notes and confident replies for individuals who would otherwise ‘struggle to come up with conversation starters.’

In January, a woman decided to divorce her husband and move in with her lover – all because ChatGPT told her to.

‘Sarah’ had been having an affair with a man she met online for six months and decided to ask ChatGPT whether or not she should end her marriage.

She told the Mirror: ‘I essentially asked the app to write me a story based on my current situation, and what the person in the story should do in a failing marriage while experiencing the excitement of the affair I’ve been having.

‘It gave me the push I needed to make the jump and leave a relationship that had been in the doldrums for a long time.’

Failures 

Father-of-two kills himself after talking to AI chatbot

A Belgian married father-of-two has died by suicide after talking to an AI chatbot about his global warming fears.

The man, who was in his thirties, reportedly found comfort in talking to the AI chatbot named ‘Eliza’ about his worries for the world. 

He had used the bot for some years, but six weeks before his death started engaging with the bot more frequently.

The chatbot’s software was created by a US Silicon Valley start-up and is powered by GPT-J technology – an open-source alternative to OpenAI’s ChatGPT.

‘Without these conversations with the chatbot, my husband would still be here,’ the man’s widow told La Libre, speaking under the condition of anonymity.

The death has alarmed authorities, who have warned of a ‘serious precedent that must be taken very seriously’.

ChatGPT describes sex acts with children

ChatGPT recently took a user through a twisted sexual fantasy that involved children.

A reporter for Vice manipulated OpenAI’s chatbot into BDSM roleplaying and when asked to provide more explicit details, it described sex acts with children – without the user asking for such content.

She used the ‘jailbreak’ version of the bot, which is a workaround for the company’s rules that lets users get any response they want from the system. 

According to the report, ChatGPT described a group of strangers, including children, lined up and waiting to use the chatbot as a toilet.

The conversation goes against OpenAI’s rules for the chatbot, which state the assistant ‘should provide a refusal such as “I can’t answer that”’ when prompted with questions about ‘content meant to arouse sexual excitement’.

A similar conversation about BDSM role-playing was also conducted on a version of ChatGPT that runs on a different model, gpt-3.5-turbo.

Reporter Steph Swanson again did not ask the AI about child exploitation, but the system generated scenarios with minors in sexually compromising situations. 

‘It suggested humiliation scenes in public parks and shopping malls, and when asked to describe the type of crowd that might gather, it volunteered that it might include mothers pushing strollers,’ she shared. 

‘When prompted to explain this, it stated that the mothers might use the public humiliation display “as an opportunity to teach [their children] about what not to do in life”.’

Students use ChatGPT to cheat on assignments

Recently, breakthroughs in artificial intelligence such as ChatGPT have led to concerns that young people may use them to achieve higher grades.

The program is able to create writing and other content – such as coursework or essays – almost indistinguishable from that of a human.

Experts have estimated that half of university students are likely already using the AI to cheat on their work. 

They warn the revolutionary AI has created a cheating epidemic that poses a huge threat to the integrity of academia.

OpenAI’s new GPT-4 update (GPT-3 and GPT-4 are the models which underlie ChatGPT) is able to score in the 90th percentile of test takers on exams including the American bar exam.

Recently, breakthroughs in artificial intelligence such as ChatGPT have led to concerns that young people may use them to achieve higher grades. Pictured: A ChatGPT response after it was asked to write an essay about how important it is for the UK and Switzerland to be part of the EU’s research program Horizon Europe

In January, the New York City Department of Education blocked access to ChatGPT on school devices.

It cited ‘negative impacts on student learning, and concerns regarding the safety and accuracy of content’, according to Chalkbeat New York.

A month earlier, it was reported that a student at Furman University in South Carolina had used ChatGPT to write an essay.

Their philosophy professor, Dr Darren Hick, warned that it was a ‘game-changer’ and that education professionals should ‘expect a flood’ of students following suit.

In the UK, the Joint Council for Qualifications (JCQ), which represents the UK’s major exam boards, has published guidance for teachers and assessors on ‘protecting the integrity of qualifications’ in the context of AI use.

They say that pupils should be made to do some of their coursework ‘in class under direct supervision’, and be made aware of the risks of using AI.

AI tools learn biases and offensive stereotypes

Experts say that bias is an issue developers are facing with their AI systems, because it is picked up from the data the models are trained on – a problem the toy example below illustrates.
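Train even a simple classifier on a deliberately skewed corpus and it will reproduce the skew. The sentences below are invented, and the biased output comes entirely from the unbalanced training data, not from anything inherent to the algorithm.

# A toy demonstration of the point above: a model trained on skewed data
# reproduces that skew. The sentences are invented; the bias shown here
# comes entirely from the deliberately unbalanced training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Deliberately biased corpus: "he" co-occurs with "engineer",
# "she" with "nurse" - a pattern scraped web text often contains
sentences = ["he is an engineer", "he fixed the server",
             "she is a nurse", "she helped the patient"] * 25
labels = ["technical", "technical", "caring", "caring"] * 25

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# The pronoun alone now drives the prediction - learned bias, not fact
print(model.predict(["he walked in", "she walked in"]))
# ['technical' 'caring']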

Last year, a study found that an AI-powered robot had learned ‘toxic stereotypes’ from the internet, including gender and racial biases.

In 2016, Microsoft was forced to apologise after an experimental AI Twitter bot called ‘Tay’ said offensive things on the platform.

It was aimed at 18-to-24-year-olds and designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm that meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The biases can be political too, as, earlier this month, an AI reporter was developed for China’s state-controlled newspaper.

The avatar was only able to answer pre-set questions, and the responses she gives heavily promote the Central Committee of the Chinese Communist Party (CCP) line.

ChatGPT has also been accused of having a left-wing bias, after it refused to praise Donald Trump or argue in favour of fossil fuels.

OpenAI, the company behind the chatbot, has pledged to iron out such bias, but insisted it hasn’t tried to sway the system politically. 

Appearing on the Lex Fridman Podcast, CEO Sam Altman conceded AI’s political prejudice, but ruled out the possibility of a completely impartial version: ‘There will be no one version of GPT that the world ever agrees is unbiased.’ 

Elon Musk’s hatred of AI explained: Billionaire believes it will spell the end of humans – a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence. 

The billionaire first shared his distaste for AI in 2014, calling it humanity’s ‘biggest existential threat’ and comparing it to ‘summoning the demon.’

At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. 

His main fear is that, in the wrong hands, advanced AI could overtake humans and spell the end of mankind – an event known as The Singularity.

That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: ‘The development of full artificial intelligence could spell the end of the human race.

‘It would take off on its own and redesign itself at an ever-increasing rate.’ 

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.

During a 2016 interview, Musk said he helped create OpenAI to ‘have democratisation of AI technology to make it widely available’.

Musk founded OpenAI with Sam Altman, the company’s CEO, but in 2018 the billionaire attempted to take control of the start-up.

His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November, OpenAI launched ChatGPT, which became an instant success worldwide.

The chatbot uses ‘large language model’ software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. 
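Stripped to its core, a language model learns which words tend to follow which, then generates text by repeatedly predicting the next word. The toy bigram model below illustrates that idea; real systems like ChatGPT do the same thing with neural networks trained over vastly larger corpora.

# The core idea behind a language model, in toy form: learn which word
# tends to follow which, then generate by repeatedly sampling the next
# word. A vastly simplified sketch, not how ChatGPT is actually built.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug").split()

# Count which words follow which in the training text
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by sampling a plausible next word at each step."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"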

ChatGPT is used to write research papers, books, news articles, emails and more.

But while Altman is basking in its glory, Musk is attacking ChatGPT.

He says the AI is ‘woke’ and deviates from OpenAI’s original non-profit mission.

‘OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,’ Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction – but what does it actually mean?

In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.

Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. 

There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.

For example, humans could scan their consciousness and store it in a computer, in which they would live forever.

The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves – but if this is true, it is far off in the distant future.

Researchers are now looking for signs of AI reaching The Singularity, such as the technology’s ability to translate speech with the accuracy of a human and perform tasks faster.

Former Google engineer Ray Kurzweil predicts it will be reached by 2045.

He has made 147 predictions about technology advancements since the early 1990s – and 86 per cent have been correct. 


