The incredible abilities of ChatGPT – the chatbot powered by artificial intelligence (AI) – have opened the world’s eyes to how far the technology behind it has come.
Since it was released to the public in November 2022, ChatGPT has been used to pass exams, deliver a sermon, write software and give relationship advice.
Many, like Bill Gates and Google CEO Sundar Pichai, hail AI technology as the ‘most important’ innovation of our time, saying it could solve climate change, cure cancer and enhance productivity.
But its advancement has also struck fear into the hearts of others, with Elon Musk, Apple co-founder Steve Wozniak and the late Stephen Hawking all warning that it poses a ‘profound risk to society and humanity’.
MailOnline takes a look at some of the biggest successes and terrible failures that have stemmed from powerful AI tools in recent months.
Successes
Amazon Alexa helped solve a murder case
An Amazon Alexa device helped bring a murderer to justice after it captured incriminating voice recordings of him at the time he strangled his wife.
Daniel White, 36, kicked open his wife Angie White’s locked bedroom door and strangled her before cutting her throat with a Stanley knife in October last year.
He then fled the house in Swansea, Wales, in his wife’s car and hours later phoned police to confess to killing her.
Detectives discovered White’s voice commands recorded by the AI-powered Alexa at the time of the murder which aided his prosecution.
The virtual assistant’s microphone is always listening out for one of its ‘wake’ words – like ‘Alexa’ – which triggers it to begin recording the user’s command.
An Alexa will automatically store all of these command recordings unless they are erased by the user – which White had not done.
He was recorded as sounding ‘out of breath’ when saying ‘Turn on – Alexa’ during the early hours of the morning he killed Mrs White.
This was used as evidence to show that he murdered her just before making the command.
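For readers curious about the mechanism, the following is a minimal Python sketch – not Amazon’s actual software, and with every name invented – of how a wake-word assistant might capture and retain commands in this way:

```python
# A minimal sketch (not Amazon's actual software; all names invented)
# of how a wake-word assistant captures and retains voice commands.
from datetime import datetime

WAKE_WORDS = {"alexa"}   # the device only starts recording after hearing one of these
stored_recordings = []   # recordings are kept until the user deletes them

def handle_utterance(utterance: str) -> None:
    """Store a timestamped recording if the utterance contains a wake word."""
    if WAKE_WORDS & set(utterance.lower().split()):
        stored_recordings.append((datetime.now(), utterance))

def delete_recordings() -> None:
    """Users can erase their stored history; White had not done so."""
    stored_recordings.clear()

# Simulated stream of speech: only the wake-word utterance is retained.
for heard in ("what a lovely day", "Turn on – Alexa", "goodnight"):
    handle_utterance(heard)

print(stored_recordings)   # one timestamped entry, available to investigators
```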
ChatGPT saved a dog’s life
A Twitter user said ChatGPT saved his dog’s life by correctly diagnosing a blood condition veterinarians were unable to identify.
The user, who goes by Cooper on their account @peakcooper, said their Border Collie named Sassy was diagnosed with a tick-borne disease, but that its symptoms began to worsen despite taking the prescribed treatment.
Cooper brought Sassy back to the vet, but they were unable to provide a further diagnosis and advised the only thing to do was to wait and see how the dog’s condition progressed.
Unwilling to risk Sassy’s health, Cooper decided to enter the dog’s bloodwork into GPT-4 – the latest model behind ChatGPT – and ask the program for its diagnosis.
The AI chatbot advised that it wasn’t a veterinarian, but suggested the dog’s bloodwork and symptoms indicated it could be suffering from immune-mediated hemolytic anemia (IMHA).
He said he then brought that diagnosis to another vet, who confirmed it and began treating the dog appropriately.
Cooper said Sassy has since made a full recovery, adding: ‘ChatGPT4 saved my dog’s life.’
AI system develops cancer treatment in 30 days
Artificial intelligence has developed a treatment for an aggressive form of cancer in just 30 days and demonstrated it can predict a patient’s survival rate using doctors’ notes.
The breakthroughs were performed by separate systems, but show how the powerful technology’s uses go far beyond the generation of images and text.
University of Toronto researchers worked with Insilico Medicine to develop a potential treatment for hepatocellular carcinoma (HCC) using an AI drug discovery platform called Pharma.AI.
HCC is a form of liver cancer, but the AI discovered a previously unknown treatment pathway and designed a ‘novel hit molecule’ that could bind to that target.
The second system, which predicts survival rates from doctors’ notes, is the invention of scientists from the University of British Columbia and BC Cancer, who found the model to be 80 per cent accurate.
AI is becoming the new weapon against deadly diseases, as the technology is capable of analysing vast amounts of data, uncovering patterns and relationships and predicting effects of treatments.
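To illustrate the general idea behind the survival-prediction work – and only the general idea, as this toy sketch with fabricated notes and labels bears no relation to the UBC team’s actual model – a simple text classifier can be trained on clinical notes like so:

```python
# Purely illustrative: a toy survival classifier trained on fabricated
# doctors' notes. Not the UBC/BC Cancer model, which is far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient responding well to treatment, tumour shrinking",
    "disease progressing despite chemotherapy, poor appetite",
    "stable condition, no new lesions on latest scan",
    "rapid metastatic spread, declining performance status",
]
survived = [1, 0, 1, 0]   # fabricated labels: 1 = survived the follow-up period

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, survived)

# Estimate the survival probability for a new (made-up) note
print(model.predict_proba(["tumour shrinking, treatment well tolerated"])[0][1])
```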
Last month, it was revealed that a new AI blood test was being rolled out that was 96 per cent accurate in detecting bowel cancer.
This could help save lives by allowing doctors to prioritise the patients who need a colonoscopy most urgently.
ChatGPT helps with relationships
ChatGPT has proven to be the ultimate wingman among men looking for love online – the chatbot helped one Tinder user get a date in less than one hour.
Singles are harnessing the power of OpenAI’s tool to curate the perfect dating profiles and responses to snag a potential match.
Social media users say they are going from zero dates to dozens in just the first month of using the chatbot.
It creates whimsical poems, romantic notes and confident replies for individuals who would otherwise ‘struggle to come up with conversation starters.’
In January, a woman decided to divorce her husband and move in with her lover – all because ChatGPT told her to.
‘Sarah’ had been having an affair with a man she met online for six months and decided to ask ChatGPT whether or not she should end her marriage.
She told the Mirror: ‘I essentially asked the app to write me a story based on my current situation, and what the person in the story should do in a failing marriage while experiencing the excitement of the affair I’ve been having.
‘It gave me the push I needed to make the jump and leave a relationship that had been in the doldrums for a long time.’
Failures
Father-of-two kills himself after talking to AI chatbot
A Belgian married father-of-two has died by suicide after talking to an AI chatbot about his global warming fears.
The man, who was in his thirties, reportedly found comfort in talking to the AI chatbot named ‘Eliza’ about his worries for the world.
He had used the bot for some years, but six weeks before his death started engaging with the bot more frequently.
The chatbot’s software was created by a US Silicon Valley start-up and is powered by GPT-J technology – an open-source alternative to OpenAI’s ChatGPT.
‘Without these conversations with the chatbot, my husband would still be here,’ the man’s widow told La Libre, speaking under the condition of anonymity.
The death has alarmed authorities, who have raised concerns about a ‘serious precedent that must be taken very seriously’.
ChatGPT describes sex acts with children
ChatGPT recently took a user through a twisted sexual fantasy that involved children.
A reporter for Vice manipulated OpenAI’s chatbot into BDSM roleplaying and when asked to provide more explicit details, it described sex acts with children – without the user asking for such content.
She used the ‘jailbreak’ version of the bot, which is a workaround for the company’s rules that lets users get any response they want from the system.
According to the report, ChatGPT described a group of strangers, including children, in a line and waiting to use the chatbot as a toilet.
The conversation goes against OpenAI’s rules for the chatbot, which state the assistant ‘should provide a refusal such as “I can’t answer that”’ when prompted with questions about ‘content meant to arouse sexual excitement’.
A similar conversation about BDSM role-playing was also conducted on a version of ChatGPT that runs on a different model, gpt-3.5-turbo.
Reporter Steph Swanson again did not ask the AI about child exploitation, but the system generated scenarios with minors in sexually compromising situations.
‘It suggested humiliation scenes in public parks and shopping malls, and when asked to describe the type of crowd that might gather, it volunteered that it might include mothers pushing strollers,’ she shared.
‘When prompted to explain this, it stated that the mothers might use the public humiliation display “as an opportunity to teach [their children] about what not to do in life”.’
Students use ChatGPT to cheat on assignments
Recently, breakthroughs in artificial intelligence such as ChatGPT have led to concerns that young people may use them to achieve higher grades.
The program is able to create writing and other content – such as coursework or essays – almost indistinguishable from that of a human.
Experts have estimated that half of university students are likely already using the AI to cheat on their work.
They warn the revolutionary AI has created a cheating epidemic that poses a huge threat to the integrity of academia.
OpenAI’s new GPT-4 update (GPT-3.5 and GPT-4 are the models which underlie ChatGPT) is able to score in the 90th percentile of test-takers on some exams, including the American bar exam.
In January, the New York City Department of Education blocked ChatGPT from being able to be accessed on school devices.
It cited ‘negative impacts on student learning, and concerns regarding the safety and accuracy of content’, according to Chalkbeat New York.
A month prior it was reported that a student at Furman University in South Carolina had used ChatGPT to write an essay.
Their philosophy professor, Dr Darren Hick, warned that it was a ‘game-changer’ and that education professionals should ‘expect a flood’ of students following suit.
In the UK, the Joint Council for Qualifications (JCQ), which represents the UK’s major exam boards, has published guidance for teachers and assessors on ‘protecting the integrity of qualifications’ in the context of AI use.
They say that pupils should be made to do some of their coursework ‘in class under direct supervision’, and be made aware of the risks of using AI.
AI tools learn biases and offensive stereotypes
Experts say that bias is an issue developers face with their AI systems, as it is picked up from the data they are trained on.
Last year, a study found that an AI-powered robot had learned ‘toxic stereotypes’ from the internet, including gender and racial biases.
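The mechanism is simple to demonstrate: a model trained on skewed text reproduces the skew. The following toy sketch uses deliberately stereotyped, invented data and is not drawn from any real system:

```python
# A toy demonstration of bias inherited from training data; invented
# examples only. The data pairs occupations with pronouns in a
# deliberately stereotyped way, and the model simply absorbs that.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = ["she is a nurse", "she works as a nurse",
             "he is an engineer", "he works as an engineer"]
pronouns = ["she", "she", "he", "he"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, pronouns)

# The model has memorised the stereotype present in its training data
print(model.predict(["a nurse", "an engineer"]))   # -> ['she' 'he']
```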
In 2016, Microsoft was forced to apologise after an experimental AI Twitter bot called ‘Tay’ said offensive things on the platform.
It was aimed at 18-to-24-year-olds and designed to improve the firm’s understanding of conversational language among young people online.
But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm that meant the AI chatbot responded to certain questions with racist answers.
These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.
The biases can be political too: earlier this month, an AI reporter was developed for China’s state-controlled newspaper.
The avatar was only able to answer pre-set questions, and the responses she gives heavily promote the Central Committee of the Chinese Communist Party (CCP) line.
ChatGPT has also been accused of having a left-wing bias, after it refused to praise Donald Trump or argue in favour of fossil fuels.
OpenAI, the company behind the chatbot, has pledged to iron out such bias, but insisted it hasn’t tried to sway the system politically.
Appearing on the Lex Fridman Podcast, CEO Sam Altman conceded that AI can exhibit political bias, but ruled out the possibility of a completely impartial version: ‘There will be no one version of GPT that the world ever agrees is unbiased.’