
AI chatbots are here and they’re passing medical exams – should we embrace them?


ChatGPT is just one of a raft of new AI chatbots out there (Picture: Getty)

Ever wondered what it’s like to chat with a robot? It’s pretty cool.

Ask ChatGPT to write a Shakespearean sonnet, tell a funny story or write an essay on quantum theory and it will do it in seconds.

The chatbot’s applications appear to be endless: it’s written sermons for a rabbi in New York, penned a song in the style of Nick Cave (who called it ‘a grotesque mockery’), and when MP Julian Hill warned the Australian parliament about the growth of artificial intelligence (AI), his speech was partly written by the AI itself.

It’s a clever bot to boot. Researchers in the US have tested ChatGPT’s academic credentials: it performed at or around the passing threshold of the three-step licensing exam that all US medical school graduates must complete, and achieved a B- on a postgraduate business master’s exam. Doctor Chatbot will see you now!

Developed by research laboratory OpenAI, ChatGPT stands for ‘Chat Generative Pre-trained Transformer’: an interactive AI chatbot capable of delivering conversational answers to all kinds of questions and requests, with in-depth responses that can rival those of a well-informed human.

Since its arrival last year, ChatGPT has hit headlines as people punch in questions and get remarkable replies. But is the hype real – or is the internet just easily impressed?

Nick Cave called a song penned by ChatGPT, supposedly in his style, ‘a grotesque mockery’ (Picture: Alfonso Catalano/Shutterstock)

Beneath the bonnet

If you’ve used ChatGPT to answer a question, you’ve probably wondered how it works. ChatGPT uses a class of machine learning model known as the large language model (LLM). LLMs digest masses of text data and learn the relationships between the words found within that text; the more text and feedback a model is trained on, the more capable it becomes.

Imagine reading an entire library’s worth of books and then being asked a question. Your answer would be an aggregation of your knowledge taken from the books. ChatGPT does a similar thing, only with a mammoth dataset of text scraped from the internet, not a library.


But how does it manage to sound like it was written by a real person? In simple terms, ChatGPT works by learning huge swathes of text, spotting patterns, and then using what it has learned to predict a plausible sequence of words that meets the input brief, ie the question or prompt it has been given.
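
To make ‘spotting patterns and predicting what comes next’ a little more concrete, here is a deliberately tiny sketch in Python: a toy ‘bigram’ model trained on a few made-up sentences. It is nothing like ChatGPT’s actual system, and every sentence and name in it is invented purely for illustration, but it rests on the same basic idea of learning which words tend to follow which and using those patterns to continue a prompt.

import random
from collections import defaultdict, Counter

# Toy illustration only: learn which word tends to follow which in some
# training text, then reuse those patterns to continue a prompt one word
# at a time. Real LLMs do this at vastly greater scale and sophistication.

training_text = (
    "the chatbot answers questions in seconds "
    "the chatbot writes essays in seconds "
    "the chatbot answers questions about quantum theory"
)

# Count how often each word follows each other word (a 'bigram' model).
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Pick a likely next word based on the patterns seen in training."""
    candidates = follow_counts.get(word)
    if not candidates:
        return None
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Continue a prompt, one predicted word at a time.
output = ["the"]
for _ in range(6):
    nxt = predict_next(output[-1])
    if nxt is None:
        break
    output.append(nxt)
print(" ".join(output))  # e.g. 'the chatbot answers questions in seconds'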

ChatGPT works a bit like reading all the books in a library and then offering answers based on aggregated knowledge gleaned from those books – only it scrapes the internet instead (Picture: Getty)

While it sounds convincing, what it says isn’t always logical – or factual. Users have pointed out all sorts of glaring errors, from historical inaccuracies to impossible-to-follow instructions, all delivered with seemingly total confidence. Indeed, the prompt page warns users that ChatGPT ‘may occasionally generate incorrect information’ and ‘may occasionally produce harmful instructions or biased content’ (some have reported instances of sexism and racism in the bot’s answers).

Disinformation researchers have raised scarier concerns: chatbots could be used to churn out fake news and conspiracy theories more efficiently than ever. OpenAI has been quick to point out ChatGPT’s many limitations, including its lack of knowledge of events after 2021 and its vulnerability to bias stemming from the nature of its training data. But it seems to be a bot willing to learn, with the ability to admit its mistakes and even challenge misleading or problematic prompts.



Imitating Gareth May

ChatGPT does an impression of our writer Gareth May (Picture: Jonathan Raa/NurPhoto/Shutterstock)

We asked ChatGPT to ‘write a short piece of writing in the style of the British journalist Gareth May.’ Here’s what the chatbot delivered:

‘As a language model, I can generate text in the style of Gareth May, a British journalist known for his witty and irreverent writing style. Here’s a short piece of writing in his style: Well, well, well – it seems the latest trend in foodie circles is something called a ‘cloud egg’.

Yes, you heard that right, folks. Apparently, the humble egg just isn’t good enough any more, and now we need to add a touch of Instagram-worthy whimsy to our breakfast plates.

Now, I’ll be the first to admit that these cloud eggs do look rather impressive – all fluffy and ethereal, like something you’d see in a Pixar movie. Sure, they look pretty, but the texture is all wrong – like eating a mouthful of foam. Stick to good old-fashioned scrambled or fried eggs, and leave the culinary theatrics to the pros.’

Will the real Gareth May please step forward (Picture: Metro)


Playing the Bard

In January, Microsoft announced a $10billion investment in OpenAI, on top of its initial $1bn investment in 2019. The next ChatGPT iteration, GPT-4, is due out in the summer. And Microsoft has already integrated ChatGPT into its new and improved Bing search engine, currently testing it in 169 countries.

But it’s not a clear path to chatbot dominance. Google Bard is the main challenger to ChatGPT. Only available to beta testers at present, it’s essentially an augmented version of Google’s own search tools with the advantage of being able to provide up-to-date information directly from the web, allowing it to give you what Google claims are ‘fresh, high-quality responses’.

Google’s own chatbot, Bard, was launched last month (Picture: AP/Jeff Chiu)

Unfortunately, the launch on February 6 was far from high quality. During its showcase demonstration, Bard incorrectly answered a question about recent discoveries made by the James Webb space telescope. Shares of Alphabet, Google’s parent company, tumbled in response, losing $100billion in market value.

But Google has more than one horse in the race. DeepMind – a subsidiary of Alphabet – is set to launch its own chatbot, called Sparrow, into private beta this year. ChatGPT went viral for its impressive ability to write love letters and well-argued essays but was criticised for its capacity to craft malware and use discriminatory language. DeepMind hopes to avoid these pitfalls by operating within a strict rule-set, providing evidence for scientific answers and swerving ‘inappropriate’ requests.


Other alternatives include YouChat, which can generate visuals or graphics and respond to voice commands; the Socratic app, designed to break down complex concepts for school kids; and finally AI business tools like Rytr, Copy.ai and Jasper, which can knock out blog posts or newsletter content in a matter of minutes. In short, chatbots are here to stay.


The future

So what are the implications of chatbots? Former computer science lecturer Graham Glass, now the CEO and founder of Cypher Learning, believes the technology behind AI chatbots such as ChatGPT is a game-changer.

But he warns academic institutions will have to learn to detect AI-produced fakes as pupils increasingly use the program to write essays and pass certification exams. ‘Similar challenges will crop up in business,’ he says, ‘where generative AI applications will be recruited for everyday tasks.’

Chatbots are a boon for the average employer, as they’re never going to skive off (Picture: Getty)

As a workforce, chatbots are a boon: you won’t find one skiving off to neck a piña colada on the beach, for example. But Glass argues chatbots won’t make humans obsolete.

‘Many fear generative AI could reshape the job market by displacing large numbers of workers. Displacement will happen, but not to the dystopian, traumatic extent some think,’ Glass concludes.








