
AI-augmented care dubbed ‘future of medicine’ as ChatGPT answers patient questions better than GPs


ChatGPT has proved itself able to answer patient questions better than real physicians (Image: Getty Images)

Experts have dubbed artificial intelligence (AI)-augmented healthcare the “future of medicine”, as a chatbot proved itself able to answer patient questions better than real physicians. In fact, a panel of licensed healthcare professionals said that they preferred ChatGPT’s responses 79 percent of the time, calling them “higher quality” and “more empathetic”. While AI is in no danger of replacing your GP any time soon, chatbots could provide physicians with a valuable tool to help cope with recent increases in digital patient communication, researchers said.

Paper author and public health expert Professor Eric Leas of the University of California San Diego (UCSD) said: “The COVID-19 pandemic accelerated virtual healthcare adoption.

“While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice.”

This, he added, has “contributed to record-breaking levels of physician burnout.”

In their study, Prof. Leas and his colleagues set out to see whether OpenAI’s popular artificial intelligence chatbot, ChatGPT, could be helpfully and safely applied to alleviate some of this growing communications burden on doctors.

To evaluate ChatGPT, the team sourced questions and answers from Reddit’s ‘r/AskDocs’ (Image: Reddit)

This is not the first time that ChatGPT’s potential applications in clinical medicine have been explored. In fact, a study published in the journal PLOS Digital Health earlier this year found the AI capable of passing the three parts of the US Medical Licensing Exam.


As UCSD virologist Dr Davey Smith — one of the co-authors on the new study — puts it, “ChatGPT might be able to pass a medical licensing exam, but directly answering patient questions accurately and empathetically is a different ballgame.”

To evaluate its performance at this task, the team turned to Reddit’s “r/AskDocs”, a subreddit with approximately 452,000 members where people post real-life medical questions and physicians respond.

While anyone can submit a question, moderators on the subreddit work to verify each doctor’s credentials — which are included with their responses.


The panel rated ChatGPT’s responses as ‘good or very good’ 3.6 times more often than the physicians (Image: Ayers et al. / JAMA Internal Medicine)

ChatGPT was rated as ‘empathetic or very empathetic’ 9.8 times more often than the physicians (Image: Ayers et al. / JAMA Internal Medicine)

The researchers said that these question–answer exchanges were reflective of their own clinical experiences, and therefore provided a “fair test” for ChatGPT.

Accordingly, the team randomly sampled 195 question–answer exchanges between the public and verified doctors, and gave each question to ChatGPT to compile its own response.

Each question and the two responses were then submitted to a panel of three licensed healthcare professionals, who — without knowing which response came from the chatbot and which came from a real doctor — evaluated them for information quality and empathy.

The panel also reported which of the two responses they preferred.
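To picture how such a blinded comparison can be set up, here is a minimal sketch in Python; the function and field names are hypothetical illustrations, not taken from the study’s actual methods.

```python
import random

def blind_pair(physician_reply: str, chatbot_reply: str) -> dict:
    """Present two responses in a random order so evaluators cannot
    tell which came from the chatbot and which from a real doctor."""
    labelled = [("physician", physician_reply), ("chatbot", chatbot_reply)]
    random.shuffle(labelled)
    return {
        "response_a": labelled[0][1],
        "response_b": labelled[1][1],
        # The answer key is stored separately and only consulted
        # after the panel has scored quality and empathy.
        "key": {"a": labelled[0][0], "b": labelled[1][0]},
    }
```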


Overall, the team found that the panel of healthcare professionals preferred the responses penned by ChatGPT 79 percent of the time.

Paper co-author and nurse practitioner Jessica Kelley of Human Longevity, a San Diego-based medical centre, said: “ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses.”

Specifically, the panel rated the chatbot’s responses as “good or very good” in quality 3.6 times more frequently than the physicians’ (78.5 percent versus 22.1 percent).

ChatGPT was also rated as being “empathetic or very empathetic” 45.1 percent of the time, compared to just 4.6 percent of the time for real doctors.
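As a quick arithmetic check, the multipliers quoted above follow directly from the reported percentages:

```python
# Quality: share of responses rated 'good or very good'
print(round(78.5 / 22.1, 1))  # 3.6 (ChatGPT vs physicians)

# Empathy: share rated 'empathetic or very empathetic'
print(round(45.1 / 4.6, 1))   # 9.8
```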

Study co-author and UCSD haematologist Dr Aaron Goodman commented: “I never imagined saying this, but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”

ChatGPT is built on a neural network architecture (Image: Express.co.uk)

Paper co-author and paediatrician Dr Christopher Longhurst, also of UCSD, said: “Our study is among the first to show how AI assistants can potentially solve real-world healthcare delivery problems.

“These results suggest that tools like ChatGPT can efficiently draft high-quality, personalised medical advice for review by clinicians.”

Indeed, the researchers are keen to stress that the chatbot would be a tool for doctors to use — and not a replacement for them.

Study co-author and computer scientist Professor Adam Poliak said: “While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether.


“Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”
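As a rough illustration of that draft-for-review workflow, the sketch below uses the OpenAI Python client; the model name, system prompt, and helper function are assumptions for illustration rather than details taken from the study.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_reply(patient_message: str) -> str:
    """Generate a draft answer for a clinician to review and edit."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: a ChatGPT-class model
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft an empathetic, accurate reply to a patient's "
                    "question. A licensed clinician will review and edit "
                    "this draft before anything is sent."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

# The draft is never sent directly; a physician reviews it first.
print(draft_reply("I hit my head yesterday and now have a mild headache. Should I be worried?"))
```

In the workflow the researchers describe, the clinician, not the model, remains the final author of every message.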

Lead author and UCSD epidemiologist Dr John Ayers concluded: “The opportunities for improving healthcare with AI are massive. AI-augmented care is the future of medicine.”

The full findings of the study were published in the journal JAMA Internal Medicine.