A chatbot appears to be “significantly more empathetic” than doctors when answering patients’ questions, according to scientists.
The researchers asked a team of licensed healthcare professionals to rate responses from doctors and ChatGPT, a computer program designed to simulate online conversations with humans.
They found that the proportion of responses rated “empathetic” or “very empathetic” was higher for ChatGPT than for the physicians.
ChatGPT also appears to score higher than doctors on the quality of its responses to patients.
Writing in the journal JAMA Internal Medicine, the researchers said more studies are needed to evaluate whether chatbots such as ChatGPT could be used in clinical settings to help reduce burnout among doctors and other healthcare professionals.
They said: “In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum.
“Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that doctors could then edit.
“Randomized trials could further evaluate whether using AI assistants might improve responses, reduce physician burnout, and improve patient outcomes.”
For the study, researchers looked at questions patients asked on the social media forum Reddit, which were answered by a verified doctor.
The same questions were then put to ChatGPT.
The physicians’ and ChatGPT’s responses were anonymized, presented in random order, and evaluated by healthcare professionals.
Across the 195 questions and answers, the raters overall preferred the chatbot’s answers to those of the doctors.
ChatGPT’s responses were also rated significantly more empathetic than the doctors’, the researchers said, and the proportion of responses rated “good” or “very good” in quality was higher for the chatbot than for the physicians.
Commenting on the study, Mirella Lapata, professor of natural language processing at the University of Edinburgh, said: “The study evaluates ChatGPT’s ability to answer patients’ questions and compares its responses with written answers from doctors.”
She added: “Without controlling for response length, we can’t know for sure whether the raters were judging style (e.g. long-winded, flowery speech) rather than content.”
Dr Mhairi Aitken, research ethicist at the Alan Turing Institute, said it was important to consider the perspectives of patients, and not just professionals, when deploying chatbots in healthcare.
She added: “It is important to note that while some people may feel comfortable receiving medical advice from a chatbot, or having a chatbot assist with a doctor’s advice, for many patients human connection and care are a vital part of the healthcare process and something that cannot be automated or replaced by chatbots like ChatGPT.
“A human doctor is able to adapt their speech, mannerisms and approach in response to social cues and interactions, whereas a chatbot will produce more general language without awareness of social contexts.”