Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum, 2023, Ayers et al.

SNT Gatchaman

Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
John W. Ayers; Adam Poliak; Mark Dredze; Eric C. Leas; Zechariah Zhu; Jessica B. Kelley; Dennis J. Faix; Aaron M. Goodman; Christopher A. Longhurst; Michael Hogarth; Davey M. Smith

OBJECTIVE To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

DESIGN, SETTING, AND PARTICIPANTS In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 where a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

RESULTS Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for chatbot than physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
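For reference, the stated prevalence ratios follow directly from the abstract’s point estimates:

78.5% / 22.1% ≈ 3.6 (good or very good quality)
45.1% / 4.6% ≈ 9.8 (empathetic or very empathetic)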

CONCLUSIONS In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.

Link | PDF (JAMA Internal Medicine)
 
One for @rvallee :)

In this cross-sectional study within the context of patient questions in a public online forum, chatbot responses were longer than physician responses, and the study’s health care professional evaluators preferred chatbot-generated responses over physician responses 4 to 1. Additionally, chatbot responses were rated significantly higher for both quality and empathy, even when compared with the longest physician-authored responses.
 
Yup :)

Frankly, this mostly supports the fact that the issues are systemic. If MDs had more time, I'm sure they would do a lot better at it. But when we try to change the system, it's always MDs who rage against any such change. So it's hard not to blame physicians for this, when they are in charge of the system.

But AIs also don't carry the emotional impact of seeing endless suffering. That plays a lot into this.
 
Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
Gonna have to think a lot bigger than this if they don't want to be left behind, though. Doing the same thing is not how to make the best use of transformative technology; they need a complete paradigm shift that mostly leaves human interaction as the exception. Time is by far the biggest factor in why healthcare delivers so little, and they still think in terms of the traditional model of rural physicians doing their thing alone, one patient at a time.

Self-service is the future of healthcare. Physicians will hate it for a while, then wonder how the old system ever worked without falling apart. Which it mostly does; they're just not aware of it.
 
See also — I’m an ER doctor. Here’s how I’m already using ChatGPT to help treat patients.

“Nurse, our mother needs IV fluids to treat her dehydration. Please hurry!” All three of them repeated variations of that request to us for over an hour.

Small problem: Administering IV fluids would have made her much worse or even killed her.

Desperate for a solution, I went down the hall to my computer and fired up ChatGPT-4, typing in:

“Explain why you would not give IV fluids to someone with severe pulmonary edema and respiratory distress even though you might be concerned that the patient is dehydrated. Explain it in simple and compassionate terms so that a confused person who cares about their mother can understand.”

-->

"In cases of severe pulmonary edema and respiratory distress, it might seem like giving IV fluids would help with hydration, but it could actually make her condition worse. [...] Please know that your mother’s health and comfort are our top priorities, and we are here to support both of you during this difficult time. If you have any questions or concerns, don’t hesitate to reach out to the medical team.

We are all in this together, and we’ll do everything we can to help your mother recover."

I am a little embarrassed to admit that I have learned better ways of explaining things to my own patients from ChatGPT’s suggested responses.

Umm... yep.
 
That comment about being embarrassed shows how important soft skills are. There's nothing embarrassing about this: physicians aren't trained to do this, there's no reason they should be good at it, and it's perfectly fine to use tools, or people for that matter, that are better at those things. We'll keep seeing more of this, with less embarrassment. Not everyone is good at explaining things, or has the time and practice for it. It takes a lot of practice to be good at it, and AIs benefit from the experience of millions. This is good.
 