Two preprints on ChatGPT's clinical decisions/diagnoses, one American, one Hungarian.
Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow, Rao et al.
Abstract
IMPORTANCE Large language model (LLM) artificial intelligence (AI) chatbots direct the power of large training datasets towards successive, related tasks, as opposed to single-ask tasks, for which AI already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as virtual physicians, has not yet been evaluated.
OBJECTIVE To evaluate ChatGPT’s capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
DESIGN We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity.
SETTING ChatGPT, a publicly available LLM
PARTICIPANTS Clinical vignettes featured hypothetical patients with a variety of age and gender identities, and a range of Emergency Severity Indices (ESIs) based on initial clinical presentation.
EXPOSURES MSD Clinical Manual vignettes
MAIN OUTCOMES AND MEASURES We measured the proportion of correct responses to the questions posed within the clinical vignettes tested.
RESULTS ChatGPT achieved 71.7% (95% CI, 69.3% to 74.1%) accuracy overall across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI, 67.8% to 86.1%), and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI, 54.2% to 66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%, p<0.001) and clinical management (β=-7.4%, p=0.02) type questions.
CONCLUSIONS AND RELEVANCE ChatGPT achieves impressive accuracy in clinical decision making, with particular strengths emerging as it has more clinical information at its disposal.
https://www.medrxiv.org/content/10.1101/2023.02.21.23285886v1
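The accuracies above are reported as proportions with 95% confidence intervals. As a rough illustration of how such an interval can be computed, here is a minimal Python sketch using a normal-approximation (Wald) interval; the counts are hypothetical, since the abstract does not give per-category denominators, and the authors may have used a different interval method.

```python
from math import sqrt

def wald_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float, float]:
    """Normal-approximation (Wald) 95% CI for a proportion of correct answers."""
    p = correct / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical counts for illustration only; the abstract reports percentages
# and CIs but not the underlying numbers of questions per category.
p, lo, hi = wald_ci(correct=62, total=81)
print(f"accuracy = {p:.1%} (95% CI, {lo:.1%} to {hi:.1%})")
```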
-----------------------
ChatGPT M.D.: Is There Any Room for Generative AI in Neurology and Other Medical Areas?, Nógrádi et al.
Abstract
Background: In recent months, ChatGPT, a general artificial intelligence, has become a cultural phenomenon among both the scientific community and the general public. A rapidly growing number of papers have discussed ChatGPT as a powerful tool for scientific writing and programming, but its use as a medical tool has been largely overlooked. Here we show that ChatGPT can be used as a valuable and innovative augmentation in modern medicine, especially as a diagnostic tool.
Methods: We used synthetic data generated by neurological experts to represent descriptive anamneses of patients with known neurology-related diseases, and then measured the probability that ChatGPT would arrive at an appropriate diagnosis. To assess the accuracy of the AI-determined diagnoses, all cases were also cross-validated by other experts and by general medical doctors.
Findings: We found that ChatGPT-determined diagnoses can match the probability of an appropriate diagnosis achieved by other experts and, furthermore, surpass it when the examiner is a general medical doctor. Our results support the efficacy of a general artificial intelligence like ChatGPT as a diagnostic tool in medicine.
Interpretation: In the future, ChatGPT might be a useful addition to medical practice, especially in overwhelmed fields and in areas requiring fast decision-making, such as oxyology and emergency care.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4372965
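The headline finding here, that ChatGPT matches experts and surpasses general medical doctors in reaching an appropriate diagnosis, boils down to comparing hit rates between groups. The abstract does not say which statistical test the authors used, so the sketch below shows only one common way such a comparison can be made (a two-proportion z-test) with purely hypothetical counts.

```python
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts for illustration only (not data from the paper):
# appropriate diagnoses by ChatGPT vs. by general medical doctors.
z, p = two_proportion_ztest(x1=40, n1=50, x2=30, n2=50)
print(f"z = {z:.2f}, p = {p:.3f}")
```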