J Pain Symptom Manage · May 2024
Chatbot Performance in Defining and Differentiating Palliative Care, Supportive Care, Hospice Care.
- Min Ji Kim, Sonal Admane, Yuchieh Kathryn Chang, Kao-Swi Karina Shih, Akhila Reddy, Michael Tang, Maxine De La Cruz, Terry Pham Taylor, Eduardo Bruera, and David Hui.
- Department of Palliative Care, Rehabilitation, and Integrative Medicine (M.J.K., S.A., Y.K.C., A.R., M.T., E.B., D.H.), University of Texas MD Anderson Cancer Center, Houston, Texas, USA. Electronic address: mkim4@mdanderson.org.
- Beth Israel Deaconess Medical Center, Harvard Medical School (M.C.), Boston, Massachusetts, USA.
- J Pain Symptom Manage. 2024 May 1;67(5):e381-e391.
Context: Artificial intelligence (AI) chatbot platforms are increasingly used by patients as sources of information. However, there are limited data on the performance of these platforms, especially regarding palliative care terms.

Objectives: We evaluated the accuracy, comprehensiveness, reliability, and readability of three AI platforms in defining and differentiating "palliative care," "supportive care," and "hospice care."

Methods: We asked ChatGPT, Microsoft Bing Chat, and Google Bard to define and differentiate "palliative care," "supportive care," and "hospice care" and provide three references. Outputs were randomized and assessed by six blinded palliative care physicians using 0-10 scales (10 = best) for accuracy, comprehensiveness, and reliability. Readability was assessed using Flesch-Kincaid Grade Level and Flesch Reading Ease scores.

Results: The mean (SD) accuracy scores for ChatGPT, Bard, and Bing Chat were 9.1 (1.3), 8.7 (1.5), and 8.2 (1.7), respectively; for comprehensiveness, the scores were 8.7 (1.5), 8.1 (1.9), and 5.6 (2.0), respectively; for reliability, the scores were 6.3 (2.5), 3.2 (3.1), and 7.1 (2.4), respectively. Despite generally high accuracy, we identified some major errors (e.g., Bard stated that supportive care had "the goal of prolonging life or even achieving a cure"). We found several major omissions, particularly with Bing Chat (e.g., no mention of interdisciplinary teams in palliative care or hospice care). References were often unreliable. Readability scores did not meet recommended levels for patient educational materials.

Conclusion: We identified important concerns regarding the accuracy, comprehensiveness, reliability, and readability of outputs from AI platforms. Further research is needed to improve their performance.

Copyright © 2024 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
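The abstract does not reproduce the readability metrics it cites, but both are standard formulas computed from word, sentence, and syllable counts. Below is a minimal Python sketch of those published formulas; the counts in the usage example are illustrative only and are not taken from the study, and any real analysis would need a proper syllable counter and tokenizer.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text (roughly 60-70 = plain English)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Illustrative example (not study data): a 120-word passage, 6 sentences, 180 syllables.
print(round(flesch_reading_ease(120, 6, 180), 1))   # 59.6 -> borderline "plain English"
print(round(flesch_kincaid_grade(120, 6, 180), 1))  # 9.9  -> roughly 10th-grade reading level
```

Patient education materials are commonly recommended to target about a sixth-grade reading level, which gives a sense of why the chatbot outputs evaluated here fell short on readability.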