- Lingxuan Zhu, Weiming Mou, Tao Yang, and Rui Chen.
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, 160 Pujian Road, Shanghai 200127, China; The First Clinical Medical School, Southern Medical University, 1023 Shatai South Road, Guangzhou, 510515 Guangdong, China.
- Resuscitation. 2023 Jul 1; 188: 109783.
Abstract

The study by Fijačko et al. tested ChatGPT's ability to pass the AHA's BLS and ACLS exams and found that ChatGPT failed both. A limitation of their study was that ChatGPT generated only one response per question, which may have introduced bias. When generating three responses per question, ChatGPT can pass the BLS exam with an overall accuracy of 84%. When incorrectly answered questions were rewritten as open-ended questions, ChatGPT's accuracy increased to 96% and 92.1% on the BLS and ACLS exams, respectively, allowing it to pass both exams with outstanding results.

Copyright © 2023 Elsevier B.V. All rights reserved.