- Lingxuan Zhu, Weiming Mou, Tao Yang, and Rui Chen.
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, 160 Pujian Road, Shanghai 200127, China; The First Clinical Medical School, Southern Medical University, 1023 Shatai...
- Resuscitation. 2023 Jul 1; 188: 109783.
Abstract
The study by Fijačko et al. tested ChatGPT's ability to pass the AHA's BLS and ACLS exams and found that ChatGPT failed both. A limitation of their study was that ChatGPT generated only one response per question, which may have introduced bias. When three responses were generated per question, ChatGPT passed the BLS exam with an overall accuracy of 84%. When incorrectly answered questions were rewritten as open-ended questions, ChatGPT's accuracy increased to 96% on the BLS exam and 92.1% on the ACLS exam, allowing it to pass both exams with outstanding results.
Copyright © 2023 Elsevier B.V. All rights reserved.
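The abstract describes a simple evaluation protocol: query ChatGPT several times per multiple-choice question instead of once. The study's exact prompts, model access, and scoring rule are not given here, so the sketch below is only an illustration, assuming the OpenAI chat API, a majority vote over three completions, and a made-up question format.

```python
# Hypothetical sketch (not the authors' code): ask the model the same multiple-choice
# question n times and score by majority vote. The model name, prompt wording, and
# majority-vote rule are assumptions for illustration only.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_question(stem: str, options: dict[str, str], n: int = 3) -> str:
    """Return the majority answer letter across n independent completions."""
    prompt = (
        stem
        + "\n"
        + "\n".join(f"{letter}. {text}" for letter, text in options.items())
        + "\nAnswer with the single letter of the best option."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the study used ChatGPT directly
        messages=[{"role": "user", "content": prompt}],
        n=n,  # request n independent completions for the same question
    )
    letters = [choice.message.content.strip()[:1].upper() for choice in response.choices]
    return Counter(letters).most_common(1)[0][0]


# Example usage with an invented question (not from the BLS/ACLS exams):
# best = answer_question(
#     "What is the recommended chest compression depth for adult CPR?",
#     {"A": "At least 5 cm", "B": "2 cm", "C": "10 cm", "D": "1 cm"},
# )
```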
Notes