Comparative Study
Comparison of emergency medicine specialist, cardiologist, and chat-GPT in electrocardiography assessment.
- Serkan Günay, Ahmet Öztürk, Hakan Özerol, Yavuz Yiğit, and Ali Kemal Erenler.
- Department of Emergency Medicine, Hitit University Erol Olçok Education and Research Hospital, Çorum, Turkey. Electronic address: drsrkngny@gmail.com.
- Am J Emerg Med. 2024 Jun 1; 80: 51-60.
Introduction
ChatGPT, developed by OpenAI, represents the cutting edge of its field with its latest model, GPT-4. Extensive research using ChatGPT is underway in various domains, including cardiovascular disease. Nevertheless, no studies have addressed the proficiency of GPT-4 in diagnosing conditions from electrocardiography (ECG) data. The goal of this study is to evaluate the diagnostic accuracy of GPT-4 when provided with ECG data and to compare its performance with that of emergency medicine specialists and cardiologists.
Methods
This study was approved by the Clinical Research Ethics Committee of Hitit University Medical Faculty on August 21, 2023 (decision no: 2023-91). Drawing on cases from the book "150 ECG Cases", 40 ECG cases were crafted into multiple-choice questions (20 everyday and 20 more challenging ECG questions). The participant pool comprised 12 emergency medicine specialists and 12 cardiology specialists. GPT-4 was administered the questions in 12 separate sessions. The responses of the three groups (cardiologists, emergency medicine physicians, and GPT-4) were evaluated separately.
Results
On the everyday ECG questions, GPT-4 outperformed both the emergency medicine specialists and the cardiology specialists (p < 0.001 and p = 0.001, respectively). On the more challenging ECG questions, GPT-4 outperformed the emergency medicine specialists (p < 0.001), but no statistically significant difference was found between GPT-4 and the cardiology specialists (p = 0.190). Across all ECG questions, GPT-4 was more successful than both the emergency medicine specialists and the cardiologists (p < 0.001 and p = 0.001, respectively).
Conclusion
Our study shows that GPT-4 is more successful than emergency medicine specialists on both everyday and more challenging ECG questions. It performed better than the cardiologists on everyday questions, but its performance converged with theirs as question difficulty increased.
Copyright © 2024 Elsevier Inc. All rights reserved.