- Michel Abdel Malek, Monique van Velzen, Albert Dahan, Chris Martini, Elske Sitsen, Elise Sarton, and Martijn Boon.
- Department of Anaesthesiology, Leiden University Medical Centre, Leiden, The Netherlands. Electronic address: m.abdel_malek@lumc.nl.
- Br J Anaesth. 2024 Nov 14.
Background: Recent advances in artificial intelligence (AI) have enabled the development of natural language algorithms capable of generating coherent text. We evaluated the quality, validity, and safety of this generative AI in preoperative anaesthetic planning.

Methods: In this exploratory, single-centre, convergent mixed-method study, 10 clinical vignettes were randomly selected, and ChatGPT (OpenAI, 4.0) was prompted to create anaesthetic plans, including cardiopulmonary risk assessment, intraoperative anaesthesia technique, and postoperative management. A quantitative assessment compared these plans with those made by eight senior anaesthesia consultants. A qualitative assessment was performed by an adjudication committee through focus group discussion and thematic analysis. Agreement on cardiopulmonary risk assessment was calculated using weighted kappa, with descriptive data representation for other outcomes.

Results: ChatGPT anaesthetic plans showed variable agreement with consultants' plans. ChatGPT, the survey panel, and the adjudication committee frequently disagreed on cardiopulmonary risk estimation. ChatGPT's answers were repetitive and lacked variety, evidenced by a strong preference for general anaesthesia and the absence of locoregional techniques. It also made inconsistent choices regarding airway management, postoperative analgesia, and medication use. While some differences were not deemed clinically significant, subpar postoperative pain management advice and failure to recommend tracheal intubation for patients at high risk of pulmonary aspiration were considered inappropriate.

Conclusions: Preoperative anaesthetic plans generated by ChatGPT did not consistently meet minimum clinical standards and were unlikely to be the result of clinical reasoning. ChatGPT is therefore not currently recommended for preoperative planning. Future large language models trained on anaesthesia-specific datasets might improve performance but should undergo rigorous evaluation before use in clinical practice.

Copyright © 2024 The Authors. Published by Elsevier Ltd. All rights reserved.
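To illustrate the agreement analysis described in the Methods, below is a minimal, hypothetical Python sketch of computing weighted Cohen's kappa between ChatGPT's and a consultant's ordinal cardiopulmonary risk ratings. The risk categories, the example ratings, and the choice of linear weighting are illustrative assumptions and are not taken from the study.

```python
# Minimal sketch (not the study's actual analysis): weighted Cohen's kappa
# for agreement between ChatGPT and a consultant on ordinal risk categories.
# Categories, example ratings, and linear weighting are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

# Ordinal cardiopulmonary risk categories encoded as integers (hypothetical scale)
RISK = {"low": 0, "intermediate": 1, "high": 2}

# Hypothetical ratings for 10 clinical vignettes
chatgpt_ratings = ["low", "low", "intermediate", "high", "low",
                   "intermediate", "low", "high", "intermediate", "low"]
consultant_ratings = ["low", "intermediate", "intermediate", "high", "intermediate",
                      "low", "low", "high", "high", "low"]

y_gpt = [RISK[r] for r in chatgpt_ratings]
y_con = [RISK[r] for r in consultant_ratings]

# Linear weighting penalises disagreements in proportion to their ordinal distance
kappa = cohen_kappa_score(y_gpt, y_con, weights="linear")
print(f"Weighted kappa (ChatGPT vs consultant): {kappa:.2f}")
```

In the study itself, agreement was assessed across ChatGPT, the survey panel of eight consultants, and the adjudication committee; the sketch above shows only the pairwise kappa calculation for one rater pair.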