- Phung Tran Huy Nhat, Nguyen Van Hao, Phan Vinh Tho, Hamideh Kerdegari, Luigi Pisani, Le Ngoc Minh Thu, Le Thanh Phuong, Ha Thi Hai Duong, Duong Bich Thuy, Angela McBride, Miguel Xochicale, Marcus J Schultz, Reza Razavi, Andrew P King, Louise Thwaites, Nguyen Van Vinh Chau, Sophie Yacoub, VITAL Consortium, and Alberto Gomez.
- Oxford University Clinical Research Unit, Ho Chi Minh City, Vietnam. nhat.phung@kcl.ac.uk.
- Crit Care. 2023 Jul 1; 27(1): 257.
Background: Interpreting point-of-care lung ultrasound (LUS) images from intensive care unit (ICU) patients can be challenging, especially in low- and middle-income countries (LMICs), where limited training is available. Despite recent advances in the use of artificial intelligence (AI) to automate many ultrasound imaging analysis tasks, no AI-enabled LUS solutions have been proven clinically useful in ICUs, and specifically in LMICs. We therefore developed an AI solution that assists LUS practitioners and assessed its usefulness in a low-resource ICU.

Methods: This was a three-phase prospective study. In the first phase, the performance of four different clinical user groups in interpreting LUS clips was assessed. In the second phase, the performance of 57 non-expert clinicians, with and without the aid of a bespoke AI tool for LUS interpretation, was assessed on retrospectively acquired offline clips. In the third phase, we conducted a prospective study in the ICU in which 14 clinicians carried out LUS examinations on 7 patients with and without our AI tool, and we interviewed the clinicians about the tool's usability.

Results: The average accuracy of beginners' LUS interpretation was 68.7% [95% CI 66.8-70.7%], compared to 72.2% [95% CI 70.0-75.6%] for intermediate and 73.4% [95% CI 62.2-87.8%] for advanced users. Experts had an average accuracy of 95.0% [95% CI 88.2-100.0%], which was significantly better than beginner, intermediate and advanced users (p < 0.001). When supported by our AI tool for interpreting retrospectively acquired clips, the non-expert clinicians improved their performance from an average of 68.9% [95% CI 65.6-73.9%] to 82.9% [95% CI 79.1-86.7%] (p < 0.001). In prospective real-time testing, non-expert clinicians improved their baseline performance from 68.1% [95% CI 57.9-78.2%] to 93.4% [95% CI 89.0-97.8%] (p < 0.001) when using our AI tool. The median time to interpret clips improved from 12.1 s (IQR 8.5-20.6) to 5.0 s (IQR 3.5-8.8) (p < 0.001), and clinicians' median confidence level improved from 3 out of 4 to 4 out of 4 when using our AI tool.

Conclusions: AI-assisted LUS can help non-expert clinicians in an LMIC ICU interpret LUS features more accurately, more quickly and more confidently.

© 2023. The Author(s).