- Ao Liu, Xilin Zhang, Jiaxin Zhong, Zilu Wang, Zhenyang Ge, Zhong Wang, Xiaoya Fan, and Jing Zhang.
- School of Software Technology, Dalian University of Technology, Dalian, China.
- Ann. Med. 2024 Dec 1; 56 (1): 2418963.
Objective: The risk of gastric cancer can be predicted from gastroscopic manifestations using the Kyoto Gastritis Score. This study aims to validate the applicability of AI approaches for recognizing gastroscopic manifestations as defined by the Kyoto Gastritis Score, with the goal of improving early gastric cancer detection and reducing gastric cancer mortality.

Methods: In this retrospective study, 29,013 gastric endoscopy images were collected and carefully annotated into five categories according to the Kyoto Gastritis Score: atrophy (A), diffuse redness (DR), enlarged folds (H), intestinal metaplasia (IM), and nodularity (N). Treating this as a multi-label recognition task, we propose a deep learning approach composed of five GAM-EfficientNet models, each performing a multi-class classification to quantify one gastroscopic manifestation, i.e., no presentation or a severity score of 0-2. The approach was compared with endoscopists of varying years of experience in terms of accuracy, specificity, precision, recall, and F1 score.

Results: The approach demonstrated good performance in identifying the five manifestations of the Kyoto Gastritis Score, with an average accuracy, specificity, precision, recall, and F1 score of 78.70%, 91.92%, 80.23%, 78.70%, and 0.78, respectively. The average performance of five experienced endoscopists was 72.63%, 90.00%, 77.68%, 72.63%, and 0.73, while that of five less experienced endoscopists was 66.60%, 87.44%, 70.88%, 66.60%, and 0.66, respectively. The sample t-test indicates that the approach's average accuracy, specificity, precision, recall, and F1 score for identifying the five manifestations were significantly higher than those of less experienced endoscopists, experienced endoscopists, and all endoscopists on average (p < 0.05).

Conclusion: Our study demonstrates the potential of deep learning approaches to outperform junior, and even senior, endoscopists in gastric manifestation recognition.
Thus, the deep learning approach holds potential as an auxiliary tool, although prospective validation is still needed to assess its clinical applicability.
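The evaluation above reports accuracy together with macro-averaged specificity, precision, recall, and F1 for each four-way classification (no presentation, or severity 0-2). A minimal sketch of how such per-manifestation metrics are typically computed (one-vs-rest counts per class, then macro-averaged) is given below; the label names and function names are illustrative assumptions, not the authors' code:

```python
def one_vs_rest_counts(y_true, y_pred, cls):
    """Confusion counts for one class treated as positive, all others as negative."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def macro_metrics(y_true, y_pred):
    """Overall accuracy plus macro-averaged specificity, precision, recall, F1."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, specificities, f1s = [], [], [], []
    for cls in set(y_true):  # e.g. {"none", "score_0", "score_1", "score_2"}
        tp, fp, fn, tn = one_vs_rest_counts(y_true, y_pred, cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        specificities.append(spec)
        f1s.append(f1)
    n = len(precisions)
    return {
        "accuracy": accuracy,
        "specificity": sum(specificities) / n,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }
```

Under this macro-averaging convention, recall equals accuracy only when the class distribution is balanced, which may explain the identical accuracy and recall figures reported in the abstract; whether the authors used macro or weighted averaging is not stated.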