- Yida Wang, Yang Song, Fang Wang, Jingjing Sun, Xinyi Gao, Zhe Han, Lei Shi, Guoliang Shao, Mingxia Fan, and Guang Yang.
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China. Electronic address: ydwang@phy.ecnu.edu.cn.
- Eur J Radiol. 2020 Mar 1; 124: 108822.
Purpose: To propose an automatic approach based on a convolutional neural network (CNN) to evaluate the quality of T2-weighted liver magnetic resonance (MR) images as nondiagnostic (ND) or diagnostic (D).

Materials and Methods: We included 150 T2-weighted liver MR imaging examinations in this retrospective study. Each slice of the liver images was annotated with a label, D or ND, by two radiologists with seven and six years of experience, respectively. Additionally, the radiologists manually segmented the liver region as the ground truth for liver segmentation. One CNN was trained to segment the liver region, and a second CNN was used to classify the quality of patches extracted from the liver region. The quality of an image was determined from the percentage of nondiagnostic patches among all liver patches in the image. Treating nondiagnostic as the positive class, the accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic curve (AUC), and confusion matrix were used to evaluate our model. A Mann-Whitney U test was performed with the statistical significance level set at 0.05.

Results: Our model achieved good performance, with an accuracy of 88.3 %, sensitivity of 86.0 %, specificity of 89.4 %, PPV of 78.6 %, NPV of 93.4 %, and AUC of 0.911 (95 % confidence interval: 0.882-0.939, p < 0.05). The confusion matrix of our model indicated good concordance with that of the radiologists.

Conclusions: The proposed two-step patch-based model achieved excellent performance when assessing the quality of liver MR images.

Copyright © 2020 Elsevier B.V. All rights reserved.
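The aggregation rule described in the abstract (an image-level quality label derived from the percentage of nondiagnostic patches) and the image-level metrics can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the patch-probability threshold, the ND-fraction cutoff, and the function names are assumptions, since the abstract does not report the cutoff values used.

```python
import numpy as np

def image_quality_from_patches(patch_nd_probs, patch_threshold=0.5, nd_fraction_threshold=0.5):
    """Aggregate patch-level CNN outputs into an image-level quality label.

    Hypothetical thresholds: the paper derives the image label from the
    percentage of nondiagnostic patches, but the abstract does not state
    the cutoffs, so 0.5 is used here purely for illustration.
    """
    patch_labels = np.asarray(patch_nd_probs) >= patch_threshold  # True = nondiagnostic patch
    nd_fraction = patch_labels.mean()                             # fraction of ND patches in the image
    return ("ND" if nd_fraction >= nd_fraction_threshold else "D"), nd_fraction

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV and NPV, treating ND (1) as the positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),
        "NPV":         tn / (tn + fn),
    }

# Toy usage: three liver patches from one image, then image-level evaluation.
label, frac = image_quality_from_patches([0.9, 0.7, 0.2])
print(label, round(frac, 2))                       # -> ND 0.67
print(binary_metrics([1, 0, 0, 1], [1, 0, 1, 1]))  # toy ground truth vs. predictions
```

Treating nondiagnostic as the positive class matches the abstract's convention, so sensitivity here is the fraction of truly nondiagnostic images that the model flags as such.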