- Zheng Wang, Yu Meng, Futian Weng, Yinghao Chen, Fanggen Lu, Xiaowei Liu, Muzhou Hou, and Jie Zhang.
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China.
- Ann Biomed Eng. 2020 Jan 1; 48 (1): 312-328.
Abstract

An accurate characterization of the distribution of abdominal adipose tissue plays a major role in predicting disease risk. This paper proposes a novel and effective three-level convolutional neural network (CNN) approach that automates the selection of abdominal computed tomography (CT) images from large-scale CT scans and automatically quantifies visceral and subcutaneous adipose tissue. First, the proposed framework employs a support vector machine (SVM) classifier with configured parameters to identify abdominal CT images from screening patients. Second, a pyramid dilation network (DilaLab) is designed, based on CNN, to address the complex distribution and non-abdominal internal adipose tissue problems of biomedical image segmentation of visceral adipose tissue. Finally, since the trained DilaLab implicitly encodes fat-related learning, the transferred DilaLab and a simple decoder constitute a new network (DilaLabPlus) for quantifying subcutaneous adipose tissue. The networks are trained not only on all available CT images but also on a limited number of CT scans, such as 70 samples including a 10% validation subset. All networks yield precise results: the configured SVM classifier achieves a promising mean accuracy of 99.83%, DilaLabPlus achieves an accuracy of 98.08 ± 0.84% (mean ± standard deviation) with a false-positive rate of 0.7 ± 0.8%, and DilaLab achieves an accuracy of 97.82 ± 1.34% with a false-positive rate of 1.23 ± 1.33%. This study demonstrates considerable improvement in the feasibility and reliability of fully automated recognition of abdominal CT slices and segmentation of the selected slices into subcutaneous and visceral adipose tissue, with high agreement with manually annotated biomarkers.
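The "pyramid dilation" idea behind DilaLab refers to atrous (dilated) convolutions applied at several dilation rates, which enlarge the receptive field without adding parameters and capture context at multiple scales. The sketch below is purely illustrative and is not the paper's implementation: the function names (`dilated_conv2d`, `dilation_pyramid`), the kernel, and the dilation rates are all assumptions chosen to demonstrate the general mechanism in plain NumPy.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """'Same'-padded 2D cross-correlation with a dilated (atrous) kernel.

    Dilation inserts (dilation - 1) zeros between kernel taps, so a 3x3
    kernel at rate 2 covers a 5x5 neighborhood with only 9 weights.
    Illustrative only; not the architecture from the paper.
    """
    kh, kw = kernel.shape
    # Effective kernel extent after dilation.
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    ph, pw = eh // 2, ew // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros(image.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + image.shape[0],
                                         dj:dj + image.shape[1]]
    return out

def dilation_pyramid(image, kernel, rates=(1, 2, 4)):
    """Stack responses at several dilation rates: a pyramid of contexts."""
    return np.stack([dilated_conv2d(image, kernel, r) for r in rates])
```

With an impulse input, the rate-1 response covers a 3x3 neighborhood while the rate-2 response reaches pixels two steps away, which is the multi-scale context aggregation that pyramid dilation networks exploit.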