• Academic Radiology · Aug 2020

    Deep Learning-based Quantification of Abdominal Subcutaneous and Visceral Fat Volume on CT Images.

    • Andrew T Grainger, Arun Krishnaraj, Michael H Quinones, Nicholas J Tustison, Samantha Epstein, Daniela Fuller, Aakash Jha, Kevin L Allman, and Weibin Shi.
    • Departments of Biochemistry & Molecular Genetics, Richmond, Virginia.
    • Acad Radiol. 2020 Aug 5.

    Rationale and Objectives: To develop a deep learning-based algorithm using the U-Net architecture to measure abdominal fat on computed tomography (CT) images.

    Materials and Methods: Sequential CT images spanning the abdominal region of seven subjects were manually segmented to calculate subcutaneous fat (SAT) and visceral fat (VAT). The resulting segmentation maps of SAT and VAT were augmented using a template-based data augmentation approach to create a large dataset for neural network training. Neural network performance was evaluated both on sequential CT slices from three subjects and on randomly selected CT images from the upper, central, and lower abdominal regions of 100 subjects.

    Results: Subcutaneous and abdominal cavity segmentations created by the two methods were highly comparable, with an overall Dice similarity coefficient of 0.94. Pearson's correlation coefficients between the subcutaneous and visceral fat volumes quantified by the two methods were 0.99 and 0.99, and the overall percent residual squared errors were 5.5% and 8.5%, respectively. Manual segmentation of SAT and VAT on the 555 CT slices used for testing took approximately 46 hours, while automated segmentation took approximately 1 minute.

    Conclusion: Our data demonstrate that deep learning methods utilizing a template-based data augmentation strategy can accurately and rapidly quantify total abdominal SAT and VAT from a small number of training images.

    Copyright © 2020 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
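The Dice similarity coefficient reported above as the agreement measure between the manual and automated segmentations can be sketched as follows. This is a minimal illustration with toy binary masks, not data or code from the study:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks standing in for manual vs. automated SAT segmentations
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
auto   = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

print(round(dice_coefficient(auto, manual), 3))  # → 0.857
```

A coefficient of 1.0 indicates identical masks; the study's overall value of 0.94 indicates close agreement between manual and automated segmentations.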
