• J Magn Reson Imaging · Feb 2020

    Automated deep learning method for whole-breast segmentation in diffusion-weighted breast MRI.

    • Lei Zhang, Aly A Mohamed, Ruimei Chai, Yuan Guo, Bingjie Zheng, and Shandong Wu.
    • Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA.
    • J Magn Reson Imaging. 2020 Feb 1; 51 (2): 635-643.

    Background: Diffusion-weighted imaging (DWI) in MRI plays an increasingly important role in diagnostic applications and in developing imaging biomarkers. Automated whole-breast segmentation is an important yet challenging step for quantitative breast imaging analysis. While methods have been developed for dynamic contrast-enhanced (DCE) MRI, automated whole-breast segmentation in breast DWI is still underdeveloped.

    Purpose: To develop a deep/transfer learning-based segmentation approach for DWI scans and to conduct an extensive assessment on four imaging datasets from both internal and external sources.

    Study Type: Retrospective.

    Subjects: In all, 98 patients (144 MRI scans; 11,035 slices) across four breast MRI datasets from two institutions.

    Field Strength/Sequence: 1.5T scanners with DCE sequences (Dataset 1 and Dataset 2) and a DWI sequence; a 3.0T scanner with one external DWI sequence.

    Assessment: Deep learning models (UNet and SegNet) and transfer learning were used as segmentation approaches. The main DCE dataset (4,251 2D slices from 39 patients) was used for pretraining and internal validation, and an unseen DCE dataset (431 2D slices from 20 patients) served as an independent test set for evaluating the pretrained DCE models. The main DWI dataset (6,343 2D slices from 75 MRI scans of 29 patients) was used for transfer learning and internal validation, and an unseen DWI dataset (10 2D slices from 10 patients) was used for independent evaluation of the fine-tuned DWI segmentation models. Manual segmentations by three radiologists (each with >10 years of experience) established the ground truth. Segmentation performance was measured with the Dice Coefficient (DC), quantifying agreement between the expert radiologists' manual segmentations and the algorithm-generated segmentations.

    Statistical Tests: The mean and standard deviation of the DCs were calculated to compare segmentation results across the deep learning models.

    Results: For segmentation on DCE MRI, the average DC of the UNet was 0.92 (cross-validation on the main DCE dataset) and 0.87 (external evaluation on the unseen DCE dataset), both higher than the corresponding SegNet performance. When segmenting DWI images with the fine-tuned models, the average DC of the UNet was 0.85 (cross-validation on the main DWI dataset) and 0.72 (external evaluation on the unseen DWI dataset), again outperforming the SegNet on the same datasets.

    Data Conclusion: The internal and independent tests show that the deep/transfer learning models can achieve promising segmentation performance on DWI data from different institutions and scanner types. The proposed approach may provide an automated toolkit to support computer-aided quantitative analysis of breast DWI images.

    Level of Evidence: 3
    Technical Efficacy: Stage 2

    J Magn Reson Imaging 2020;51:635-643. © 2019 International Society for Magnetic Resonance in Medicine.
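    For readers unfamiliar with the evaluation metric, the Dice Coefficient used throughout the study is DC = 2|A∩B| / (|A| + |B|) for two binary masks A and B. The following is a minimal illustrative sketch (function and variable names are ours, not from the paper), assuming segmentation masks flattened to 0/1 sequences:

    ```python
    def dice_coefficient(pred, truth):
        """Dice Coefficient between two binary masks given as flat 0/1 sequences."""
        intersection = sum(p * t for p, t in zip(pred, truth))  # |A ∩ B|
        total = sum(pred) + sum(truth)                          # |A| + |B|
        # Convention: two empty masks agree perfectly.
        return 2.0 * intersection / total if total else 1.0

    # Toy example: 2 overlapping foreground pixels out of 3 + 3.
    pred  = [0, 1, 1, 1, 0, 0]
    truth = [0, 1, 1, 0, 0, 1]
    print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
    ```

    A DC of 1.0 indicates perfect overlap with the radiologist's manual segmentation; 0.0 indicates no overlap.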
