- Bo Wang, Yang Lei, Sibo Tian, Tonghe Wang, Yingzi Liu, Pretesh Patel, Ashesh B Jani, Hui Mao, Walter J Curran, Tian Liu, and Xiaofeng Yang.
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
- Med Phys. 2019 Apr 1;46(4):1707-1718.
Purpose: Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation is time-consuming and subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge.

Methods: We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). A deeply supervised mechanism was introduced into the 3D FCN to effectively alleviate the exploding and vanishing gradient problems common in training deep models, which forces the update of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the network and thereby improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function with cosine and cross-entropy terms, which measures the similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as the gold standard against which our segmentation accuracy was measured.

Results: The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95% HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach. Our method yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95% HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56.

Conclusion: We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.

© 2019 American Association of Physicists in Medicine.
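The abstract names three ingredients: deep supervision, a group dilated convolution that aggregates multiscale context, and a combined cosine plus cross-entropy loss. The sketch below is a minimal PyTorch illustration of the latter two ideas, not the authors' implementation; the dilation rates, channel counts, 1×1×1 fusion layer, and the `cosine_weight` factor are assumptions made for illustration only.

```python
# Minimal sketch (assumed design, not the paper's released code):
# (1) a 3D "group dilated convolution" block that runs parallel dilated branches
#     and fuses them, enlarging the effective receptive field;
# (2) a combined loss with a cross-entropy (dissimilarity) term and a
#     cosine-similarity term between predicted and manual label maps.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupDilatedConv3d(nn.Module):
    """Parallel 3D convolutions with different dilation rates, fused by a 1x1x1 conv."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):  # dilation rates are an assumption
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different effective receptive field; concatenate and fuse.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)


def combined_loss(logits, target, cosine_weight=1.0):
    """Cross-entropy plus a cosine-distance term.

    logits: (N, C, D, H, W) raw network outputs; target: (N, D, H, W) integer labels.
    The weighting between the two terms is not given in the abstract, so
    cosine_weight is a placeholder.
    """
    ce = F.cross_entropy(logits, target)

    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()

    # Cosine similarity between predicted probabilities and the manual contour,
    # averaged over the batch; (1 - similarity) acts as the dissimilarity penalty.
    cos = F.cosine_similarity(probs.flatten(1), one_hot.flatten(1), dim=1).mean()
    return ce + cosine_weight * (1.0 - cos)
```

In a 3D FCN such a block would plausibly sit at the bottleneck or deeper encoder stages, where enlarging the receptive field is cheapest; the deep-supervision idea mentioned in the abstract would add auxiliary losses of the same form on intermediate decoder outputs.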