- A Ben Hamida, M Devanne, J Weber, C Truntzer, V Derangère, F Ghiringhelli, G Forestier, and C Wemmert.
- ICube, University of Strasbourg, France. Electronic address: aminabenhamida.bh@gmail.com.
- Comput. Biol. Med. 2021 Sep 1; 136: 104730.
Abstract
Nowadays, digital pathology plays a major role in the diagnosis and prognosis of tumours. Unfortunately, existing methods remain limited when faced with the high resolution and size of Whole Slide Images (WSIs), coupled with the lack of richly annotated datasets. Given the ability of Deep Learning (DL) methods to cope with large-scale applications, such models are an appealing solution for tissue classification and segmentation in histopathological images. This paper focuses on the use of DL architectures to classify and highlight colon cancer regions in a sparsely annotated histopathological data context. First, we review and compare state-of-the-art Convolutional Neural Networks (CNNs), including the AlexNet, VGG, ResNet, DenseNet and Inception models. To cope with the shortage of rich WSI datasets, we resort to transfer learning, relying on a large computer vision dataset (ImageNet) to train the network and generate a rich collection of learnt features. Testing and evaluation of these models on our AiCOLO colon cancer dataset yield accurate patch-level classification results, reaching an accuracy rate of up to 96.98% with ResNet. The CNN models are also tested and evaluated on the CRC-5000, NCT-CRC-HE-100K and merged datasets, where ResNet achieves 96.77%, 99.76% and 99.98%, respectively, on the three publicly available datasets. Then, we present a pixel-wise segmentation strategy for colon cancer WSIs using both UNet and SegNet models, and introduce a multi-step training strategy as a remedy for the sparse annotation of histopathological images. UNet and SegNet are trained and tested under different scenarios, including data augmentation and transfer learning, and reach accuracy rates of up to 76.18% and 81.22%, respectively. In addition, we test our training strategy and models on the CRC-5000, NCT-CRC-HE-100K and Warwick datasets, on which SegNet achieves respective accuracy rates of 98.66%, 99.12% and 78.39%. Finally, we analyze the existing models to identify the most suitable network and the most effective training strategy for our colon tumour segmentation case study.
Copyright © 2021 Elsevier Ltd. All rights reserved.
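The transfer-learning setup described in the abstract, an ImageNet-pretrained CNN fine-tuned for patch-level tissue classification, can be illustrated with a minimal PyTorch sketch. The dataset layout, class count, ResNet-50 variant and hyperparameters below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: ImageNet-pretrained ResNet fine-tuned for patch-level
# classification of histopathology tiles. Paths, class count and
# hyperparameters are placeholders, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # e.g. tissue categories in a CRC patch dataset (assumption)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing so the pretrained features remain meaningful.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical patch folder laid out as patches/train/<class_name>/<patch>.png
train_set = datasets.ImageFolder("patches/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load ResNet-50 with ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(DEVICE)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):  # small epoch budget, for the sketch only
    for patches, labels in train_loader:
        patches, labels = patches.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()
        loss = criterion(model(patches), labels)
        loss.backward()
        optimizer.step()
```

The same pattern of loading pretrained weights and swapping the final layer extends to the other backbones compared in the paper (AlexNet, VGG, DenseNet, Inception), and an analogous encoder initialisation is the usual way to bring ImageNet features into UNet- or SegNet-style segmentation models.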