Conference Paper · PDF available

Semantic segmentation for prostate cancer grading by convolutional neural networks

... Zhaoxuan Ma et al. [14] collected a dataset of 513 HR image tiles from primary prostate cancer, in which each gland and stroma region was identified and graded by a pathologist. On this unique dataset, a multi-scale U-Net, two SegNet variants, and FCN-8s were tested for semantic segmentation of low- and high-grade tumors. ...
Article
Full-text available
For early detection of cancer tumors, a semantic segmentation-based technique is proposed, because many existing methods fail at classification due to limited accuracy and ineffective decision-making. Therefore, this paper introduces hybrid semantic segmentation networks. The input image sets are first passed to a pre-processing phase and then to the segmentation process. Pre-processing performs image contrast enhancement by Adaptive Local Gamma Correction (ALGC). Semantic segmentation is carried out by a hybrid network combining the DenseNet-121 model with an attention-based pyramid scene parsing network (Att-PSPNet), which handles feature-map extraction and scene parsing. An attention-gate mechanism is introduced to improve the quality of the high-dimensional hidden-layer features by highlighting useful information while suppressing unnecessary information and noise. To support efficient decisions and enhance prediction accuracy, a pyramid dilated convolution module (PDM), a branch of the attention-based pyramid pooling module, enlarges the receptive field to extract global information. Additionally, a global average pooling (GAP) layer is introduced at the output of the feature map. The performance of the proposed method is validated in the Google Colab environment with a histologically confirmed dataset. The experimental results are compared with existing methods such as FCN, U-Net, and PSPNet in terms of IoU, accuracy, precision, recall, F1 score, and more. The proposed method achieves 94.68% prediction accuracy, which is higher than the existing approaches.
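An additive attention gate of the kind this abstract describes can be sketched in a few lines of PyTorch. The layer names and channel sizes below are illustrative assumptions, not the authors' Att-PSPNet implementation:

```python
# A minimal sketch of an additive attention gate: score each spatial
# location from the skip features and a gating signal, then reweight
# the skip features so useful regions are amplified and noise suppressed.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn  # per-pixel attention mask applied to the features

feats = torch.randn(1, 64, 32, 32)   # high-dimensional hidden-layer features
gate = torch.randn(1, 128, 32, 32)   # coarser gating signal, resized to match
out = AttentionGate(64, 128, 32)(feats, gate)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```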
... Lt j et al. [27]: Pvt dataset, 10-fold cross-validation; 65.8% Jaccard across stroma, Gleason 3, Gleason 4 and benign glands, and 75.5% across stroma, benign glands and prostate cancer. Lt j et al. [28]: Pvt dataset, semi-supervised approach; 49.5% on an independent set. Ing et al. [29]: Pvt dataset; FCN-8s: mIoU of 0.759 and accuracy of 0.87; two SegNet variants and multi-scale U-Net: mIoU of 0.738 and accuracy of 0.885. Kalapahar et al. [30]: Pvt dataset; pixel-level Cohen's quadratic kappa of 0.52 using ResNets. Singh et al. [31]: Pvt dataset; Dice 0.5203 ± 0.2517 (train), 0.4931 ± 0.2557 (test). Ali et al. [32]: Pvt dataset; binary accuracy of 90%. ...
Article
Full-text available
A foremost cause of death in males worldwide is prostate cancer. Its early identification, detection and diagnosis are crucial in saving lives. In this paper, we present an efficient gland segmentation model using digital histopathology and deep learning. These methods have the potential to revolutionize diagnosis by identifying hidden patterns within the image. Recent improvements in data acquisition, processing and analysis with deep learning models have made artificial-intelligence-driven healthcare a topic of intensive investigation, in terms of inferring from the data and delivering meaningful insights. This study presents an automated method for segmenting histopathological images of human prostate glands. It focuses on developing new segmentation methods using a multi-channel algorithm with an attention mechanism that concentrates on important areas. We compare our results with a host of contemporary techniques and show that our method performs better at the segmentation task for histopathological imagery. Our method is able to delineate gland and background regions with an average Dice coefficient of 0.9168. With this attention-based model, we have thereby demonstrated accurate segmentation of gland regions, which could have significant positive implications for medical screening applications.
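The Dice coefficient reported above has a simple set-overlap form; a minimal sketch, with random masks standing in for gland segmentations:

```python
# Dice = 2|P ∩ T| / (|P| + |T|) for binary prediction P and target T.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    p = pred.astype(bool)
    t = target.astype(bool)
    return (2.0 * np.logical_and(p, t).sum() + eps) / (p.sum() + t.sum() + eps)

mask_pred = np.random.rand(512, 512) > 0.5   # stand-in for a predicted gland mask
mask_true = np.random.rand(512, 512) > 0.5   # stand-in for the annotation
print(dice_coefficient(mask_pred, mask_true))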
Article
Deep learning offers a promising methodology for the registration of prostate cancer images from histopathology to MRI. We explored how to effectively leverage key information from images to achieve improved end-to-end registration. We developed an approach based on a correlation attention registration framework to register segmentation labels of histopathology onto MRI. The network was trained using paired prostate datasets of histopathology and MRI from the Cancer Imaging Archive. We introduced an L2-Pearson correlation layer to enhance feature matching. Furthermore, our model employed an enhanced attention regression network to distinguish between key and non-key features. For data analysis, we used the Kolmogorov-Smirnov test and a one-sample t-test, with the statistical significance level for the one-sample t-test set at 0.001. Compared with two other models (ProsRegNet and CNNGeo), our model exhibited improved performance in Dice coefficient, with increases of 9.893% and 2.753%, respectively. The Hausdorff distance was reduced by approximately 50% relative to both, while the average label error (ALE) was reduced by 0.389% and 15.021%. The proposed improved multimodal prostate registration framework demonstrated high performance in statistical analysis. The results indicate that our enhanced strategy significantly improves registration performance and enables faster registration of histopathological images of patients undergoing radical prostatectomy to preoperative MRI. More accurate registration can prevent over-diagnosis of low-risk cancers and frequent false positives due to observer differences.
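A hedged sketch of a Pearson-style correlation layer for matching features across the two modalities, in the spirit of the correlation layers used by CNNGeo-like registration networks; the paper's exact L2-Pearson formulation may differ:

```python
# Zero-mean, unit-norm each spatial feature vector (Pearson normalization),
# then take all-pairs dot products to form a correlation volume.
import torch

def pearson_correlation_layer(fa: torch.Tensor, fb: torch.Tensor) -> torch.Tensor:
    # fa, fb: (B, C, H, W) feature maps from the two modalities.
    b, c, h, w = fa.shape
    fa = fa.flatten(2)                               # (B, C, H*W)
    fb = fb.flatten(2)
    fa = fa - fa.mean(dim=1, keepdim=True)           # zero mean per location
    fb = fb - fb.mean(dim=1, keepdim=True)
    fa = fa / (fa.norm(dim=1, keepdim=True) + 1e-8)  # unit L2 norm per location
    fb = fb / (fb.norm(dim=1, keepdim=True) + 1e-8)
    corr = torch.bmm(fa.transpose(1, 2), fb)         # (B, H*W, H*W) correlations
    return corr.view(b, h * w, h, w)

corr = pearson_correlation_layer(torch.randn(1, 64, 16, 16), torch.randn(1, 64, 16, 16))
print(corr.shape)  # torch.Size([1, 256, 16, 16])
```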
Chapter
The diagnosis of prostate cancer is driven by the histopathological appearance of epithelial cells and epithelial tissue architecture. Despite the fact that the appearance of the tumor-associated stroma contributes to diagnostic impressions, its assessment has not been standardized. Given the crucial role of the tumor microenvironment in tumor progression, it is hypothesized that the morphological analysis of stroma could have diagnostic and prognostic value. However, stromal alterations are often subtle and challenging to characterize through light microscopy alone. Emerging evidence suggests that computerized algorithms can be used to identify and characterize these changes. This paper presents a deep-learning approach to identify and characterize tumor-associated stroma in multi-modal prostate histopathology slides. The model achieved an average testing AUROC of 86.53% on a large curated dataset with over 1.1 million stroma patches. Our experimental results indicate that stromal alterations are detectable in the presence of prostate cancer and highlight the potential for tumor-associated stroma to serve as a diagnostic biomarker in prostate cancer. Furthermore, our research offers a promising computational framework for in-depth exploration of the field effect and tumor progression in prostate cancer.
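The patch-level AUROC quoted above can be computed mechanically with scikit-learn; the labels and scores below are synthetic stand-ins, not the chapter's data:

```python
# AUROC over per-patch scores: 1 = tumor-associated stroma, 0 = normal.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 5000)                 # synthetic patch labels
scores = rng.random(5000) * 0.7 + labels * 0.3    # mildly informative scores
print(f"AUROC: {roc_auc_score(labels, scores):.4f}")
```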
Article
Background and objective: Prostate cancer is one of the most common diseases affecting men. The main diagnostic and prognostic reference tool is the Gleason scoring system. An expert pathologist assigns a Gleason grade to a sample of prostate tissue. As this process is very time-consuming, some artificial intelligence applications have been developed to automate it. The training process is often confronted with insufficient and unbalanced databases, which affects the generalisability of the models. Therefore, the aim of this work is to develop a generative deep learning model capable of synthesising patches of any selected Gleason grade to perform data augmentation on unbalanced data and test the improvement of classification models. Methodology: The methodology proposed in this work consists of a conditional Progressive Growing GAN (ProGleason-GAN) capable of synthesising prostate histopathological tissue patches by selecting the desired Gleason grade cancer pattern in the synthetic sample. The conditional Gleason grade information is introduced into the model through embedding layers, so there is no need to add a term to the Wasserstein loss function. We used minibatch standard deviation and pixel normalisation to improve the performance and stability of the training process. Results: The realism of the synthetic samples was assessed with the Fréchet Inception Distance (FID). We obtained FID scores of 88.85 for non-cancerous patterns, 81.86 for GG3, 49.32 for GG4 and 108.69 for GG5 after post-processing stain normalisation. In addition, a group of expert pathologists was selected to perform an external validation of the proposed framework. Finally, the application of our proposed framework improved the classification results on the SICAPv2 dataset, proving its effectiveness as a data augmentation method. Conclusions: The ProGleason-GAN approach combined with stain-normalisation post-processing provides state-of-the-art results in terms of Fréchet Inception Distance. This model can synthesise samples of non-cancerous patterns, GG3, GG4 or GG5. The inclusion of conditional information about the Gleason grade during the training process allows the model to select the cancerous pattern in a synthetic sample. The proposed framework can be used as a data augmentation method.
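The FID quoted above has a standard closed form over Inception activations; a minimal computation, with random activations standing in for real features:

```python
# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)) between the
# Gaussian fits of real and synthetic activation sets.
import numpy as np
from scipy import linalg

def fid(act_real: np.ndarray, act_fake: np.ndarray) -> float:
    mu1, mu2 = act_real.mean(0), act_fake.mean(0)
    c1 = np.cov(act_real, rowvar=False)
    c2 = np.cov(act_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can add tiny imaginary parts
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean))

print(fid(np.random.randn(500, 64), np.random.randn(500, 64)))
```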
Article
Prostate cancer is a type of cancer that develops in the prostate, a little gland in men that resembles a walnut and secretes seminal fluid, which nourishes and transports sperm. A Taylor-Bird Squirrel Optimization-based Deep Recurrent Neural Network (Taylor-BSO based Deep RNN) is created to determine the severity of prostate cancer and start the diagnosis process. The Taylor series is combined with the Bird Swarm Algorithm (BSA) and the Squirrel Search Algorithm (SSA) to create Taylor-BSO. Noise present in the input MR image is eliminated by filtering it with a Hybrid Local and Non-Local Means (HLNLM) filtering model. The cancer classification procedure determines the existence or absence of a tumour using the features collected in the segmentation results. A threshold value then categorizes the tumour severity level as either a high-grade or low-grade tumour.
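The abstract does not define the HLNLM filter; one plausible reading, blending a local Gaussian filter with scikit-image's non-local means, is sketched below. The mixing weight and all parameters are assumptions, not the paper's formulation:

```python
# Illustrative hybrid: weighted blend of a local (Gaussian) denoiser
# and a non-local-means denoiser on a stand-in noisy MR slice.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_nl_means, estimate_sigma

img = np.random.rand(128, 128)                       # stand-in noisy MR slice
sigma = estimate_sigma(img)                          # noise level estimate
local = gaussian_filter(img, sigma=1.0)              # local smoothing
nonlocal_ = denoise_nl_means(img, h=0.8 * sigma, fast_mode=True)
alpha = 0.5                                          # assumed mixing weight
denoised = alpha * local + (1 - alpha) * nonlocal_
print(denoised.shape)
```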
Chapter
Colorectal cancer (CRC) is one of the most frequent cancers and the third leading cause of cancer death. Recent progress in deep neural networks makes it possible to detect such cancer automatically and accurately from histopathological images. The main challenge of such neural-network-based models is that we cannot understand why they make a given decision, and that leads to a question of trust. Histopathological whole-slide images are typically very large, and manual analysis of such data is tedious. In this paper, we show that a traditional convolutional neural network (CNN) can be used to answer what (cancer or non-cancer), while an attention-based pixel highlighter can highlight the important regions and help a human expert validate the decision (why). We have used the LC25000 dataset to evaluate the proposed method. The results show state-of-the-art accuracy with the ability to explain the decision.
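The "pixel highlighter" is not detailed in this abstract; a Grad-CAM-style saliency sketch in PyTorch shows one common way to produce such why-visualizations. The ResNet-18 backbone and layer choice are assumptions, not the authors' model:

```python
# Grad-CAM-style heatmap: weight the last conv features by the mean
# gradient of the predicted class, then ReLU and normalize.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)                      # stand-in histology patch
logits = model(x)
logits[0, logits.argmax()].backward()                # gradient of predicted class

w = grads["v"].mean(dim=(2, 3), keepdim=True)        # channel importance weights
cam = torch.relu((w * feats["v"]).sum(dim=1))        # (1, 7, 7) coarse heatmap
cam = cam / (cam.max() + 1e-8)                       # normalize for display
print(cam.shape)
```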
Article
Full-text available
Gleason grading of histological images is important in risk assessment and treatment planning for prostate cancer patients. Much research has been done in classifying small homogeneous cancer regions within histological images. However, semi-supervised methods published to date depend on pre-selected regions and cannot be easily extended to an image of heterogeneous tissue composition. In this paper, we propose a multi-scale U-Net model to classify images at the pixel-level using 224 histological image tiles from radical prostatectomies of 20 patients. Our model was evaluated by a patient-based 10-fold cross validation, and achieved a mean Jaccard index of 65.8% across 4 classes (stroma, Gleason 3, Gleason 4 and benign glands), and 75.5% for 3 classes (stroma, benign glands, prostate cancer), outperforming other methods.
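The mean Jaccard index used in this evaluation averages per-class intersection-over-union; a minimal sketch, with random label maps standing in for the pixel-level predictions and annotations:

```python
# Mean Jaccard (IoU) over classes, skipping classes absent from both maps.
import numpy as np

def mean_jaccard(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; do not penalize
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, (256, 256))    # 4 classes: stroma/G3/G4/benign
target = np.random.randint(0, 4, (256, 256))
print(mean_jaccard(pred, target, 4))
```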
Article
Full-text available
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and the most efficient inference memory-wise compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
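PyTorch exposes exactly the pooling-indices mechanism SegNet describes; a minimal sketch of one encoder/decoder stage, isolated from any full network:

```python
# Encoder max-pooling records the argmax indices; the decoder unpools
# with those indices (sparse map), then trainable convolutions densify it.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 3, 8, 8)
pooled, indices = pool(x)          # encoder stores where each max came from
sparse = unpool(pooled, indices)   # decoder places values back: sparse map
dense = nn.Conv2d(3, 3, kernel_size=3, padding=1)(sparse)  # densifying filters
print(pooled.shape, sparse.shape, dense.shape)
```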
Article
Full-text available
Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified in four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and in two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%.
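The second stage described above, an SVM trained on CNN features, can be sketched with scikit-learn; random vectors stand in for the CNN activations:

```python
# Train/evaluate an RBF-kernel SVM on (stand-in) CNN feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

features = np.random.randn(400, 512)          # stand-in CNN feature vectors
labels = np.random.randint(0, 4, 400)         # normal/benign/in situ/invasive
Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.3f}")
```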
Article
Full-text available
Histopathological assessments, including surgical resection and core needle biopsy, are the standard procedures in the diagnosis of prostate cancer. Current interpretation of histopathology images includes determination of the tumor area, Gleason grading, and identification of certain prognosis-critical features. Such a process is not only tedious but also prone to intra-/inter-observer variability. Recently, the FDA cleared the marketing of the first whole-slide imaging system for digital pathology. This opens a new era for computer-aided prostate image analysis and feature extraction based on digital histopathology images. In this work, we present an analysis pipeline that includes localization of the cancer region, grading, area ratios of different Gleason grades, and cytological/architectural feature extraction. The proposed algorithm combines human-engineered feature extraction with features learned by a deep neural network. Moreover, the entire pipeline is implemented to operate directly on the whole-slide images produced by digital scanners and is therefore potentially easy to translate into clinical practice. The algorithm is tested on 368 whole-slide images from the TCGA data set and achieves an overall accuracy of 75% in differentiating Gleason 3+4 from 4+3 slides.
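Operating directly on scanner whole-slide images is commonly done with the openslide library; a hedged tiling sketch, where the filename, tile size and downstream hook are hypothetical:

```python
# Iterate fixed-size tiles over level 0 of a whole-slide image.
import openslide

slide = openslide.OpenSlide("TCGA-sample.svs")     # hypothetical WSI file
width, height = slide.dimensions                   # level-0 pixel dimensions
tile = 1024
for y in range(0, height - tile + 1, tile):
    for x in range(0, width - tile + 1, tile):
        region = slide.read_region((x, y), 0, (tile, tile)).convert("RGB")
        # ...feed `region` to the localization/grading models...
slide.close()
```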
Article
Full-text available
With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites, and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62% and a negative predictive value of 96.77% in terms of pixel-by-pixel evaluation compared to manually annotated regions of invasive ductal carcinoma.
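The pixel-by-pixel evaluation reported above reduces to confusion counts between two binary masks; a sketch with random stand-ins:

```python
# PPV, NPV and Dice from the confusion counts of prediction vs. annotation.
import numpy as np

pred = np.random.rand(1000, 1000) > 0.5   # predicted invasive-tumor mask
true = np.random.rand(1000, 1000) > 0.5   # manually annotated region
tp = np.logical_and(pred, true).sum()
fp = np.logical_and(pred, ~true).sum()
fn = np.logical_and(~pred, true).sum()
tn = np.logical_and(~pred, ~true).sum()
print(f"PPV:  {tp / (tp + fp):.4f}")
print(f"NPV:  {tn / (tn + fn):.4f}")
print(f"Dice: {2 * tp / (2 * tp + fp + fn):.4f}")
```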
Book
New computerized approaches to various problems have become critically important in healthcare. Computer-assisted diagnosis has been extended towards support of clinical treatment. Mathematical information analysis and computer applications have become standard tools underpinning the current rapid progress in developing computational intelligence. Computerized support for the analysis of patient information, together with the implementation of computer-aided diagnosis and treatment systems, increases the objectivity of the analysis and speeds up the response to pathological changes. This book presents a variety of state-of-the-art information technology and its applications in networked environments, allowing robust computerized approaches to be introduced throughout the healthcare enterprise. Image analysis and its applications form the traditional part, dealing with problems of data processing, recognition and classification. Bioinformatics has become a dynamically developing field of computer-assisted biological data analysis. This book is a great reference tool for scientists who deal with problems of designing and implementing processing tools employed in systems that assist radiologists and biologists in patient data analysis.
Conference Paper
Torch7 is a versatile numeric computing framework and machine learning library that extends Lua. Its goal is to provide a flexible environment in which to design and train learning machines. Flexibility is obtained via Lua, an extremely lightweight scripting language. High performance is obtained via efficient OpenMP/SSE and CUDA implementations of low-level numeric routines. Torch7 can easily be interfaced to third-party software thanks to Lua's light interface.
Conference Paper
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called dropout that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
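torchvision ships an AlexNet implementation of this architecture; a two-line check of the roughly 60-million-parameter figure quoted above:

```python
# Count trainable parameters of torchvision's AlexNet (uninitialized weights).
import torchvision

model = torchvision.models.alexnet(weights=None)
print(sum(p.numel() for p in model.parameters()))  # roughly 6.1e7 parameters
```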
Article
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.
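The paper's central move, recasting a classifier's fully connected layers as convolutions so that any input size yields a spatial prediction map, can be sketched as follows; the toy trunk and class count are illustrative, not the paper's actual surgery on VGG:

```python
# A fully connected head over C*H*W features becomes a 1x1 convolution,
# so the network accepts arbitrary input sizes and emits per-pixel scores.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # tiny stand-in for a conv trunk
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)
head = nn.Conv2d(32, 21, kernel_size=1)        # 21 classes, e.g. PASCAL VOC
upsample = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)

x = torch.randn(1, 3, 96, 128)                 # arbitrary input size
scores = upsample(head(backbone(x)))           # dense per-pixel class scores
print(scores.shape)                            # torch.Size([1, 21, 96, 128])
```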
Article
Each year, the treatment decisions for more than 230,000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x 100,000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.