Fig 7 - uploaded by Vassili Kovalev
Examples of reference points (colored dots) extracted for the tumor regions detected by the method, with color-coded probabilities of categorization into the Tumor class. White contours indicate the tissue sample edges.


Source publication
Conference Paper
This paper presents results of the use of a Deep Learning approach and Convolutional Neural Networks (CNN) for the problem of breast cancer diagnosis. Specifically, the main goal of this particular study was to detect and to segment (i.e. delineate) regions of micro- and macro-metastases in whole-slide images of lymph node sections. The whole-slide ima...

Context in source publication

Context 1
... employed for removing connected components that are unlikely to belong to the tumor class because they are too small was set to MIN_SIZE = 5 pixels. At the step of computing the tumor probability score P_TUM for the whole extracted region, the most suitable MAX_SIZE parameter was found to be MAX_SIZE = 30 image pixels at image pyramid Level 9. Fig. 7 provides examples of the resultant tumor regions detected by the method, with their reference locations given by colored dots. The colors represent the probability of categorization of each region into the Tumor class, on the color scale provided on the right. Finally, reference coordinates of the location of each selected connected component were ...
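The post-processing step described above — discarding connected components that are too small, then scoring each surviving region and taking a reference point — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; only the MIN_SIZE = 5 threshold comes from the text, while the binarization threshold and the centroid-based reference point are assumptions:

```python
import numpy as np
from scipy import ndimage

def filter_tumor_components(prob_map, threshold=0.5, min_size=5):
    """Remove connected components unlikely to belong to the tumor class
    (too small), then score each surviving region by its mean tumor
    probability and report a reference point for it."""
    mask = prob_map >= threshold          # binarize the tumor heatmap
    labels, n = ndimage.label(mask)       # label connected foreground pixels
    regions = []
    for i in range(1, n + 1):
        component = labels == i
        size = int(component.sum())
        if size < min_size:               # discard spurious tiny components
            continue
        p_tum = float(prob_map[component].mean())  # region-level tumor score
        ys, xs = np.nonzero(component)
        ref = (int(ys.mean()), int(xs.mean()))     # centroid as reference point
        regions.append({"ref_point": ref, "p_tum": p_tum, "size": size})
    return regions
```

Each returned region corresponds to one colored dot in Fig. 7, with `p_tum` playing the role of the color-coded probability.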

Citations

... Deep learning has also been considered in interpreting and integrating multiple sources of information in pathology (histology, molecular, etc.) [15]. Recent studies have shown promising results in using deep learning to detect breast cancer in whole slide imaging of SLNs (examples: Camelyon16, ICIAR 2018) [16,17]. However, they require extensive scanning and analysis of all the lymph node slides for each case. ...
Preprint
Deep learning has been shown to be useful to detect breast cancer metastases by analyzing whole slide images of sentinel lymph nodes. However, it requires extensive scanning and analysis of all the lymph node slides for each case. Our deep learning study focuses on breast cancer screening with only a small set of image patches from any sentinel lymph node, positive or negative for metastasis, to detect changes in the tumor environment and not in the tumor itself. We design a convolutional neural network in the Python language to build a diagnostic model for this purpose. The excellent results from this preliminary study provided a proof of concept for incorporating an automated metastatic screen into the digital pathology workflow to augment the pathologists' productivity. Our approach is unique since it provides a very rapid screen rather than an exhaustive search for tumor in all fields of all sentinel lymph nodes.
... Therefore, machine and deep learning techniques were utilized for various tasks in digital pathology [7], [8]. For example, to identify types of cancerous lesions [9], [10], to segment cell nuclei [11], to segment inflammatory bowel disease (IBD) tissue features [12], to classify different cancer types via histology images [13], [14], to perform cancer screening [15], and to personalize cancer care [16]. Yet, several fundamental challenges still remain, particularly the gap between the scale of the features that determine the medical condition (which can be on the scale of a few cells) and the overall content of an entire slide that has a typical size of 10^8-10^10 pixels (much larger than the typical input size of most architectures, which is about 10^6 pixels). ...
Conference Paper
Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus associated with elevated numbers of eosinophils. Disease diagnosis and monitoring require determining the concentration of eosinophils in esophageal biopsies, a time-consuming, tedious and somewhat subjective task currently performed by pathologists. Here, we developed a machine learning pipeline to identify, quantitate and diagnose EoE patients at the whole-slide-image level. We propose a platform that combines a multi-label segmentation deep network decision support system with dynamic convolution, able to process whole biopsy slides. Our network is able to segment both intact and not-intact eosinophils with a mean intersection over union (mIoU) of 0.93. This segmentation enables the local quantification of intact eosinophils with a mean absolute error of 0.611 eosinophils. We examined a cohort of 1066 whole slide images from 400 patients derived from multiple institutions. Using this set, our model achieved a global accuracy of 94.75%, sensitivity of 94.13%, and specificity of 95.25% in reporting EoE disease activity. Our work provides state-of-the-art performance on the largest EoE cohort to date, and successfully addresses two of the main challenges in EoE diagnostics and digital pathology: the need to detect several types of small features simultaneously, and the ability to analyze whole slides efficiently. Our results pave the way for an automated diagnosis of EoE and can be utilized for other conditions with similar challenges.
... Two methods were employed to address the challenge of training on high-size images containing small features: first, downscaling the original image with the potential of losing the information associated with small features [14]; and second, dividing the images into smaller patches and analyzing each of the patches [17]. Although the second approach solves the image size challenge, if the relevant small feature (e.g., a local increase in eosinophil density) appears in only a few patches, many patches that do not contain the small feature are still labeled as positive. ...
Article
Goal: Eosinophilic esophagitis (EoE) is an allergic inflammatory condition characterized by eosinophil accumulation in the esophageal mucosa. EoE diagnosis includes a manual assessment of eosinophil levels in mucosal biopsies-a time-consuming, laborious task that is difficult to standardize. One of the main challenges in automating this process, like many other biopsy-based diagnostics, is detecting features that are small relative to the size of the biopsy. Results: In this work, we utilized hematoxylin- and eosin-stained slides from esophageal biopsies from patients with active EoE and control subjects to develop a platform based on a deep convolutional neural network (DCNN) that can classify esophageal biopsies with an accuracy of 85%, sensitivity of 82.5%, and specificity of 87%. Moreover, by combining several downscaling and cropping strategies, we show that some of the features contributing to the correct classification are global rather than specific, local features. Conclusions: We report the ability of artificial intelligence to identify EoE using computer vision analysis of esophageal biopsy slides. Further, the DCNN features associated with EoE are based on not only local eosinophils but also global histologic changes. Our approach can be used for other conditions that rely on biopsy-based histologic diagnostics.
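The second strategy recurring in the citing texts above — dividing a whole-slide image into smaller patches and analyzing each patch separately, instead of downscaling and losing small features — can be sketched as follows. This is an illustrative sketch only; the patch size and stride are arbitrary choices, not values from any of the cited works:

```python
import numpy as np

def extract_patches(image, patch_size=256, stride=256):
    """Split a large 2D image (H x W, optionally with channels) into
    square patches plus their top-left coordinates. A patch-wise
    classifier can then be applied to each patch, avoiding the need
    to downscale the full slide to fit a network's input size."""
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            coords.append((y, x))
    return patches, coords
```

As the citing texts note, the drawback of this scheme is label noise: when a slide-level label is propagated to its patches, many patches that do not actually contain the small diagnostic feature are still labeled as positive.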
... With the approach of semantic segmentation, a tissue image is labeled to designate local regions of the image as belonging to certain classes, and the network is then trained to identify local regions belonging to the same classes in separate images. Previously, this approach has been applied successfully to identify types of cancerous lesions 8,9, to segment cell nuclei 10, to segment inflammatory bowel disease (IBD) tissue features 11, to classify different cancer types via histology images 12,13, to perform cancer screening 14, to personalize cancer care 15, and to perform fundamental deep learning methods in medical field analysis 16, such as image registration, detection of anatomical and cellular structures, tissue segmentation, and computer-aided disease diagnosis and prognosis. Various net architectures have been developed for segmentation, such as Mask RCNN 17, Mask ECNN 18, Deep Lab 19,20, and Generative Adversarial Networks 21. ...
Preprint
Background. Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus associated with elevated numbers of eosinophils. Disease diagnosis and monitoring require determining the concentration of eosinophils in esophageal biopsies, a time-consuming, tedious and somewhat subjective task currently performed by pathologists. Methods. Herein, we aimed to use machine learning to identify, quantitate and diagnose EoE. We labeled more than 100M pixels of 4345 images obtained by scanning whole slides of H&E-stained sections of esophageal biopsies derived from 23 EoE patients. We used this dataset to train a multi-label segmentation deep network. To validate the network, we examined a replication cohort of 1089 whole slide images from 419 patients derived from multiple institutions. Findings. PECNet segmented both intact and not-intact eosinophils with a mean intersection over union (mIoU) of 0.93. This segmentation was able to quantitate intact eosinophils with a mean absolute error of 0.611 eosinophils and classify EoE disease activity with an accuracy of 98.5%. Using whole slide images from the validation cohort, PECNet achieved an accuracy of 94.8%, sensitivity of 94.3%, and specificity of 95.14% in reporting EoE disease activity. Interpretation. We have developed a deep learning multi-label semantic segmentation network that successfully addresses two of the main challenges in EoE diagnostics and digital pathology: the need to detect several types of small features simultaneously, and the ability to analyze whole slides efficiently. Our results pave the way for an automated diagnosis of EoE and can be utilized for other conditions with similar challenges.
... Two methods were employed to address the challenge of training on high-resolution images containing small features: first, downscaling the original image with the potential of losing the information associated with small features [14]; and second, dividing the images into smaller patches and analyzing each of the patches [16]. Although the second approach solves the image size challenge, if the relevant small feature (e.g., a local increase in eosinophil density) appears in only a few patches, many patches that do not contain the small feature are still labeled as positive. ...
Preprint
Goal Eosinophilic esophagitis (EoE) is an allergic inflammatory condition characterized by eosinophil accumulation in the esophageal mucosa. EoE diagnosis includes a manual assessment of eosinophil levels in mucosal biopsies—a time-consuming, laborious task that is difficult to standardize. One of the main challenges in automating this process, like many other biopsy-based diagnostics, is detecting features that are small relative to the size of the biopsy. Results In this work, we utilized hematoxylin- and eosin-stained slides from esophageal biopsies from patients with active EoE and control subjects to develop a platform based on a deep convolutional neural network (DCNN) that can classify esophageal biopsies with an accuracy of 85%, sensitivity of 82.5%, and specificity of 87%. Moreover, by combining several downscaling and cropping strategies, we show that some of the features contributing to the correct classification are global rather than specific, local features. Conclusions We report the ability of artificial intelligence to identify EoE using computer vision analysis of esophageal biopsy slides. Further, the DCNN features associated with EoE are based on not only local eosinophils but also global histologic changes. Our approach can be used for other conditions that rely on biopsy-based histologic diagnostics. Impact Statement Deep convolutional neural network (DCNN), together with a systematic downscaling approach, can classify esophageal biopsies with high accuracy and reveals a global nature of the histologic features of eosinophilic esophagitis. Our approach of systematic analysis of the image size versus downscaling tradeoff can be used to improve disease classification performance and insight gathering in digital pathology.
... Using this approach, a 1.5% improvement in accuracy was achieved compared to the baseline network (VGG). Kovalev et al. [118] and Litjens et al. [4] applied a fixed threshold followed by a connected component analysis of the heatmap. All components with a diameter smaller than a predefined value were removed to get rid of spurious detections caused by artifacts (tissue deformation and dust). ...
Article
Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocytes and mitosis counting), and segmentation (e.g., nuclei and glands segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage which is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview on the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
... Recent studies showed that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in digital images. Medical image analysis groups around the world started to apply CNNs and other DL methodologies to a wide range of applications [14][15][16], and promising results have been emerging from recent studies [7,[20][21][22][23]. The International Symposium on Biomedical Imaging (ISBI) held the Camelyon Grand Challenge [22] in 2016 to evaluate computational systems for the automated detection of metastatic breast cancer in WSI of sentinel lymph node biopsies. ...
... Medical image analysis groups around the world started to apply CNNs and other DL methodologies to a wide range of applications [14][15][16], and promising results have been emerging from recent studies [7,[20][21][22][23]. The International Symposium on Biomedical Imaging (ISBI) held the Camelyon Grand Challenge [22] in 2016 to evaluate computational systems for the automated detection of metastatic breast cancer in WSI of sentinel lymph node biopsies. The Harvard & MIT team won the grand challenge obtaining an Area Under the receiver operating Curve (AUC) of 0.925 for the task of WSI classification, i.e. positive versus negative for metastasis for each slide. ...
Preprint
Recent studies have shown promising results in using Deep Learning to detect malignancy in whole slide imaging. However, they were limited to just predicting a positive or negative finding for a specific neoplasm. We attempted to use Deep Learning with a convolutional neural network algorithm to build a lymphoma diagnostic model for four diagnostic categories: benign lymph node, diffuse large B cell lymphoma, Burkitt lymphoma, and small lymphocytic lymphoma. Our software was written in the Python language. We obtained digital whole slide images of Hematoxylin and Eosin stained slides of 128 cases, including 32 cases for each diagnostic category. Four sets of 5 representative images, 40x40 pixels in dimension, were taken for each case. A total of 2,560 images were obtained, of which 1,856 were used for training, 464 for validation and 240 for testing. For each test set of 5 images, the predicted diagnosis was combined from the predictions of the 5 images. The test results showed excellent diagnostic accuracy: 95% for image-by-image prediction and 100% for set-by-set prediction. This preliminary study provided a proof of concept for incorporating an automated lymphoma diagnostic screen into future pathology workflows to augment the pathologists' productivity.
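Combining the five per-image predictions of a test set into one set-level diagnosis, as described in the abstract above, can be done with a simple majority vote. The sketch below is an illustrative reconstruction, not the authors' code, and the class abbreviations are hypothetical:

```python
from collections import Counter

def combine_set_prediction(image_predictions):
    """Given per-image class predictions for one test set (e.g., the
    5 representative images of a case), return the majority class.
    Ties resolve to the class encountered first among the most common."""
    counts = Counter(image_predictions)
    return counts.most_common(1)[0][0]

# e.g., 4 of 5 images predicted as diffuse large B cell lymphoma ("DLBCL"):
# combine_set_prediction(["DLBCL", "DLBCL", "Benign", "DLBCL", "DLBCL"]) -> "DLBCL"
```

Such set-level aggregation explains how set-by-set accuracy can exceed image-by-image accuracy: occasional misclassified images are outvoted by the majority within their set.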
... A number of studies have proven the efficiency of utilizing Deep Convolutional Networks in biomedical image analysis tasks (Ravi, D., 2017; Zhou, S., 2017; Litjens, G., 2017). Several studies accomplished by the authors on the use of Convolutional Neural Networks for histology image classification in breast cancer diagnosis (Kovalev, V., 2016b), lung segmentation (Kalinovsky, A., 2016) and lung lesion detection in computed tomography images of tuberculosis patients confirm the applicability and power of Deep Learning methods in the medical imaging domain. The purpose of this study is to examine the abilities of Deep Convolutional Networks to automatically detect different types of tuberculosis lesions and to compare them to conventional methods on a dataset of manually labeled 3D CT scans. ...
Conference Paper
In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, which are based on Deep Convolutional Networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
... A number of studies have proven the efficiency of utilizing Deep Convolutional Networks in biomedical image analysis tasks (Ravi, D., 2017; Zhou, S., 2017; Litjens, G., 2017). Several studies accomplished by the authors on the use of Convolutional Neural Networks for histology image classification in breast cancer diagnosis (Kovalev, V., 2016b), lung segmentation (Kalinovsky, A., 2016) and lung lesion detection in computed tomography images of tuberculosis patients confirm the applicability and power of Deep Learning methods in the medical imaging domain. ...
Article
In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, which are based on Deep Convolutional Networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
... Recent achievements in biomedical image classification using Deep Learning methods and Convolutional Neural Networks (CNN) hold well-grounded promise of becoming an effective tool in biomedical image analysis [1][2][3][4]. Several studies accomplished by the authors on the use of CNNs for histology image classification in breast cancer diagnosis [5], lung segmentation [6] and lung lesion detection in computed tomography images of tuberculosis patients [7] confirm the applicability and power of Deep Learning methods in the medical imaging domain. ...
Conference Paper
This paper presents results obtained in a comparative study of the efficiency of conventional and Deep Learning methods on the problem of predicting subjects' age from their chest radiographs. A large study group consisting of chest radiographs of 10,000 people was created by random sub-sampling of suitable subjects from an input image repository containing 1.8 million items. The age range was chosen to span from 21 to 70 years. The age prediction was performed by the Convolutional Neural Networks AlexNet and GoogLeNet, as well as by conventional methods based on Local Binary Patterns and extended co-occurrence matrices as image features, followed by kNN, Random Forest, Linear Model, SVM, and Decision Tree classifiers. The conclusion was that the convolutional neural networks greatly outperform conventional methods: the lowest RMSE achieved on the age prediction task using convolutional networks is 5.77 years, whereas conventional methods demonstrate a much higher error of 11.73 years on the same data.