Introduction
Capsule endoscopy has become the main instrument for evaluation of patients with suspected small-bowel bleeding. Colonoscopy is routinely performed for the investigation of suspected lower gastrointestinal bleeding; however, it is invasive, potentially painful, frequently requires sedation, and is associated with a risk of perforation [1].
Colon capsule endoscopy (CCE) has been recently introduced as a minimally invasive alternative to conventional colonoscopy when the latter is contraindicated, unfeasible, or unwanted by the patient. The application of CCE has been most extensively studied in the setting of colorectal cancer screening [2]. However, a single CCE examination may produce 50 000 images, the review of which is time-consuming, requiring approximately 50 minutes for completion [3]. Additionally, abnormal findings may be restricted to a small number of frames, thus contributing to the risk of overlooking important lesions.
Convolutional neural networks (CNN) are a type of deep learning algorithm tailored for image analysis. This artificial intelligence (AI) architecture has demonstrated high performance levels in diverse medical fields [4, 5]. Recent studies have reported a high diagnostic yield of CNN-based tools for the detection of luminal blood in small-bowel capsule endoscopy [6].
Artificial intelligence and colon capsule endoscopy:
automatic detection of blood in colon capsule endoscopy
using a convolutional neural network
Authors
Miguel Mascarenhas Saraiva1,3, João P. S. Ferreira4,5, Hélder Cardoso1,3, João Afonso1,2, Tiago Ribeiro1,2, Patrícia Andrade1,3, Marco P. L. Parente4,5, Renato N. Jorge4,5, Guilherme Macedo1,3
Institutions
1 Department of Gastroenterology, São João University
Hospital, Alameda Professor Hernâni Monteiro, Porto,
Portugal
2 WGO Gastroenterology and Hepatology Training Center,
Porto, Portugal
3 Faculty of Medicine of the University of Porto, Alameda
Professor Hernâni Monteiro, Porto, Portugal
4 Department of Mechanical Engineering, Faculty of
Engineering of the University of Porto, Porto, Portugal
5 INEGI Institute of Science and Innovation in
Mechanical and Industrial Engineering, Porto, Portugal
submitted 12.1.2021
accepted after revision 12.3.2021
Bibliography
Endosc Int Open 2021; 09: E1264–E1268
DOI 10.1055/a-1490-8960
ISSN 2364-3722
© 2021. The Author(s).
This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Georg Thieme Verlag KG, Rüdigerstraße 14,
70469 Stuttgart, Germany
Corresponding author
Miguel José da Quinta e Costa de Mascarenhas Saraiva, MD,
Department of Gastroenterology, São João University
Hospital, Alameda Professor Hernâni Monteiro, Rua Oliveira
Martins 104, Porto, 4200-427, Portugal
Fax: +351-22-5509479
miguelmascarenhassaraiva@gmail.com
ABSTRACT
Colon capsule endoscopy (CCE) is a minimally invasive alternative to conventional colonoscopy. Most studies on CCE focus on colorectal neoplasia detection. The development of automated tools may address some of the limitations of this diagnostic tool and widen its indications for different clinical settings. We developed an artificial intelligence model based on a convolutional neural network (CNN) for the automatic detection of blood content in CCE images. Training and validation datasets were constructed for the development and testing of the CNN. The CNN detected blood with a sensitivity, specificity, and positive and negative predictive values of 99.8 %, 93.2 %, 93.8 %, and 99.8 %, respectively. The area under the receiver operating characteristic curve for blood detection was 1.00. We developed a deep learning algorithm capable of accurately detecting blood or hematic residues within the lumen of the colon based on CCE images.
Innovation forum
E1264 Mascarenhas Saraiva Miguel et al. Artificial intelligence and Endosc Int Open 2021; 09: E1264–E1268 | © 2021. The Author(s).
Article published online: 2021-07-16
The use of CCE for investigation of conditions other than colorectal neoplasia has not been evaluated. Detection of blood content is important when reviewing CCE images and, to date, no AI algorithm has been developed for the detection of colonic luminal blood or hematic residues in CCE images. The aim of this pilot study was to develop and validate a CNN-based algorithm for the automatic detection of colonic luminal blood or hematic vestiges in CCE examinations.
Methods
Study design
We retrospectively reviewed CCE images obtained between 2010 and 2020 at São João University Hospital, Porto, Portugal. The full-length video of all participants (n = 24) was reviewed (total number of frames 3 387 259). A total of 5825 images of the colonic mucosa were ultimately extracted. Inclusion and labeling of frames were performed by two experienced gastroenterologists who had each read more than 1000 capsule endoscopies (M.M.S. and H.C.) prior to the study. Significant frames were included regardless of image quality and bowel cleansing quality. A final decision on the frame labeling required undisputed consensus between the two gastroenterologists. The study was approved by the ethics committee of São João University Hospital/Faculty of Medicine of the University of Porto (No. CE 407/2020). A team with Data Protection Officer certification (Maastricht University) confirmed the nontraceability of data and conformity with general data protection regulations.
CCE procedure
In all patients, the procedures were conducted using the PillCam Colon 2 system (Medtronic, Minneapolis, Minnesota, USA). This system was launched in 2009 and no hardware modifications were introduced during the study period. Therefore, image quality remained unaltered between 2010 and 2020, with no difference in quality between the image frames used to train the CNN and those used to test the model. The images were reviewed using PillCam software v9 (Medtronic). Bowel preparation was performed according to previously published recommendations [7]. In brief, 4 L of polyethylene glycol solution was used in a split-dosage regimen (2 L in the evening before and 2 L on the morning of capsule ingestion). Two boosters consisting of a sodium phosphate solution were applied after the capsule had entered the small bowel, with a 3-hour interval between them.
Development of the CNN
A CNN was developed for automatic identification of blood or hematic residues within the lumen of the colon. From the collected pool of images (n = 5825), 2975 had evidence of luminal blood or hematic residues and 2850 showed normal mucosa or mucosal lesions. This pool of images was split to form training and validation image datasets. The training dataset comprised 80 % of the consecutively extracted images (n = 4660); the remaining 20 % were used as the validation dataset (n = 1165). The validation dataset was used for assessing the performance of the CNN (Fig. 1).
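The 80/20 split described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the filenames and labels are placeholders, and the text does not specify exactly how the consecutively extracted frames were assigned, so a plain ordered slice is shown.

```python
# Minimal sketch of an 80/20 dataset split (illustrative, not the study code).
def split_dataset(frames, train_fraction=0.8):
    """Split an ordered list of labeled frames into training/validation sets."""
    cut = int(len(frames) * train_fraction)
    return frames[:cut], frames[cut:]

# 5825 labeled frames: 2975 blood/hematic residues ("B"),
# 2850 normal mucosa/other findings ("N/O"); filenames are hypothetical.
frames = [(f"frame_{i:05d}.png", "B") for i in range(2975)] + \
         [(f"frame_{i:05d}.png", "N/O") for i in range(2975, 5825)]

train, val = split_dataset(frames)
print(len(train), len(val))  # 4660 1165
```

With 5825 frames, this reproduces the dataset sizes reported in the text (4660 training, 1165 validation).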
To create the CNN, we used the Xception model with its weights trained on ImageNet. To transfer this learning to our data, we kept the convolutional layers of the model. We used the TensorFlow 2.3 and Keras libraries to prepare the data and run the model. Each inputted frame had a resolution of 512 × 512 pixels. For each image, the CNN calculated the probability for each category (normal colonic mucosa/other findings vs. blood/hematic residues). The category with the highest probability score was outputted as the CNN's predicted classification (Fig. 2).
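A transfer-learning setup of this kind can be sketched in Keras roughly as follows. The frozen base, global-average-pooling head, optimizer, and loss are assumptions for illustration; the paper does not detail the classification head or training hyperparameters.

```python
# Sketch of an Xception transfer-learning classifier (assumed head/settings).
from tensorflow import keras

def build_blood_detector(weights="imagenet"):
    """Xception convolutional base + a two-class softmax head."""
    base = keras.applications.Xception(
        weights=weights,          # transfer ImageNet-trained weights
        include_top=False,        # keep only the convolutional layers
        input_shape=(512, 512, 3),
    )
    base.trainable = False        # assumption: freeze the pretrained base
    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        # two categories: normal mucosa/other findings vs. blood/hematic residues
        keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_blood_detector()
```

For each 512 × 512 frame, `model.predict` yields one probability per category, and the argmax is taken as the predicted classification, matching the behavior described in the text.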
Model performance and statistical analysis
The primary outcome measures included sensitivity, specificity, positive and negative predictive values, and accuracy. Moreover, we used receiver operating characteristic (ROC) curve analysis and area under the ROC curve (AUROC) to measure the performance of our model for distinction between the categories. The network's classification was compared with that recorded by the gastroenterologists (gold standard). Sensitivities, specificities, and precisions are presented as means and standard deviations (SDs). ROC curves are graphically represented and AUROC was calculated as mean and 95 % confidence interval (CI), assuming normal distribution of these variables. The computational performance of the network was also determined by calculating the time required for the CNN to provide output for all images in the validation dataset. Statistical analysis was performed using scikit-learn v0.22.2.
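These outcome measures can be computed with scikit-learn (the toolkit named above) roughly as follows; the labels and probabilities below are illustrative stand-ins, not study data.

```python
# Sketch of the outcome measures using scikit-learn (toy inputs, not study data).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0])               # 1 = blood, 0 = normal/other
p_blood = np.array([0.9, 0.8, 0.7, 0.4, 0.2, 0.1])  # CNN probability of blood
y_pred = (p_blood >= 0.5).astype(int)               # highest-probability category

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
accuracy = (tp + tn) / (tp + tn + fp + fn)
auroc = roc_auc_score(y_true, p_blood)
print(sensitivity, specificity, ppv, npv, accuracy, auroc)
```

On these toy inputs every prediction is correct, so all six measures equal 1.0; on real data the same calls yield the values reported in the Results.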
Results
Construction of the CNN
A total of 24 patients who underwent CCE were enrolled in the study. A total of 5825 frames were extracted, 2975 containing blood and 2850 showing normal mucosa/other findings. The training dataset comprised 80 % of the total image pool; the remaining 20 % (n = 1165) were used for testing the model. The latter subset of images comprised 595 images (51.1 %) with evidence of blood or hematic residues and 570 images (48.9 %) with normal colonic mucosa/other findings. The CNN evaluated each image and predicted a classification (normal mucosa/other findings or blood/hematic residues), which was compared with the classification provided by the gastroenterologists. The network demonstrated its learning ability, with accuracy increasing as data were repeatedly inputted to the multi-layer CNN (Fig. 3).
Performance of the CNN
The performance of the CNN is shown in Table 1. Overall, the mean (SD) sensitivity and specificity were 99.8 % (4.7 %) and 93.2 % (4.7 %), respectively. The positive predictive value and negative predictive value were 93.8 % (4.2 %) and 99.8 % (4.2 %), respectively (Table 1). The overall accuracy of the CNN was 96.6 %. The AUROC for detection of blood was 1.00 (95 %CI 0.99–1.00) (Fig. 4).
Computational performance of the CNN
The CNN completed the reading of the dataset in 9 seconds. This translates into an approximate reading rate of 129 frames per second. At this rate, review of a full-length CCE video containing an estimated 50 000 frames would require approximately 6 minutes.
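The arithmetic behind this estimate, spelled out with the figures reported above:

```python
# Reading-rate estimate from the reported figures (1165 frames in 9 seconds).
frames_in_validation_set = 1165
seconds_to_read = 9
frames_per_second = frames_in_validation_set / seconds_to_read  # ~129 fps

estimated_full_video_frames = 50_000
minutes_for_full_video = estimated_full_video_frames / frames_per_second / 60
print(round(frames_per_second), round(minutes_for_full_video, 1))  # 129 6.4
```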
Discussion
We developed a CNN for automatic detection of blood in the lumen of the colon in CCE images. Our AI model was highly sensitive, specific, and accurate for the detection of blood in CCE images.
The application of AI tools in the field of capsule endoscopy has been generating increasing interest. The development of AI tools for automatic detection of a wide array of lesions has provided promising results [8]. Recently, Aoki et al. reported high performance of a CNN for the detection of blood content in images of small-bowel capsule endoscopy, outperforming currently existing software tools for screening the presence of blood in capsule endoscopy images [6]. The development of AI tools for automatic detection of lesions in CCE images is in its early stages, and mainly focuses on automatic detection of colorectal neoplasia. To date, two studies have reported the development of CNN-based models for detection of colorectal neoplasia, with high performance levels [9, 10].
[Fig. 1 flow chart: 24 CCE exams (PillCam Colon 2); 3 387 259 images; extraction and double-validation of 7640 frames; 2850 frames of normal mucosa/other findings and 2975 frames of blood/hematic residues; training dataset (80 % of the extracted images) for the training phase and validation dataset (20 %) for the validation phase; performance assessment by sensitivity, specificity, AUROC, accuracy, precision, and image processing time.]
Fig. 1 Study flow chart for the training and validation phases. N/O, normal mucosa/other findings; B, blood or hematic residues; AUROC, area under the receiver operating characteristic curve. PillCam Colon 2 (Medtronic, Minneapolis, Minnesota, USA).
The role of CCE in everyday clinical practice has not been fully established. Most studies focus on detection of polyps for colorectal cancer screening. Besides its application in the screening setting, CCE may be a noninvasive alternative to conventional colonoscopy for other common indications, including the investigation of lower gastrointestinal bleeding and colonic lesions other than polyps. To the best of our knowledge, this is the first study to develop a CNN for automatic detection of blood content in CCE images. Our network detected the presence of blood in CCE images with high sensitivity, specificity, and accuracy (99.8 %, 93.2 %, and 96.6 %, respectively). The development of automated AI tools for CCE has the potential to improve its diagnostic yield and time efficiency, thus contributing to CCE acceptance. Moreover, increased diagnostic capacity may widen the indications for CCE. These results suggest that AI-enhanced CCE may be a useful examination for evaluation of patients with lower gastrointestinal bleeding, particularly when conventional colonoscopy is contraindicated or unwanted by the patient.
This study has several limitations. First, it was a retrospective proof-of-concept study involving images collected at a single center. Second, the tool was only tested on still frames; assessment of performance using full-length videos is required before clinical application of these tools. Third, although a large pool of images was reviewed, the number of patients included in the study was small. Thus, subsequent prospective multicenter studies with larger numbers of CCE examinations are desirable before this model can be applied to clinical practice. Furthermore, these tools should be regarded as supportive rather than substitutive in a real-life clinical setting.
[Fig. 2 shows six example frames with the network's probability bars for the N/O and B categories.]
Fig. 2 Output obtained from the application of the convolutional neural network. The bars represent the probability estimated by the network. The finding with the highest probability was outputted as the predicted classification. Blue bars represent a correct prediction; gray bars represent an incorrect prediction. N/O, normal mucosa/other findings; B, blood or hematic residues.
[Fig. 3 plots training accuracy and validation accuracy (y axis 0.5 to 1.0) against training progress (x axis 0 to 17.5).]
Fig. 3 Evolution of the accuracy of the convolutional neural network during training and validation phases as the training and validation datasets were repeatedly inputted into the neural network.
In conclusion, we developed a CNN-based model capable of detecting blood content in CCE images with high sensitivity and specificity. We believe that the implementation of AI tools in clinical practice will address some of the limitations of CCE, mainly the time required for reading, thus lessening the burden on gastroenterologists and boosting the acceptance of CCE in routine clinical practice.
Competing interests
The authors declare that they have no conflict of interest.
References
[1] Niikura R, Yasunaga H, Yamada A et al. Factors predicting adverse events associated with therapeutic colonoscopy for colorectal neoplasia: a retrospective nationwide study in Japan. Gastrointest Endosc 2016; 84: 971–982
[2] Spada C, Pasha SF, Gross SA et al. Accuracy of first- and second-generation colon capsules in endoscopic detection of colorectal polyps: a systematic review and meta-analysis. Clin Gastroenterol Hepatol 2016; 14: 1533–1543
[3] Eliakim R, Yassin K, Niv Y et al. Prospective multicenter performance evaluation of the second-generation colon capsule compared with colonoscopy. Endoscopy 2009; 41: 1026–1031
[4] Esteva A, Kuprel B, Novoa RA et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115–118
[5] Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017; 124: 962–969
[6] Aoki T, Yamada A, Kato Y et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J Gastroenterol Hepatol 2020; 35: 1196–1200
[7] Spada C, Hassan C, Galmiche JP et al. Colon capsule endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Guideline. Endoscopy 2012; 44: 527–536
[8] Iakovidis DK, Koulaouzidis A. Software for enhanced video capsule endoscopy: challenges for essential progress. Nat Rev Gastroenterol Hepatol 2015; 12: 172–186
[9] Blanes-Vidal V, Baatrup G, Nadimi ES. Addressing priority challenges in the detection and assessment of colorectal polyps from capsule endoscopy and colonoscopy in colorectal cancer screening using machine learning. Acta Oncol 2019; 58: S29–S36
[10] Yamada A, Niikura R, Otani K et al. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy 2020: doi:10.1055/a-1266-1066
[Fig. 4 plots the ROC curve for blood detection, true positive rate (sensitivity) vs. false positive rate (1-specificity), with the curve for B (AUROC 1.00) shown against the random-guessing diagonal.]
Fig. 4 Receiver operating characteristic analyses of the network's performance in the detection of blood vs. normal colonic mucosa/other findings. B, blood or hematic residues; ROC, receiver operating characteristic.
Table 1 Confusion matrix of the automatic detection vs. expert classification.

                                      Expert
                                      Blood/hematic residues   Normal/other findings
CNN   Blood/hematic residues          594                       39
      Normal/other findings             1                      531

Sensitivity1: 99.8 % (4.7 %)
Specificity1: 93.2 % (4.7 %)
PPV1: 93.8 % (4.2 %)
NPV1: 99.8 % (4.2 %)
CNN, convolutional neural network; PPV, positive predictive value; NPV, negative predictive value.
1 Expressed as mean (standard deviation).
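As a check, the headline point estimates can be recomputed directly from the counts in Table 1 (expert classification as gold standard):

```python
# Recompute the headline metrics from the Table 1 confusion matrix.
tp, fp = 594, 39   # CNN said blood: 594 truly blood, 39 truly normal/other
fn, tn = 1, 531    # CNN said normal/other: 1 truly blood, 531 truly normal/other

sensitivity = tp / (tp + fn)          # 594/595
specificity = tn / (tn + fp)          # 531/570
ppv = tp / (tp + fp)                  # 594/633
npv = tn / (tn + fn)                  # 531/532
accuracy = (tp + tn) / (tp + tn + fp + fn)  # 1125/1165

print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%} {accuracy:.1%}")
# → 99.8% 93.2% 93.8% 99.8% 96.6%
```

The results match the values reported in Table 1 and in the Results section.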
E1268 Mascarenhas Saraiva Miguel et al. Artificial intelligence and Endosc Int Open 2021; 09: E1264E1268 | © 2021. The Author(s).
Innovation forum
... After that, we added a dense layer the size of which was defined the number of classification groups (two: normal or vascular lesions). Trial and error were used to determine the learning rate (ranging between 0.0000625 and 0.0005), batch size (128) and the num-ber of epochs (20). PyTorch and scikit libraries were used to prepare the model. ...
... In the small bowel, there are very accurate deep learning models capable of detecting different types of vascular lesions, as well as predicting their bleeding risk accuracy [14]. In the colon, although the vast majority of retrospective studies focus on the detection of protruding lesions, there are already published AI algorithms not only for automatic detection of blood or hematic residues [20]. In addition, there is also a published trinary network aiming to detect and differentiate blood from normal colonic mucosa and from mucosa lesions (including ulcers and erosions, vascular lesions and protruding lesions) with high sensitivity, specificity, and accuracy [21]. ...
Article
Full-text available
Background and study aims Capsule endoscopy (CE) is commonly used as the initial exam for suspected mid-gastrointestinal bleeding after normal upper and lower endoscopy. Although the assessment of the small bowel is the primary focus of CE, detecting upstream or downstream vascular lesions may also be clinically significant. This study aimed to develop and test a convolutional neural network (CNN)-based model for panendoscopic automatic detection of vascular lesions during CE. Patients and methods A multicentric AI model development study was based on 1022 CE exams. Our group used 34655 frames from seven types of CE devices, of which 11091 were considered to have vascular lesions (angiectasia or varices) after triple validation. We divided data into a training and a validation set, and the latter was used to evaluate the model’s performance. At the time of division, all frames from a given patient were assigned to the same dataset. Our primary outcome measures were sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and an area under the precision-recall curve (AUC-PR). Results Sensitivity and specificity were 86.4% and 98.3%, respectively. PPV was 95.2%, while the NPV was 95.0%. Overall accuracy was 95.0%. The AUC-PR value was 0.96. The CNN processed 115 frames per second. Conclusions This is the first proof-of-concept artificial intelligence deep learning model developed for pan-endoscopic automatic detection of vascular lesions during CE. The diagnostic performance of this CNN in multi-brand devices addresses an essential issue of technological interoperability, allowing it to be replicated in multiple technological settings.
... Also, Saraiva et al. (Mascarenhas Saraiva et al., 2021) created a convolutional neural network (CNN)-based artificial intelligence algorithm for the automated identification of blood contents in colon capsule endoscopy images in 2021. They based their findings on a sample of 24 CCE examinations (PillCam Colon 2) and 3387259 images. ...
Article
Wireless capsule endoscopy (WCE) is the gold standard for diagnosing small bowel disorders and is considered the future of effective diagnostic gastrointestinal (GI) endoscopy. Patients find it comfortable and more likely to adopt it than traditional colonoscopy and gastroscopy, making it a viable option for detecting cancer or ulcerations. WCE can obtain images of the GI tract from the inside, but pinpointing the disease's location remains a challenge. This paper reviews studies on endoscopy capsule development and discusses techniques and solutions for higher efficiency. Research has demonstrated that artificial intelligence (AI) enhances the accuracy of disease detection and minimizes errors resulting from physicians' fatigue or lack of attention. When it comes to WCE, deep learning has shown remarkable success in detecting a wide variety of disorders.
... As summarized in Table 2 some authors employed Transfer learning algorithms and functions to tackle dataset scarcity. 10,20,21,29 The largest datasets were 400,000 with Inception-Resnet-V2 architecture 22 and performed 98.1 accuracy, while the smallest datasets were 201 with VGG 16 network architecture 25 and performed 93.4 accuracy in this systematic investigation. ...
Article
Wireless capsule endoscopy is a non-invasive medical imaging modality used for diagnosing and monitoring digestive tract diseases. However, the analysis of images obtained from wireless capsule endoscopy is a challenging task, as the images are of low resolution and often contain a large number of artifacts. In recent years, deep learning has shown great promise in the analysis of medical images, including wireless capsule endoscopy images. This paper provides a review of the current trends and future directions in deep learning for wireless capsule endoscopy. We focus on the recent advances in transfer learning, attention mechanisms, multi-modal learning, automated lesion detection, interpretability and explainability, data augmentation, and edge computing. We also highlight the challenges and limitations of current deep learning methods and discuss the potential future directions for the field. Our review provides insights into the ongoing research and development efforts in the field of deep learning for wireless capsule endoscopy, and can serve as a reference for researchers, clinicians, and engineers working in this area inspection process.
... Convolutional neural networks were initially developed using frames from a singlecamera capsules, later expanding to dual-camera capsules. Specific CNNs were applied first in the small bowel, followed by the colon and rectum, and then, in both anatomical regions, excelling at detecting a particular lesion [37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54]. Nevertheless, adopting a sequential approach where each specific CNN is applied one at a time for an AI-assisted review of a CE video, while logical, might not be the most efficient strategy. ...
Article
Full-text available
Simple Summary The exponential growth in artificial intelligence development, particularly its application in capsule endoscopy, serves as a compelling model for gastroenterologists. This review focusses on the latest advancements in capsule endoscopy, analyzing the possible benefits and ethical challenges that artificial intelligence may bring to the field of minimally invasive capsule panendoscopy, while also offering insights into future directions. Specifically in the context of oncological gastrointestinal screening, there is still a need to explore alternative strategies for enhancing this process. Artificial intelligence-enhanced capsule panendoscopy has the potential to positively impact the future by addressing time constraints and improve accessibility through the use of highly efficient diagnostic models. Abstract In the early 2000s, the introduction of single-camera wireless capsule endoscopy (CE) redefined small bowel study. Progress continued with the development of double-camera devices, first for the colon and rectum, and then, for panenteric assessment. Advancements continued with magnetic capsule endoscopy (MCE), particularly when assisted by a robotic arm, designed to enhance gastric evaluation. Indeed, as CE provides full visualization of the entire gastrointestinal (GI) tract, a minimally invasive capsule panendoscopy (CPE) could be a feasible alternative, despite its time-consuming nature and learning curve, assuming appropriate bowel cleansing has been carried out. Recent progress in artificial intelligence (AI), particularly in the development of convolutional neural networks (CNN) for CE auxiliary reading (detecting and diagnosing), may provide the missing link in fulfilling the goal of establishing the use of panendoscopy, although prospective studies are still needed to validate these models in actual clinical scenarios. 
Recent CE advancements will be discussed, focusing on the current evidence on CNN developments, and their real-life implementation potential and associated ethical challenges.
... Moreover, these tools must also display an adequate negative predictive value while retaining their specificity during the diagnostic work-up. We have since designed AI applications to automate the detection of protruding lesions in the colon through CE images [64], to accurately identify and diagnose lesions in the colon mucosa [20] and for the automated detection of blood/hematic residues in the colon lumen [65]. In each of these cases, the tools have achieved the specificity and sensitivity, as well as the predictive value, required to merit further validation towards their clinical implementation. ...
Article
Full-text available
The surge in the implementation of artificial intelligence (AI) in recent years has permeated many aspects of our life, and health care is no exception. Whereas this technology can offer clear benefits, some of the problems associated with its use have also been recognised and brought into question, for example, its environmental impact. In a similar fashion, health care also has a significant environmental impact, and it requires a considerable source of greenhouse gases. Whereas efforts are being made to reduce the footprint of AI tools, here, we were specifically interested in how employing AI tools in gastroenterology departments, and in particular in conjunction with capsule endoscopy, can reduce the carbon footprint associated with digestive health care while offering improvements, particularly in terms of diagnostic accuracy. We address the different ways that leveraging AI applications can reduce the carbon footprint associated with all types of capsule endoscopy examinations. Moreover, we contemplate how the incorporation of other technologies, such as blockchain technology, into digestive health care can help ensure the sustainability of this clinical speciality and by extension, health care in general.
... A recent systematic review of AI use in colon capsules recently included 9 studies (97). Though few, these studies show promising results for future integration of AI enhanced colon capsules into routine clinical practice (97)(98)(99). ...
Article
Full-text available
Colon capsule endoscopy (CCE) has been demonstrated to be comparable to traditional colonoscopy and better than CT colonography (CTC) for the detection of colonic pathology. It has been shown to have a high incremental yield after incomplete colonoscopy. It is a safe test with good patient acceptability. Challenges currently include great variability in completion rates and high rates of re-investigation. In this review, we will discuss the evidence to date regarding CCE in symptomatic and surveillance populations, and in those post incomplete colonoscopy. We will discuss current challenges faced by CCE and areas for further research.
... Deep Learning (DL): The most widely used deep learning approach for image classification and segmentation is the Convolutional Neural Network (CNN). Several CNN models, such as AlexNet [95], LeNet [127], Fully Convolutional Neural Network (FCN) [27,128], Visual Geometry Group Network (VGGNet) [129], Residual Network (Resnet-50) [130][131][132], Res2Net101 [133], Inception-Resnet-V2 [134,135], AttResU-Net [136], MobileNet [42], DenseNet [24], Region-based Convolutional Neural Networks (R-CNN) [137], Convolutional Recurrent Neural Network (CRNN) [34], U-Net [44,138], SegNet [46], and custom CNNs [41,[139][140][141][142][143][144][145][146][147][148][149][150][151][152][153], have been utilized in a number of studies for the classification or segmentation or combined classification and segmentation of bleeding in CE images. The study in [27] presented an FCN model for an automatic blood region segmentation system. ...
Article
Full-text available
Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities like bleeding. However, CE captures a huge number of image frames, constituting a time-consuming and tedious task for medical experts to manually inspect. To address this issue, researchers have focused on computer-aided bleeding detection systems to automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five different repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art computer-aided bleeding detection algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper is concluded by providing future direction for computer-aided bleeding detection research.
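Many of the classical (pre-deep-learning) detectors surveyed in such reviews work in the HSV color space, flagging frames whose fraction of "blood-red" pixels exceeds a threshold. A minimal sketch of that idea using only Python's standard library; the hue/saturation bounds and the 0.3 frame threshold are illustrative assumptions, not values from any cited study:

```python
import colorsys

def red_pixel_fraction(pixels):
    """Fraction of RGB pixels (components in 0-1) that look red in HSV.

    A pixel counts as red when its hue is near 0 (or wraps around near 1)
    and it is sufficiently saturated and bright.
    """
    red = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.2:
            red += 1
    return red / len(pixels)

def flag_bleeding(pixels, threshold=0.3):
    """Classify a frame as suspected bleeding if the red fraction is high."""
    return red_pixel_fraction(pixels) > threshold

# A frame dominated by dark red pixels is flagged; a pale one is not.
bloody = [(0.6, 0.05, 0.05)] * 8 + [(0.9, 0.8, 0.7)] * 2
normal = [(0.9, 0.75, 0.7)] * 10
```

Hand-crafted rules like this are exactly what the CNN-based systems in the review replace with learned features, trading interpretability for accuracy.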
Chapter
Full-text available
Gastroenterology (GI) and hepatology are in the early stages of incorporating artificial intelligence (AI) into clinical practice. The two major areas of AI deep learning technology applicable to GI and hepatology are image recognition analysis and clinical data analysis. Additional areas of AI, such as generative AI, may also have roles in clinical practice. Continued development, validation, and real-world modeling of AI systems will be needed prior to wider integration. Given the trajectory and rapid developments within AI, it is likely that in the coming years new AI applications in GI and hepatology will be proposed and current applications will be enhanced and become standard of care.
Article
Full-text available
While deep learning has displayed excellent performance in a broad spectrum of application areas, neural networks still struggle to recognize what they have not seen, i.e., out-of-distribution (OOD) inputs. In the medical field, building robust models that are able to detect OOD images is highly critical, as these rare images could show diseases or anomalies that should be detected. In this study, we use wireless capsule endoscopy (WCE) images to present a novel patch-based self-supervised approach comprising three stages. First, we train a triplet network to learn vector representations of WCE image patches. Second, we cluster the patch embeddings to group patches in terms of visual similarity. Third, we use the cluster assignments as pseudolabels to train a patch classifier and use the Out-of-Distribution Detector for Neural Networks (ODIN) for OOD detection. The system has been tested on the Kvasir-capsule, a publicly released WCE dataset. Empirical results show an OOD detection improvement compared to baseline methods. Our method can detect unseen pathologies and anomalies such as lymphangiectasia, foreign bodies and blood with AUROC>0.6. This work presents an effective solution for OOD detection models without needing labeled images.
Article
Full-text available
Background & aims: Colon capsule endoscopy (CCE) is a noninvasive technique used to explore the colon without sedation or air insufflation. A second-generation capsule was recently developed to improve detection accuracy, and clinical use has expanded globally. We performed a systematic review and meta-analysis to assess the accuracy of CCE in detecting colorectal polyps. Methods: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, and other databases from 1966 through 2015 for studies that compared the accuracy of CCE against colonoscopy with histologic evaluation. The risk of bias within each study was ascertained according to Quality Assessment of Diagnostic Accuracy in Systematic Reviews recommendations. Per-patient accuracy values were calculated for polyps, overall and for first-generation (CCE-1) and second-generation (CCE-2) capsules. We analyzed data by using forest plots, the I² statistic to calculate heterogeneity, and meta-regression analyses. Results: Fourteen studies provided data from 2420 patients (1128 for CCE-1 and 1292 for CCE-2). CCE-2 and CCE-1 detected polyps >6 mm with 86% sensitivity (95% confidence interval [CI], 82%-89%) and 58% sensitivity (95% CI, 44%-70%), respectively, and 88.1% specificity (95% CI, 74.2%-95.0%) and 85.7% specificity (95% CI, 80.2%-90.0%), respectively. CCE-2 and CCE-1 detected polyps >10 mm with 87% sensitivity (95% CI, 81%-91%) and 54% sensitivity (95% CI, 29%-77%), respectively, and 95.3% specificity (95% CI, 91.5%-97.5%) and 97.4% specificity (95% CI, 96.0%-98.3%), respectively. CCE-2 identified all 11 invasive cancers detected by colonoscopy. Conclusions: The sensitivity in detection of polyps >6 mm and >10 mm increased substantially between development of first-generation and second-generation colon capsules. High specificity values for detection of polyps by CCE-2 seem to be achievable with a 10-mm cutoff and in a screening setting.
Article
Full-text available
PillCam colon capsule endoscopy (CCE) is an innovative, noninvasive, and painless ingestible capsule technique that allows exploration of the colon without the need for sedation and gas insufflation. Although it is already available in European and other countries, the clinical indications for CCE as well as the reporting and work-up of detected findings have not yet been standardized. The aim of this evidence-based and consensus-based guideline, commissioned by the European Society of Gastrointestinal Endoscopy (ESGE), is to furnish healthcare providers with a comprehensive framework for potential implementation of this technique in a clinical setting.
Article
Background and aims: Although colorectal neoplasias are the most common abnormalities found in colon capsule endoscopy (CCE), no computer-aided detection method is yet available. We developed an artificial intelligence (AI) system that uses deep learning to automatically detect such lesions in CCE images. Methods: We trained a deep convolutional neural network system based on a Single Shot Multibox Detector using 15,933 CCE images of colorectal neoplasias such as polyps and cancers. We assessed performance by calculating areas under the receiver operating characteristic curves and sensitivities, specificities, and accuracies using an independent test set of 4,784 images including 1,850 images of colorectal neoplasias and 2,934 normal colon images. Results: The area under the curve for detection by the AI model of colorectal neoplasias was 0.902. The sensitivity, specificity, and accuracy were 79.0%, 87.0%, and 83.9%, respectively, at a probability cutoff of 0.348. Conclusions: We developed and validated a new AI-based system that automatically detects colorectal neoplasias in CCE images.
Article
Background: Detecting blood content in the gastrointestinal tract is one of the crucial applications of capsule endoscopy (CE). The suspected blood indicator (SBI) is a conventional tool used to automatically tag images depicting possible bleeding in the reading system. We aimed to develop a deep learning-based system to detect blood content in images and compare its performance with that of the SBI. Methods: We trained a deep convolutional neural network (CNN) system using 27,847 CE images (6,503 images depicting blood content from 29 patients and 21,344 images of normal mucosa from 12 patients). We assessed its performance by calculating the area under the receiver operating characteristic curve (ROC-AUC) and its sensitivity, specificity, and accuracy, using an independent test set of 10,208 small-bowel images (208 images depicting blood content and 10,000 images of normal mucosa). The performance of the CNN was compared with that of the SBI, in individual image analysis, using the same test set. Results: The AUC for the detection of blood content was 0.9998. The sensitivity, specificity, and accuracy of the CNN were 96.63%, 99.96%, and 99.89%, respectively, at a cut-off value of 0.5 for the probability score, which were significantly higher than those of the SBI (76.92%, 99.82%, and 99.35%, respectively). The trained CNN required 250 seconds to evaluate 10,208 test images. Conclusions: We developed and tested a CNN-based detection system for blood content in capsule endoscopy images. This system has the potential to outperform the SBI system; patient-level analyses in larger studies are required.
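Per-image metrics like those above follow directly from the confusion matrix obtained after thresholding the CNN's probability score at the chosen cutoff. A small sketch of that computation; the scores and labels below are made up for illustration, not data from the study:

```python
def confusion_metrics(scores, labels, cutoff=0.5):
    """Sensitivity, specificity, and accuracy at a probability cutoff.

    `scores` are model probabilities for the positive class (e.g. blood);
    `labels` are the ground truth (1 = blood, 0 = normal mucosa).
    """
    tp = fp = tn = fn = 0
    for s, y in zip(scores, labels):
        pred = 1 if s >= cutoff else 0
        if pred and y:
            tp += 1
        elif pred and not y:
            fp += 1
        elif not pred and not y:
            tn += 1
        else:
            fn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Toy example: 4 positive and 4 negative images.
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.1, 0.7]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sens, spec, acc = confusion_metrics(scores, labels)
```

Moving the cutoff trades sensitivity against specificity, which is why studies report the cutoff alongside the operating-point metrics.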
Article
Background: Colorectal capsule endoscopy (CCE) is a potentially valuable patient-friendly technique for colorectal cancer screening in large populations. Before it can be widely applied, significant research priorities need to be addressed. We present two innovative data science algorithms which can considerably improve acquisition and analysis of relevant data on colorectal polyps obtained from capsule endoscopy. Material and methods: A fully paired study was performed (2015–2016), where 255 participants from the Danish national screening program had CCE, colonoscopy, and histopathology of all detected polyps. We developed: (1) a new algorithm to match CCE and colonoscopy polyps, based on objective measures of similarity between polyps, and (2) a deep convolutional neural network (CNN) for autonomous detection and localization of colorectal polyps in colon capsule endoscopy. Results and conclusion: Unlike previous matching methods, our matching algorithm is able to objectively quantify the similarity between CCE and colonoscopy polyps based on their size, morphology and location, and provides a one-to-one unequivocal match between CCE and colonoscopy polyps. Compared to previous methods, the autonomous detection algorithm showed unprecedented high accuracy (96.4%), sensitivity (97.1%) and specificity (93.3%), calculated with respect to the number of polyps detected by trained nurses and gastroenterologists after visualizing frame-by-frame the CCE videos.
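The matching idea described above, scoring candidate CCE-colonoscopy polyp pairs by similarity and enforcing a one-to-one assignment, can be sketched as follows. The feature weights and the greedy assignment are illustrative assumptions, not the authors' published algorithm:

```python
def similarity(p, q, w_size=0.5, w_loc=0.5):
    """Similarity in [0, 1] between two polyps given as
    (size_mm, location_cm) tuples; higher means more alike.
    Weights are illustrative, not from the published method."""
    size_term = 1.0 - abs(p[0] - q[0]) / max(p[0], q[0])
    loc_term = 1.0 / (1.0 + abs(p[1] - q[1]))
    return w_size * size_term + w_loc * loc_term

def match_polyps(cce, colo, min_sim=0.3):
    """Greedy one-to-one matching: repeatedly pair the most similar
    remaining (CCE, colonoscopy) polyps above a similarity floor."""
    pairs = sorted(
        ((similarity(p, q), i, j)
         for i, p in enumerate(cce)
         for j, q in enumerate(colo)),
        reverse=True,
    )
    used_i, used_j, matches = set(), set(), []
    for sim, i, j in pairs:
        if sim >= min_sim and i not in used_i and j not in used_j:
            used_i.add(i)
            used_j.add(j)
            matches.append((i, j))
    return sorted(matches)

# Two CCE polyps and two colonoscopy polyps, listed in opposite order.
cce_polyps = [(6.0, 20.0), (12.0, 55.0)]
colo_polyps = [(11.0, 54.0), (7.0, 22.0)]
```

An optimal assignment (e.g. the Hungarian algorithm) would maximize total similarity rather than matching greedily, but the greedy version already yields the unequivocal one-to-one pairing the abstract describes for well-separated polyps.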
Article
Purpose: Diabetic retinopathy (DR) is one of the leading causes of preventable blindness globally. Performing retinal screening examinations on all diabetic patients is an unmet need, and there are many undiagnosed and untreated cases of DR. The objective of this study was to develop robust diagnostic technology to automate DR screening. Referral of eyes with DR to an ophthalmologist for further evaluation and treatment would aid in reducing the rate of vision loss, enabling timely and accurate diagnoses. Design: We developed and evaluated a data-driven deep learning algorithm as a novel diagnostic tool for automated DR detection. The algorithm processed color fundus images and classified them as healthy (no retinopathy) or having DR, identifying relevant cases for medical referral. Methods: A total of 75,137 publicly available fundus images from diabetic patients were used to train and test an artificial intelligence model to differentiate healthy fundi from those with DR. A panel of retinal specialists determined the ground truth for our data set before experimentation. We also tested our model using the public MESSIDOR 2 and E-Ophtha databases for external validation. Information learned in our automated method was visualized readily through an automatically generated abnormality heatmap, highlighting subregions within each input fundus image for further clinical review. Main outcome measures: We used area under the receiver operating characteristic curve (AUC) as a metric to measure the precision-recall trade-off of our algorithm, reporting associated sensitivity and specificity metrics on the receiver operating characteristic curve. Results: Our model achieved a 0.97 AUC with a 94% and 98% sensitivity and specificity, respectively, on 5-fold cross-validation using our local data set. Testing against the independent MESSIDOR 2 and E-Ophtha databases achieved a 0.94 and 0.95 AUC score, respectively.
Conclusions: A fully data-driven artificial intelligence-based grading algorithm can be used to screen fundus photographs obtained from diabetic patients and to identify, with high reliability, which cases should be referred to an ophthalmologist for further evaluation and treatment. The implementation of such an algorithm on a global basis could reduce drastically the rate of vision loss attributed to DR.
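The AUC figures reported in studies like this one can be computed without plotting a curve at all: the AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one (the Mann-Whitney U formulation). A minimal sketch with made-up scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) statistic:
    the fraction of positive-negative pairs ranked correctly, with ties
    counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# A perfectly separated toy set has AUC 1.0; one mis-ranked pair lowers it.
perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
swapped = auc([0.9, 0.2, 0.8, 0.1], [1, 1, 0, 0])
```

Unlike sensitivity and specificity, the AUC is threshold-free, which is why it is the usual headline metric when comparing models whose operating points differ.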
Article
Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images (two orders of magnitude larger than previous datasets) consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care.
Article
Background and aims: Few large studies have evaluated the adverse events associated with therapeutic colonoscopy for colorectal neoplasia, including bleeding and bowel perforation. Our aim was to investigate factors associated with these events, using a Japanese national inpatient database. Methods: We extracted data from the nationwide Japan Diagnosis Procedure Combination database for patients who underwent therapeutic colonoscopy for colorectal neoplasia between 2013 and 2014. Therapeutic colonoscopy included endoscopic submucosal dissection (ESD), endoscopic mucosal resection (EMR) and polypectomy. Outcomes included bleeding, perforation, cerebro-cardiovascular events, and in-hospital death. A multivariable logistic regression model was used to evaluate factors associated with bleeding and bowel perforation. Results: We analyzed 345,546 patients, including 16,812 (4.9%) who underwent ESD, 219,848 (63.6%) who underwent EMR and 108,886 (31.5%) who underwent polypectomy. The rates of bleeding, bowel perforation, cardiovascular events, cerebrovascular events, and death were 32.5, 0.47, 0.05, 0.88, and 1.32 per 1,000 patients, respectively. In the multivariate analysis, a higher bleeding rate was associated with male sex, comorbid diseases, ESD, tumor size ≥2 cm, and use of drugs including low-dose aspirin, thienopyridines, non-aspirin antiplatelet drugs, novel oral anticoagulants, warfarin, non-steroidal anti-inflammatory drugs (NSAIDs), and steroids. A higher bowel perforation rate was associated with male sex, renal disease, ESD, tumor size ≥2 cm, and drugs including warfarin, NSAIDs, and steroids. Conclusions: Although the incidence of adverse events after therapeutic colonoscopy was low, several patient-related factors were significantly associated with bleeding and bowel perforation.
Article
Video capsule endoscopy (VCE) has revolutionized the diagnostic work-up in the field of small bowel diseases. Furthermore, VCE has the potential to become the leading screening technique for the entire gastrointestinal tract. Computational methods that can be implemented in software can enhance the diagnostic yield of VCE both in terms of efficiency and diagnostic accuracy. Since the appearance of the first capsule endoscope in clinical practice in 2001, information technology (IT) research groups have proposed a variety of such methods, including algorithms for detecting haemorrhage and lesions, reducing the reviewing time, localizing the capsule or lesion, assessing intestinal motility, enhancing the video quality and managing the data. Even though research is prolific (as measured by publication activity), the progress made during the past 5 years can only be considered as marginal with respect to clinically significant outcomes. One thing is clear: parallel pathways of medical and IT scientists exist, each publishing in their own area, but where do these research pathways meet? Could the proposed IT plans have any clinical effect and do clinicians really understand the limitations of VCE software? In this Review, we present an in-depth critical analysis that aims to inspire and align the agendas of the two scientific groups.
Article
A second-generation capsule endoscopy system, using the PillCam Colon 2, was developed to increase sensitivity for colorectal polyp detection compared with the first-generation system. The performance of this new system is reported. In a five-center feasibility study, second-generation capsule endoscopy was prospectively compared with conventional colonoscopy as gold standard for the detection of colorectal polyps and other colonic disease, in a cohort of patients scheduled for colonoscopy and having known or suspected colonic disease. Colonoscopy was independently performed within 10 hours after capsule ingestion. Capsule-positive but colonoscopy-negative cases were counted as false-positive. 104 patients (mean age 49.8 years) were enrolled; data from 98 were analyzed. Patient rate for polyps of any size was 44%, 53% of these patients having adenomas. No adverse events related to either procedure were reported. The capsule sensitivity for the detection of patients with polyps ≥6 mm was 89% (95% confidence interval [CI] 70-97) and for those with polyps ≥10 mm it was 88% (95% CI 56-98), with specificities of 76% (95% CI 72-78) and 89% (95% CI 86-90), respectively. Both polyps missed by colonoscopy and mismatch in polyp size by study definition lowered specificity. Overall colon cleanliness for capsule endoscopy was adequate in 78% of patients (95% CI 68-86). The new second-generation colon capsule endoscopy is a safe and effective method for visualizing the colon and detecting colonic lesions. Sensitivity and specificity for detecting colorectal polyps appear to be very good, suggesting a potential for improved accuracy compared with the first-generation system. Further prospective and comparative studies are needed.