Gastroenterology and Hepatology from Bed to Bench
©2020 RIGLD, Research Institute for Gastroenterology and Liver Diseases
The role of artificial intelligence in colon polyps detection
Pezhman Rasouli¹, Arash Dooghaie Moghadam², Pegah Eslami², Morteza Aghajanpoor Pasha³, Hamid Asadzadeh Aghdaei⁴, Azim Mehrvar⁴, Amir Nezami-Asl⁵, Shahrokh Iravani⁵, Amir Sadeghi²*, Mohammad Reza Zali²

¹ Department of Computer, West Tehran Branch, Islamic Azad University, Tehran, Iran
² Gastroenterology and Liver Diseases Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
³ Gastroenterology and Hepatobiliary Research Center, AJA University of Medical Sciences, Tehran, Iran
⁴ Basic and Molecular Epidemiology of Gastrointestinal Disorders Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
⁵ Research Center for Cancer Screening and Epidemiology, AJA University of Medical Sciences, Tehran, Iran
ABSTRACT
Over the past few decades, artificial intelligence (AI) has evolved dramatically and is believed to have a significant impact on all aspects of technology and daily life. The use of AI in the healthcare system has been growing rapidly, owing to the large amount of available data. Various AI methods, including machine learning, deep learning, and convolutional neural networks (CNNs), have been used in diagnostic imaging, helping physicians diagnose diseases accurately and determine appropriate treatment. The collection and use of huge numbers of digital images and medical records has led, over time, to the creation of big data. Currently, the interpretation of the various presentations encountered in endoscopic procedures and imaging findings is handled solely by endoscopists. Moreover, AI has been shown to be highly effective in the field of gastroenterology in terms of diagnosis, prognosis, and image processing. Herein, this review discusses different aspects of the use of AI for the early detection and treatment of gastroenterological diseases.
Keywords: Artificial intelligence, Deep learning, Polyp detection, Image processing, Computer-assisted, Colonoscopy.
(Please cite as: Rasouli P, Dooghaie Moghadam A, Aghajanpoor Pasha M, Asadzadeh Aghdaei H, Mehrvar A,
Iravani Sh, et al. The role of artificial intelligence in colon polyps detection. Gastroenterol Hepatol Bed Bench
2020;13(3):191-199).
Received: 11 May 2020; Accepted: 22 June 2020
Reprint or Correspondence: Amir Sadeghi, MD & Shahrokh Iravani, MD. Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, & Gastroenterology and Hepatobiliary Research Center, Imam Reza (501) Hospital, AJA University of Medical Sciences, Tehran, Iran.
E-mail: amirsadeghimd@yahoo.com & ltrc.tehran@gmail.com
ORCID ID: 0000-0002-9300-1066

Introduction
Based on the reports from the Iranian Annual National Cancer Registration, colorectal cancer (CRC) is the fifth and third major cause of death in women and men, respectively (1). It is well-known that almost all cases of CRC originate from colon polyps (2). The majority of adenomatous polyps could progress to CRC within approximately ten years (3). Currently, the most effective method of preventing CRC is regular screening and early polyp detection (4). Colonoscopy is the most reliable tool for polyp detection and resection (4).
Despite significant capabilities of colonoscopy in
screening and polyp detection, some polyps may be missed during the procedure, which can allow cancer to develop (5, 6). Similar limitations are frequently observed in other diagnostic modalities such as CT colonography (7). With the recent advances in
computer technology using artificial intelligence (AI),
numerous studies have been carried out to assess the
utility of AI in medical image processing such as
radiology and gastroenterology. The application of AI
in medical imaging analysis has been investigated in
several medical fields such as radiology, neurology,
orthopaedics, pathology, ophthalmology, and
gastroenterology (8). AI has also achieved noticeable
results by using specific methods such as machine
learning (deep learning) in the field of image
processing in medicine (9). AI has been consistently implemented in the medical field using various methods such as machine learning, decision trees, and artificial neural networks (ANNs) (10, 11). The collection and use of huge numbers of digital images and medical records has led, over time, to the creation of big data. Given the rapid advances of AI in recent years, gastroenterologists and physicians need to learn AI tools as well as their strengths and weaknesses.
Physicians should also be able to use AI in real clinical
practice in the near future. As a result, machine
learning could greatly impact medical decision making
due to an increasing need for the interpretation of large
amounts of clinical data. In this study, the aim was to
discuss the most important aspects of AI use in the
gastroenterology field.
Computer-aided polyp detection and diagnosis
We still do not have a full understanding of how the brain works. As long as human beings continue to rely principally on visual inspection to interpret endoscopic images, we will never overcome the obstacle of individual, and therefore variable, observation (12). Computers appear to be more efficient than the human eye and brain in terms of image processing and detection (13). Unlike the human brain, an independent and objective operator has the advantage that it can be used by any doctor without particular training (14). In
general, AI is regarded as an intelligent machine which
is capable of learning and problem solving by itself
(15). Currently, machine learning is the most common
method used in AI (16). This method automatically
builds mathematical algorithms based on input data
(training data) that can predict and decide in uncertain
conditions without human intervention (17).
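As a concrete illustration of this idea, the following minimal sketch (our own illustration, not code from any cited study) fits a classifier to labeled training data and then predicts a label for an unseen case; the feature values and labels are purely hypothetical.

```python
# A minimal supervised machine-learning sketch (illustrative only):
# a model "learns" a mapping from labeled training data and then
# predicts labels for new, unseen inputs without human intervention.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is a feature vector describing
# a lesion (e.g., size, texture score); labels are 0 = benign, 1 = adenoma.
X_train = [[2.0, 0.1], [8.5, 0.9], [1.5, 0.2], [9.0, 0.8]]
y_train = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)            # learn the mapping from the training data

X_new = [[7.8, 0.7]]                   # a new, unlabeled case
print(model.predict(X_new))            # predicted class
print(model.predict_proba(X_new))      # predicted class probabilities
```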
AI has been used in gastroenterology images for
processing, diagnosis, prognosis, and analysis (18).
Due to the importance currently given to big data, the collection of large-scale medical digital images provides grounds for making effective use of AI, supplying the essential resources a machine needs to learn by itself (9). Furthermore, the limitations of traditional machine learning methods can be overcome by increasing computing power with graphics processing units (10). This increases the efficiency of AI, especially when using deep learning technology, which has proven to perform well in the analysis of medical images (16).
There are some possible methods for polyp
detection and recognition. Machine learning algorithms
can have considerable influence over precise
interpretation of endoscopic images (14). Pilot trials on the use of convolutional neural networks have shown promising outcomes (19, 20). Ideally, neural networks should be trained on the most reliable visual scores (ultimately based on advanced imaging methods) provided by central overall reading (21). The CNN is one of seven kinds of artificial neural networks (Figure 1): CNNs, recursive neural networks, recurrent neural networks, multilayer perceptrons, long short-term memory networks, sequence-to-sequence models, and shallow neural networks. Although disease symptoms are considered crucial in diagnostic reasoning, there are no reliable techniques to confirm this information as a diagnostic tool (22). To handle the non-linearity inherent in the relationship between disease symptoms and underlying pathology, ANNs are used as highly adaptive modeling mechanisms (61). ANNs, such as CNNs, represent novel algorithms for solving non-linear problems that are too complex for conventional statistical analysis (23).
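To make the notion of a CNN more concrete, the sketch below defines a minimal, hypothetical image classifier in PyTorch; the layer sizes, the 224x224 input, and the two-class output (polyp vs. no polyp) are illustrative assumptions and do not correspond to any system reviewed here.

```python
import torch
import torch.nn as nn

class TinyPolypCNN(nn.Module):
    """A minimal CNN sketch: convolution filters extract image features,
    which a fully connected layer maps to two classes (polyp / no polyp)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable convolution filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)      # assumes 224x224 input frames

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = TinyPolypCNN()
dummy_frame = torch.randn(1, 3, 224, 224)   # one hypothetical RGB endoscopy frame
logits = model(dummy_frame)
print(logits.shape)                          # torch.Size([1, 2])
```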
To date, the application of AI-based image processing using deep learning through CNNs has increased significantly in medical fields for diagnostic imaging. Moreover, automatic detection of polyps (and cancer diagnosis) in colonoscopic images can be performed by CNNs (24). CNNs are particularly applied to analyze visual
video recognition, skin cancer classification, diagnostic
and radiation oncology and diabetic retinopathy,
histologic classification of gastric biopsy, and
characterization of colorectal lesions using
endocytoscopy (24, 25). As a result, CNN-based detection of gastrointestinal cancer at an early stage in medical images is the single most effective measure that can lead to a reduced rate of gastrointestinal cancer mortality (57).
Image processing can be performed by extracting pixel data from high-quality images and integrating
detection patterns into the algorithm (i.e., a computer-aided diagnosis algorithm based on neural networks) (14).
These non-discriminatory image data can be further
linked to histological data. Moreover, it has been
recently shown that this approach is more compatible
with ulcerative colitis histology and is considered to be
the best predictor of sustained clinical remission in
these patients (26).
Furthermore, the use of big data provides
convenient input data for training. Moreover, the rapid
development of computational power allows
researchers to overcome previous restrictions (27).
Figure 1. a) Neural network structure: An artificial neural network (ANN) is a computational model for processing information, resembling the way biological neural systems in the human brain process data. ANNs have been applied to speech recognition, image analysis, and adaptive control. b) Neuron: Each layer of a neural network is made up of numerous neurons, often called nodes. A node, as the primary computational unit, is a mathematical function that simulates the functioning of a biological neuron. A neuron consists of a set of inputs, a set of weights, and an activation function; it transforms its input data into a single output, which can then be used as input for another layer of neurons. c) Input: Data imported from the outside world into the neural network are called inputs. Inputs are scaled by the weight vectors and summed. The input layer does not perform any computation (input nodes) and simply passes data to the hidden layers. d) Weight: Each neuron has a weight vector whose length equals the number of inputs to that neuron. Because each raw input can have many features, the number of weights is determined by those features; for instance, in a diabetes prediction model, the inputs might be height, age, smoking status, and so on, each with its own weight. e) Output: The output neurons receive the results of the hidden layers and transfer the computed information from the neural network to the outside world. The range of the output is controlled by an activation function so that it falls within an acceptable interval, often between 0 and 1. The output of a neuron can also be used as input to the neurons of another layer, which repeats the same computation. f) Hidden nodes: The hidden layer is made up of numerous neurons, called hidden neurons, which perform the mathematical computations that transform the input data into signals passed from the input layer to the output layer.
Input images are filtered using multiple specialized filters to extract specific features and create multiple feature maps. This filtering process is called convolution (28). A learning method that allows the convolution filters to produce the best feature maps is essential for the success of CNNs (21).
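As a small, self-contained illustration of the convolution operation itself (not tied to any cited system), the following sketch applies a hypothetical 3x3 edge-sensitive filter to a toy grayscale image with SciPy and prints the resulting feature map.

```python
import numpy as np
from scipy.signal import convolve2d

# A toy 6x6 grayscale "image" with a bright square in the middle.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# A hypothetical 3x3 filter (Laplacian-like) that responds to edges.
kernel = np.array([[0, -1,  0],
                   [-1, 4, -1],
                   [0, -1,  0]], dtype=float)

# Convolving the image with the filter produces a feature map that
# highlights where the filter's pattern (here, edges) occurs.
feature_map = convolve2d(image, kernel, mode="same")
print(feature_map)
```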
As AI image analysis is currently concentrated on the gastrointestinal field, several machine learning models have demonstrated promising outcomes in disease diagnosis and prediction (29). Although endoscopic imaging programs have reduced mortality caused by gastrointestinal diseases, these diseases still lead to death on a large scale globally and impose a huge economic burden (10). Hence,
gastroenterologists have become enthusiastically
interested in the application of AI and its analysis,
especially the use of CNN and support vector machines
in medical image processing. In addition, AI has been
significantly used in the diagnosis of non-digestive
diseases as well as infections (28, 30).
Although there has been promising progress in polyp auto-identification methods, they have generally not been evaluated prospectively (31-33). However, in a study performed by Klare et al. (34), new detection software was validated under real-time conditions (on 55 sequences of daily colonoscopies). Similar results
were reported in terms of detection capability between
doctors and the developed software. Accordingly,
polyp diagnosis and adenoma diagnosis rates reported
by endoscopists were 56.4% and 30.9%, respectively.
However, these rates were reported as 50.9% and
29.1% by the software, respectively. In another study,
Wang et al. (35) implemented a new deep learning
algorithm using data from 1,290 patients, which was
then validated with 27,113 new colonoscopy images
collected from 1,138 patients. The proposed technique showed strong performance, with a sensitivity of 94.38% and an area under the receiver operating characteristic curve (AUROC) of 0.984 for detecting at least one polyp (35). Among the different areas of the gastrointestinal field, AI techniques developed for colonoscopy have shown promising results in polyp detection, a task performed routinely during the colonoscopy procedure (36).
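For readers less familiar with these metrics, the short sketch below shows how sensitivity and AUROC are typically computed with scikit-learn; the labels and scores are made up for illustration and are not data from the cited studies.

```python
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical ground truth (1 = frame contains a polyp) and model outputs.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.92, 0.15, 0.78, 0.66, 0.40, 0.05, 0.88, 0.30]   # predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]              # thresholded labels

sensitivity = recall_score(y_true, y_pred)   # true positives / all actual positives
auroc = roc_auc_score(y_true, y_score)       # threshold-independent ranking quality
print(f"Sensitivity: {sensitivity:.2f}, AUROC: {auroc:.2f}")
```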
AI training relies on a collection of numerous medical images, including real-time video and static images, which serve as the primary source for image processing and yield statistical outputs for further interpretation (Figure 2). These data can be a valuable resource for the detection of missed colorectal polyps, which are directly linked to the development of colon cancer (10). Urban et al. (37) used a CNN system to detect colonic polyps, training it on 8,641 hand-labeled medical images and 20 colonoscopy videos in several combinations (37). The CNN achieved real-time polyp detection with an AUROC of 0.991 and an accuracy of 96.4%; moreover, when applied to test colonoscopy videos, it identified nine additional polyps compared with expert endoscopists (28).
Computer-assisted magnetic resonance imaging (MRI) of the colon
The currently available clinical tools for assessing the colon (e.g., assessment of constipation by radiography) have not shown reliable diagnostic performance across observers (38). In several studies, automated and semi-automated methods have been used in virtual colonography to segment colon images from computerized tomography (CT) scans (39-45). Binary thresholding, region growing, and morphological operations have been applied in most of these studies for colon segmentation from CT images with little or no user interaction (39-43, 45). These studies rely on gas filling of the colon or contrast enhancement to obtain adequate contrast of the colon and colon wall, which in turn leads to alteration of the colon morphology by distension (bloating) and to unnecessary radiation (21). MRI has facilitated the study of the morphology of the unprepared colon and the quantification of regional colonic volumes without ionizing radiation (46). Image segmentation must be
performed manually in order to acquire the colon size,
which is a tedious and time-consuming process. Image
segmentation using semi-automated software can be
beneficial to reduce segmentation time (47, 48).
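As an illustration of the classical operations named above (binary thresholding and morphological processing), the following sketch segments a toy image with scikit-image; it is a simplified, hypothetical example rather than the pipeline of any cited study.

```python
import numpy as np
from skimage import filters, morphology, measure

# A toy grayscale "slice": a bright elliptical region on a dark background
# stands in for a gas- or contrast-filled lumen.
image = np.zeros((64, 64))
rr, cc = np.ogrid[:64, :64]
image[((rr - 32) / 20) ** 2 + ((cc - 32) / 10) ** 2 < 1] = 0.9
image += 0.05 * np.random.default_rng(0).random(image.shape)   # mild noise

# 1) Binary thresholding (Otsu) separates the bright region from the background.
mask = image > filters.threshold_otsu(image)

# 2) Morphological operations clean up the mask (smooth edges, drop specks).
mask = morphology.binary_closing(mask, morphology.disk(2))
mask = morphology.remove_small_objects(mask, min_size=20)

# 3) Label connected regions and report their sizes (a crude per-slice "volume").
labels = measure.label(mask)
for region in measure.regionprops(labels):
    print(f"Region {region.label}: {region.area} pixels")
```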
Furthermore, in terms of reliability of detection,
automated methods have shown better results when
compared to manual assessments, which may
ultimately result in reducing the variability of
observations (49). The noticeable reduction in contrast between gut tissue and surrounding tissue, compared with
imaging using contrast agents or bowel preparation, has made automatic segmentation complicated in MRI of the unprepared colon (50). In this regard, results of automatic colon segmentation of MRI images are limited by the presence of image inhomogeneities (51). In the semi-automatic technique, the colon volume does not reflect a fasted or a full state, but rather may lie in between (52). However, the purpose of that study was to validate a semi-automated image segmentation method, as colon status does not affect observers’ agreement on segmentation (52).
MRI has been used to evaluate the effect of corticotrophin-releasing hormone on the regional volume of the colon. It can also be applied in drug studies to assess the effect of drugs on the regional size of the colon and on fecal distribution (53). A modern semi-automated method for segmenting the unprepared colon and feces from MRI images has raised hopes for use in clinical practice in the near future (54). This method provides quantification of colon volume in the unprepared colon, in contrast with routine non-quantitative examinations such as radiography. It enables objective measures of fecal volume and colon morphology that could also increase diagnostic value, particularly in patients with constipation and abdominal pain (50). Thus, this method can be used to guide physicians towards more accurate diagnoses as well as efficient therapies (52).
AI pitfalls in gastroenterology
In this section, several disadvantages of AI related
to its direct implication for computer-aided detection
and diagnosis are discussed. These potential
disadvantages of AI in colonoscopy were pointed out in
a prospective study using real-time polyp
characterization. It was shown that the time required for colonoscopy increased from 35 to 47 seconds per polyp characterization (36). Moreover, polyp characterization may reduce endoscopists' concentration and could result in missed or misidentified polyps (55). It should also be noted
that reliance on AI alone could reduce the number of
skillful endoscopists in the near future. Further
prospective studies are needed to assess these
drawbacks in addition to the effectiveness of AI (36).
Although many studies have been conducted on
polyp characterization systems, they have been
evaluated and developed using video images and saved
images. These images are often taken as ideal images
of diagnosed lesions and polyps, as selected by
endoscopists. Thus, they may appear to be ineffective in daily practice (56). Moreover, histopathological findings may not always exist. In this regard, the lack of data on the severity of inflammation and dysplasia, as observed in ulcerative and Crohn’s colitis, is another limitation (19, 57). AI promises to empower physicians by reducing the significant burden of clinical documentation, thereby improving the quality of patient care and enhancing physician-patient interaction (58). For instance, AI could help radiologists validate raw image data to facilitate the recognition of complex patterns, which could ultimately result in unbiased reports (59).

Figure 2. The neural network of artificial intelligence in real-time data processing. This schematic outlines a neural network method for a representative detection task, such as diagnosis of a colon polyp. a) The first stage of image processing by the neural network is the capture of real-time colon videos from patients examined by physicians to diagnose polyps. b) The real-time videos are prepared for the method, in particular the properties of the images; the resulting data are called input data. c) At this stage, the neural network uses the input data for the core processing, which involves many layers of neurons, and promising polyp-detection results are produced as output data through two types of processes. d) The provided output illustrates two kinds of data: a polyp marked in the real-time video and the polyp isolated with no background tissue.
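To make the pipeline of Figure 2 concrete, here is a minimal, hypothetical sketch of frame-by-frame inference on a colonoscopy video using OpenCV and a previously trained PyTorch model; the model file name, input size, and simple probability overlay are illustrative assumptions, not a description of any cited system.

```python
import cv2
import torch

# Hypothetical pre-trained polyp detector (e.g., the TinyPolypCNN sketched earlier).
model = torch.load("polyp_detector.pt", map_location="cpu")   # assumed model file
model.eval()

cap = cv2.VideoCapture("colonoscopy_video.mp4")   # or 0 for a live camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # a) Acquire a frame; b) prepare it as input data (resize, scale, reorder channels).
    resized = cv2.resize(frame, (224, 224))
    tensor = torch.from_numpy(resized).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    # c) Forward pass through the neural network.
    with torch.no_grad():
        polyp_prob = torch.softmax(model(tensor), dim=1)[0, 1].item()
    # d) Produce output: here, simply overlay the predicted probability on the frame.
    cv2.putText(frame, f"polyp prob: {polyp_prob:.2f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("CADe demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```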
AI future in gastroenterology
To overcome human error in tumor detection and
avoid excessive biases, the use of AI in the
gastroenterology field appears to have a bright future
ahead when compared to the use of advanced
endoscopic techniques alone. Endoscopic methods have
not shown promising results in assessing and
diagnosing inflammatory bowel diseases (60). Despite
numerous reports regarding the utility of AI tools, most
studies conducted on this issue were retrospective in
nature (61). In these circumstances, the presence of
potential inherent bias in selecting data cannot be
ignored (9). Hence, rigorous validation of AI performance is crucial before its application in real clinical practice (62). Overfitting on the training data and spectrum bias (class imbalance) should also be considered by physicians, as they may affect AI performance, and should be avoided when evaluating that performance (63). It is logical that as the amount of
data increases, the performance and accuracy of
machine learning will also improve. However, it is
difficult to develop a practical machine learning model, owing to the paucity of hand-labeled data, a consequence of the confidentiality of private medical records (29). A method that has been suggested to overcome this issue is data augmentation, in which artificially modified copies of existing data enlarge the training set (64) (see the sketch following this paragraph). Replacing the current ANN methods with neural networks backed by more powerful computing capability could also help minimize these problems (65). The primary purpose of training these neural networks is to enhance diagnosis and to differentiate between the features of hyperplastic and adenomatous polyps using thousands of medical polyp images from a database (66). Recent studies based on recorded colonoscopy videos or medical images have shown that such neural network models can predict polyp pathology with an accuracy ranging from 70% to 96%, with negative predictive values exceeding 90% (37, 67-74). Effectiveness in real clinical practice is not simply a matter of recognition or classification accuracy; careful evaluation is needed to demonstrate actual clinical benefit, cost-effectiveness beyond academic performance, and physician satisfaction (13, 63).
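The sketch below illustrates what such data augmentation can look like in practice, using torchvision transforms on a hypothetical endoscopic training image; the specific transforms, parameters, and file names are illustrative assumptions.

```python
from PIL import Image
from torchvision import transforms

# Hypothetical augmentation pipeline: each pass produces an artificially
# modified copy of the original image, enlarging the effective training set.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

original = Image.open("polyp_frame.png")        # assumed hand-labeled training image
augmented_copies = [augment(original) for _ in range(5)]
for i, img in enumerate(augmented_copies):
    img.save(f"polyp_frame_aug_{i}.png")        # modified copies keep the original label
```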
Furthermore, AI is not perfect; although AI is often framed as replacing human intelligence, the term “augmented intelligence” emphasizes that AI is intended to improve or extend human intelligence (36). Although in medical practice AI aims to improve the work process by increasing accuracy and decreasing the number of unintentional errors, imprecise data and poorly performing models lead to incorrect classification or diagnosis (18).
A crucial aspect of healthcare and the practice of medicine is the influence of AI on the communication between doctors and patients, which has not yet been evaluated. Hence, as AI research continues to grow, this dimension of AI techniques should also be assessed (75). Such an AI platform could accelerate the adoption of a ‘resect and discard’ strategy for diminutive colorectal polyps if validated in dedicated clinical trials with patients during live procedures (71).
In the future, a similar deep learning approach could
facilitate the endoscopic examination by highlighting
detection areas of possible adenomatous or serrated
mucosa for accurate inspection (16). Moreover, the
method described here can be used to train programs that distinguish between hyperplastic and adenomatous polyps, and it has the potential to improve the detection of different types of lesions in endoscopic images, helping to resolve clinical problems such as the identification of dysplasia in Barrett’s esophagus and the detection of intestinal metaplasia and dysplasia in the gastric mucosa (76). Computer-assisted polyp detection is materializing with the rise of autofluorescence and AI/deep learning methods, which require no operator for each part of the computer-assisted polyp detection process (60, 76). In addition, deep learning is still in its nascent stage, and further research is needed to better
understand its value for widespread clinical
implementation of optical polyp diagnosis (35, 71).
Accordingly, the widespread use of this new tool is still
limited in the diagnosis of polyps in medical imaging.
Most recent studies are based on pre-recorded colonoscopy images, and only a few include real-time images and polyp detection during colonoscopy in patients (36). There are also no focused randomized trials providing definitive confirmation of this method as a suitable alternative to pathology- or endoscopist-based diagnosis (77).
Conclusion
The application of AI in gastroenterology has shown varied results in statistical and image processing since AI was initially introduced in the 1950s. AI and its subcategories, such as machine learning and neural networks, have evolved dramatically in recent years and play a crucial role in the diagnosis of diseases in the gastroenterology field. Moreover, AI will continue to improve different aspects of healthcare services, even though it is still in its infancy and has some pitfalls. Accordingly, promising results from recent studies have shown the effectiveness of this approach in the accurate diagnosis of polyps, and consequently unsupervised, widespread clinical implementation may be imminent.
Conflict of interests
The authors declare that they have no conflict of
interest.
References
1. Azadeh S, Moghimi-Dehkordi B, Fatem SR,
Pourhoseingholi MA, Ghiasi S, Zali MR. Colorectal cancer in
Iran: an epidemiological study. Asian Pac J Cancer Prev
2008;9:123-6.
2. Simon K. Colorectal cancer development and advances in
screening. Clin Interv Aging 2016;11:967-76.
3. Noffsinger AE. Serrated polyps and colorectal cancer: new
pathway to malignancy. Annu Rev Pathol 2009;4:343-64.
4. Levin B, Lieberman DA, McFarland B, Smith RA, Brooks D,
Andrews KS, et al. Screening and surveillance for the early
detection of colorectal cancer and adenomatous polyps, 2008: a
joint guideline from the American Cancer Society, the US
Multi-Society Task Force on Colorectal Cancer, and the
American College of Radiology. CA Cancer J Clin
2008;58:130-60.
5. Kim NH, Jung YS, Jeong WS, Yang HJ, Park SK, Choi K, et
al. Miss rate of colorectal neoplastic polyps and risk factors for
missed polyps in consecutive colonoscopies. Intest Res
2017;15:411-8.
6. Ahn SB, Han DS, Bae JH, Byun TJ, Kim JP, Eun CS. The
Miss Rate for Colorectal Adenoma Determined by Quality-
Adjusted, Back-to-Back Colonoscopies. Gut Liver 2012;6:64-
70.
7. Macari M, Bini EJ, Jacobs SL, Lui YW, Laks S, Milano A, et
al. Significance of missed polyps at CT colonography. AJR Am
J Roentgenol 2004;183:127-34.
8. Topol EJ. High-performance medicine: the convergence of
human and artificial intelligence. Nat Med 2019;25:44-56.
9. Savadjiev P, Chong J, Dohan A, Vakalopoulou M, Reinhold
C, Paragios N, et al. Demystification of AI-driven medical
image interpretation: past, present and future. Eur Radiol
2019;29:1616-24.
10. Yang YJ, Bang CS. Application of artificial intelligence in
gastroenterology. World J Gastroenterol 2019;25:1666-83.
11. Yazdani Charati J, Janbabaei G, Alipour N, Mohammadi S,
Ghorbani Gholiabad S, Fendereski A. Survival prediction of
gastric cancer patients by Artificial Neural Network model.
Gastroenterol Hepatol Bed Bench 2018;11:110-7.
12. Krupinski EA. Current perspectives in medical image
perception. Atten Percept Psychophys 2010;72:1205-17.
13. Travis SP, Schnell D, Feagan BG, Abreu MT, Altman DG,
Hanauer SB, et al. The Impact of Clinical Information on the
Assessment of Endoscopic Activity: Characteristics of the
Ulcerative Colitis Endoscopic Index Of Severity [UCEIS]. J
Crohns Colitis 2015;9:607-16.
14. Bossuyt P, Vermeire S, Bisschops R. Scoring endoscopic
disease activity in IBD: artificial intelligence sees more and
better than we do. Gut 2020;69:788-9.
15. Russell SJ, Norvig P. Artificial intelligence: a modern
approach. 2016.
16. Bini SA. Artificial Intelligence, Machine Learning, Deep
Learning, and Cognitive Computing: What Do These Terms
Mean and How Will They Impact Health Care? J Arthroplasty
2018;33:2358-61.
17. Murphy KP. Machine learning: a probabilistic perspective.
Cambridge, Mass.: MIT Press; 2013.
18. Long E, Lin H, Liu Z, Wu X, Wang L, Jiang J, et al. An
artificial intelligence platform for the multihospital
collaborative management of congenital cataracts. Nat Biomed
Eng 2017;1.
19. Ozawa T, Ishihara S, Fujishiro M, Saito H, Kumagai Y,
Shichijo S, et al. Novel computer-assisted diagnosis system for
endoscopic disease activity in patients with ulcerative colitis.
Gastrointest Endosc 2019;89:416-21.
20. Zezos P, Borowski K, Bajaj G, Boland K, Sheasgreen C,
Tessolini JM, et al. 439 - Toward Computer-Based Automated
Mayo Score Classification in Ulcerative Colitis through
Classical and Deep Machine Learning. Gastroenterology
2018;154:S99-100.
21. Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS,
et al. Application of convolutional neural network in the
diagnosis of the invasion depth of gastric cancer based on
conventional endoscopy. Gastrointest Endosc 2019;89:806-15.
22. Bibi H, Nutman A, Shoseyov D, Shalom M, Peled R,
Kivity S, et al. Prediction of emergency department visits for
respiratory symptoms using an artificial neural network. Chest
2002;122:1627-32.
23. Cross SS, Harrison RF, Kennedy RL. Introduction to
neural networks. Lancet 1995;346:1075-9.
24. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D,
Narayanaswamy A, et al. Development and Validation of a
Deep Learning Algorithm for Detection of Diabetic
Retinopathy in Retinal Fundus Photographs. JAMA
2016;316:2402-10.
25. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau
HM, et al. Corrigendum: Dermatologist-level classification of
skin cancer with deep neural networks. Nature 2017;546:686.
26. Bossuyt P, Nakase H, Vermeire S, Willekens H, Ikemoto
Y, Makino T, et al. 436 - Automated Digital Calculation of
Endoscopic Inflammation in Ulcerative Colitis: Results of the
Red Density Study. Gastroenterology 2018;154:S98-9.
27. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature
2015;521:436.
28. Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y,
Ishiyama A, et al. Diagnostic outcomes of esophageal cancer
by artificial intelligence using convolutional neural networks.
Gastrointest Endosc 2019;89:25-32.
29. Danaee P, Ghaeini R, Hendrix DA. A deep learning
approach for cancer detection and relevant gene identification.
Pac Symp Biocomput 2017;22:219-29.
30. Takiyama H, Ozawa T, Ishihara S, Fujishiro M, Shichijo S,
Nomura S, et al. Automatic anatomical classification of
esophagogastroduodenoscopy images using deep convolutional
neural networks. Sci Rep 2018;8:7497.
31. Wang H, Liang Z, Li LC, Han H, Song B, Pickhardt PJ, et
al. An adaptive paradigm for computer-aided detection of
colonic polyps. Phys Med Biol 2015;60:7207-28.
32. Piardi T, Lhuaire M, Bruno O, Memeo R, Pessaux P,
Kianmanesh R, et al. Vascular complications following liver
transplantation: A literature review of advances in 2015. World
J Hepatol 2016;8:36-57.
33. Billah M, Waheed S, Rahman MM. An Automatic
Gastrointestinal Polyp Detection System in Video Endoscopy
Using Fusion of Color Wavelet and Convolutional Neural
Network Features. Int J Biomed Imaging 2017;2017:9545920.
34. Klare P, Sander C, Prinzen M, Haller B, Nowack S,
Abdelhafez M, et al. Automated polyp detection in the
colorectum: a prospective study (with videos). Gastrointest
Endosc 2019;89:576-82.
35. Wang P, Xiao X, Glissen Brown JR, Berzin TM, Tu M,
Xiong F, et al. Development and validation of a deep-learning
algorithm for the detection of polyps during colonoscopy. Nat
Biomed Eng 2018;2:741-8.
36. Mori Y, Kudo SE, Misawa M, Saito Y, Ikematsu H, Hotta
K, et al. Real-Time Use of Artificial Intelligence in
Identification of Diminutive Polyps During Colonoscopy: A
Prospective Study. Ann Intern Med 2018;169:357-66.
37. Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes
W, et al. Deep Learning Localizes and Identifies Polyps in Real
Time With 96% Accuracy in Screening Colonoscopy.
Gastroenterology 2018;155:1069-78.
38. Moylan S, Armstrong J, Diaz-Saldano D, Saker M, Yerkes
EB, Lindgren BW. Are abdominal x-rays a reliable way to
assess for constipation? J Urol 2010;184:1692-8.
39. Wyatt CL, Ge Y, Vining DJ. Automatic segmentation of
the colon for virtual colonoscopy. Comput Med Imaging Graph
2000;24:1-9.
40. Franaszek M, Summers RM, Pickhardt PJ, Choi JR. Hybrid
segmentation of colon filled with air and opacified fluid for CT
colonography. IEEE Trans Med Imaging 2006;25:358-68.
41. Lu L, Zhao J. An improved method of automatic colon
segmentation for virtual colon unfolding. Comput Methods
Programs Biomed 2013;109:1-12.
42. Bert A, Dmitriev I, Agliozzo S, Pietrosemoli N,
Mandelkern M, Gallo T, et al. An automatic method for colon
segmentation in CT colonography. Comput Med Imaging
Graph 2009;33:325-31.
43. Wyatt CL, Ge Y, Vining DJ. Segmentation in virtual
colonoscopy using a geometric deformable model. Comput
Med Imaging Graph 2006;30:17-30.
44. Losnegard A, Hodneland E, Lundervold A, Hysing LB,
Muren LP. Semi-automated segmentation of the sigmoid and
descending colon for radiotherapy planning using the fast
marching method. Phys Med Biol 2010;55:5569-84.
45. Bielen D, Kiss G. Computer-aided detection for CT
colonography: update 2007. Abdom Imaging 2007;32:571-81.
46. Pritchard SE, Marciani L, Garsed KC, Hoad CL,
Thongborisute W, Roberts E, et al. Fasting and postprandial
volumes of the undisturbed colon: normal values and changes
in diarrhea-predominant irritable bowel syndrome measured
using serial MRI. Neurogastroenterol Motil 2014;26:124-30.
47. Iorio M, Spalletta G, Chiapponi C, Luccichenti G, Cacciari
C, Orfei MD, et al. White matter hyperintensities segmentation:
a new semi-automated method. Front Aging Neurosci
2013;5:76.
48. Bartko JJ. Measurement and reliability: statistical thinking
considerations. Schizophr Bull 1991;17:483-9.
49. Haas S, Brock C, Krogh K, Gram M, Nissen TD, Lundby
L, et al. Cortical evoked potentials in response to rapid balloon
distension of the rectum and anal canal. Neurogastroenterol
Motil 2014;26:862-73.
50. Oommen J, Oto A. Contrast-enhanced MRI of the small
bowel in Crohn's disease. Abdom Imaging 2011;36:134-41.
51. Tielbeek JA, Vos FM, Stoker J. A computer-assisted model
for detection of MRI signs of Crohn's disease activity: future or
fiction? Abdom Imaging 2012;37:967-73.
52. Sandberg TH, Nilsson M, Poulsen JL, Gram M, Frokjaer
JB, Ostergaard LR, et al. A novel semi-automatic segmentation
method for volumetric assessment of the colon based on
magnetic resonance imaging. Abdom Imaging 2015;40:2232-
41.
53. Pritchard SE, Garsed KC, Hoad CL, Lingaya M, Banwait
R, Thongborisute W, et al. Effect of experimental stress on the
small bowel and colon in healthy humans. Neurogastroenterol
Motil 2015;27:542-9.
54. Sled JG, Zijdenbos AP, Evans AC. A nonparametric
method for automatic correction of intensity nonuniformity in
MRI data. IEEE Trans Med Imaging 1998;17:87-97.
55. Mori Y, Kudo SE. Detecting colorectal polyps via machine
learning. Nat Biomed Eng 2018;2:713-4.
56. Vinsard DG, Mori Y, Misawa M, Kudo SE, Rastogi A,
Bagci U, et al. Quality assurance of computer-aided detection
and diagnosis in colonoscopy. Gastrointest Endosc 2019;90:55-
63.
57. Maeda Y, Kudo SE, Mori Y, Misawa M, Ogata N,
Sasanuma S, et al. Fully automated diagnostic system with
artificial intelligence using endocytoscopy to identify the
presence of histologic inflammation associated with ulcerative
colitis (with video). Gastrointest Endosc 2019;89:408-15.
58. Coiera E, Kocaballi B, Halamka J, Laranjo L. The digital
scribe. NPJ Digit Med 2018;1:58.
59. Cabitza F, Rasoini R, Gensini GF. Unintended
Consequences of Machine Learning in Medicine. JAMA
2017;318:517-8.
60. Syed AB, Zoga AC. Artificial Intelligence in Radiology:
Current Technology and Future Directions. Semin
Musculoskelet Radiol 2018;22:540-5.
61. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts H.
Artificial intelligence in radiology. Nat Rev Cancer
2018;18:500-10.
62. Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S,
Ozawa T, et al. Application of artificial intelligence using a
convolutional neural network for detecting gastric cancer in
endoscopic images. Gastric Cancer 2018;21:653-60.
63. Kubota K, Kuroda J, Yoshida M, Ohta K, Kitajima M.
Medical image analysis: computer-aided diagnosis of gastric
cancer invasion on endoscopic images. Surg Endosc
2012;26:1485-9.
64. Bae HJ, Kim CW, Kim N, Park B, Kim N, Seo JB, et al. A
Perlin Noise-Based Augmentation Strategy for Deep Learning
with Small Data Samples of HRCT Images. Sci Rep
2018;8:17687.
65. Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T,
Maida A. Deep learning in spiking neural networks. Neural
Netw 2019;111:47-63.
66. Pace F, Buscema M, Dominici P, Intraligi M, Baldi F,
Cestari R, et al. Artificial neural networks are able to recognize
gastro-oesophageal reflux disease patients solely on the basis of
clinical data. Eur J Gastroenterol Hepatol 2005;17:605-10.
67. Mori Y, Kudo SE, Wakamura K, Misawa M, Ogawa Y,
Kutsukawa M, et al. Novel computer-aided diagnostic system
for colorectal lesions by using endocytoscopy (with videos).
Gastrointest Endosc 2015;81:621-9.
68. Kominami Y, Yoshida S, Tanaka S, Sanomura Y,
Hirakawa T, Raytchev B, et al. Computer-aided diagnosis of
colorectal polyp histology by using a real-time image
recognition system and narrow-band imaging magnifying
colonoscopy. Gastrointest Endosc 2016;83:643-9.
69. Misawa M, Kudo SE, Mori Y, Nakamura H, Kataoka S,
Maeda Y, et al. Characterization of Colorectal Lesions Using a
Computer-Aided Diagnostic System for Narrow-Band Imaging
Endocytoscopy. Gastroenterology 2016;150:1531-2.
70. Mori Y, Kudo SE, Chiu PW, Singh R, Misawa M,
Wakamura K, et al. Impact of an automated system for
endocytoscopic diagnosis of small colorectal lesions: an
international web-based study. Endoscopy 2016;48:1110-8.
71. Byrne MF, Chapados N, Soudan F, Oertel C, Linares Perez
M, Kelly R, et al. Real-time differentiation of adenomatous and
hyperplastic diminutive colorectal polyps during analysis of
unaltered videos of standard colonoscopy using a deep learning
model. Gut 2019;68:94-100.
72. Komeda Y, Handa H, Watanabe T, Nomura T, Kitahashi M,
Sakurai T, et al. Computer-Aided Diagnosis Based on
Convolutional Neural Network System for Colorectal Polyp
Classification: Preliminary Experience. Oncology 2017;93:30-
4.
73. Zhang R, Zheng Y, Mak TW, Yu R, Wong SH, Lau JY, et
al. Automatic Detection and Classification of Colorectal Polyps
by Transferring Low-Level CNN Features From Nonmedical
Domain. IEEE J Biomed Health Inform 2017;21:41-7.
74. Chen PJ, Lin MC, Lai MJ, Lin JC, Lu HH, Tseng VS.
Accurate Classification of Diminutive Colorectal Polyps Using
Computer-Aided Analysis. Gastroenterology 2018;154:568-75.
75. Ichimasa K, Kudo SE, Mori Y, Misawa M, Matsudaira S,
Kouyama Y, et al. Correction: Artificial intelligence may help
in predicting the need for additional surgery after endoscopic
resection of T1 colorectal cancer. Endoscopy 2018;50:C2.
76. Balyen L, Peto T. Promising Artificial Intelligence-
Machine Learning-Deep Learning Algorithms in
Ophthalmology. Asia Pac J Ophthalmol 2019;8:264-72.
77. Djinbachian R, Dube AJ, von Renteln D. Optical Diagnosis
of Colorectal Polyps: Recent Developments. Curr Treat
Options Gastroenterol 2019;17:99-114.