Review

Clinicians' Guide to Artificial Intelligence in Colon Capsule Endoscopy—Technology Made Simple

Ian I. Lei 1, Gohar J. Nia 1, Elizabeth White 2, Hagen Wenzek 2, Santi Segui 3, Angus J. M. Watson 4, Anastasios Koulaouzidis 5,6,7 and Ramesh P. Arasaradnam 1,8,9,*

1 Department of Gastroenterology, University Hospital of Coventry and Warwickshire, Coventry CV2 2DX, UK; brian.lei@uhcw.nhs.uk (I.I.L.)
2 Corporate Health International, Inverness IV2 5NA, UK
3 Mathematics and Computer Science Department, The University of Barcelona, 585, 08007 Barcelona, Spain
4 Institute of Applied Health Sciences, University of Aberdeen, Aberdeen AB24 3FX, UK
5 Department of Gastroenterology, Odense University Hospital & Svendborg Sygehus, 5700 Odense, Denmark
6 Department of Clinical Research, University of Southern Denmark (SDU), 5000 Odense, Denmark
7 Department of Social Medicine and Public Health, Pomeranian Medical University, 70-204 Szczecin, Poland
8 Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
9 Department of Gastroenterology, Leicester Cancer Centre, University of Leicester, Leicester LE1 7RH, UK
* Correspondence: r.arasaradnam@warwick.ac.uk
Abstract: Artificial intelligence (AI) applications have become widely popular across the healthcare ecosystem. Colon capsule endoscopy (CCE) was adopted in the NHS England pilot project following the recent COVID pandemic's impact. It demonstrated its capability to relieve the national backlog in endoscopy. As a result, AI-assisted colon capsule video analysis has become gastroenterology's most active research area. However, with rapid AI advances, mastering these complex machine learning concepts remains challenging for healthcare professionals. This forms a barrier for clinicians to take on this new technology and embrace the new era of big data. This paper aims to bridge the knowledge gap between the current CCE system and the future, fully integrated AI system. The primary focus is on simplifying the technical terms and concepts in machine learning. This will hopefully address the general "fear of the unknown in AI" by helping healthcare professionals understand the basic principles of machine learning in capsule endoscopy and apply this knowledge in their future interactions and adaptation to AI technology. It also summarises the evidence of AI in CCE and its impact on diagnostic pathways. Finally, it discusses the unintended consequences of using AI, ethical challenges, potential flaws, and bias within clinical settings.

Keywords: artificial intelligence (AI); machine learning (ML); deep learning (DL); convolutional neural networks (CNN); decision-making systems (DMS); colon capsule endoscopy (CCE)
1. Introduction

The recent decade's profound technological advances have considerably transformed the medical landscape. Artificial intelligence (AI) applications have become widely popular in genomic analysis, robotic surgery, prediction and support diagnosis, and treatment decision-making across the healthcare ecosystem. There has also been significant interest in AI solutions in gastroenterology in recent years. With many studies published and potential opportunities available in this field, gastroenterology and endoscopy healthcare professionals must understand and evaluate AI studies as critical stakeholders in successfully developing and implementing AI technologies.

Colon capsule endoscopy (CCE) was first tested in 2006, with the first multicentre study published in Israel [1]. Compared with the gold (reference) standard, i.e., colonoscopy, CCE was first met with scepticism due to its disadvantages, including extensive
bowel preparation to achieve a reasonable polyp detection rate (PDR), high cost, and inability to perform biopsy or therapy (e.g., polypectomy). Even though the PillCam COLON 2 was upgraded to allow panoramic views in 2011, the uptake of CCE in the UK could have been better. However, following the impact of the recent COVID pandemic, an NHS Scotland evaluation demonstrated that the technology could relieve the backlog in endoscopy on a national level. Still, CCE generates a video containing more than 50,000 images; this can be time-consuming and inefficient to analyse [2,3]. As a result, advances in AI applications for image analysis have made AI-assisted CCE video analysis one of the most active research areas.

Nowadays, it is broadly accepted that the rate of data generation is beyond the human cognitive capacity to analyse and manage effectively. Therefore, AI will likely soon have a complementary role in delivering healthcare services. Nonetheless, due to the complexity of machine learning (ML), mastering the concepts of AI remains challenging for clinicians [4–7].
Robust research into computer-aided detection (CAD) in upper and lower gastrointestinal (GI) endoscopy images has demonstrated encouraging results in recent years [8,9]. The success also became visible in the wireless capsule endoscopy (WCE) field, where an early CAD model on WCE showed a sensitivity of 88.2%, specificity of 90.9%, and accuracy of 90.8% in detecting erosions and ulcerations, with evidence of relieving the reader's overall workload and reading time [10,11]. This revived interest is also being transferred into the CCE world. AI started to be used for various tasks and achieved the first remarkable results: a recent meta-analysis showed that the sensitivities were 86.5–95.5% for bowel cleansing assessment and 47.4–98.1% for the detection of colorectal cancer (CRC) and polyps [12,13].

Understandably, the predominant focus in the literature is on the evidence around the accuracy of these AI models in CCE, as the authors' goal was undoubtedly to build trust around artificial intelligence in the clinical world. However, to encourage the adoption of CCE AI technology in a clinical setting, understanding the "how" in addition to any data-driven evidence is essential to build that trust among clinical professionals. Therefore, this paper uses a different approach to bridge the gap between recognition and trust. We first simplify technical terms and then focus on how existing evidence of AI in CCE shows its impact on diagnostic pathways. We also highlight the unintended consequences of using AI, potential flaws, and bias within clinical settings.

The ultimate aim is a seamless collaboration of medical professionals and computer scientists to translate prototype AI solutions more quickly into valuable clinical tools.
2. AI Terminology and the Concept of Machine Learning

2.1. The Difference between Machine Learning and Artificial Intelligence

The public often uses AI interchangeably with machine learning (ML), which refers to using computers to model intelligent behaviour that can perform tasks. However, AI is commonly defined as the development of machine capabilities to simulate natural intelligence by performing tasks that usually require human input.

On the other hand, ML is a subfield of AI that uses a combination of mathematical and statistical modelling techniques, utilising a variety of algorithms to learn and automatically improve the prediction of the result. It aims to build mathematical models based on the given data that have predictive power over new and unseen data [14]. The difference is that AI relates to creating intelligent models that can simulate human cognition, whereas ML is the subset of AI that allows models to learn from data.

2.2. Terminology in Machine Learning

To understand and apply the complex technical science of ML in CCE, it is essential to start by addressing the terminology gap between computing engineering and medicine. This could ensure that all the concepts are understood correctly. Furthermore, sharing jargon and expertise from both fields can narrow this communication gap. Therefore, we provide the most basic and relevant technical terminology in machine learning for all medical professionals (Table 1).
Table 1. Relevant technical terminology in machine learning.

Artificial intelligence (AI): A technology that enables a machine to simulate a human's natural intelligence and behaviour.

Machine learning (ML): A subfield of AI that focuses on how a computer system develops its intelligence to predict the result of unseen data accurately.

Example: A single pair of input-output data used in training an ML algorithm. It includes paired features and labels. A set of examples forms a dataset.

Features: The input data that are fed into the machine learning system. For example, in CCE, the visual properties of the images (input data) are processed in the form of a collection of numbers to form features for the ML system.

Labels: The precise output data used to compare with the prediction (the predicted output generated by the ML system). In CCE, labels are the annotations of polyps by an expert reader. The AI-predicted results are then verified against these labels.

Prediction: The output data produced from an unseen input by the ML system that has learned from many training samples.

Training loop: A repeated training process that allows sufficient machine training to take place. This is performed on numerous sets of input–output data (examples) in a training set.

Training dataset: A set of examples used by the ML system to learn the function that connects the features to the labels.

Validation dataset: A set of examples that are only used periodically to assess the model and tune the hyperparameter values that were trained on the training set.

Test dataset: A set of examples that the ML system has never been exposed to. It tests the ML system's generalisation performance on unseen data.

Deep learning: A type of machine learning model formed by numerous layers of neural networks, allowing the features to be organised into hierarchical layers. The major difference from traditional machine learning is that the features and relations are learnt directly from the input data (end-to-end learning) to produce the prediction.

Hyperparameters: The parameters used to control the learning procedure and train the model. These are predetermined before training. The selection of hyperparameters includes the size of the sample set and the number of layers in the neural network. The hyperparameter tuning process involves changing the training configurations, and it takes place when the model is evaluated on a validation dataset.

Convolutional neural network (CNN): A type of neural network designed for visual imagery. It uses convolutional filters (kernels) to build a shared-weight architecture that includes layers of fully connected neural networks.

Classification: A form of supervised learning in which the goal of the model is to match the input with predefined categories at the output. For example, a CCE ML algorithm may classify lesions into polyp or cancer, which are predefined categories.

Overfitting: A phenomenon that occurs when the model starts to learn all the detailed features, such as the background noise, and "memorises" the training set (becomes tightly fitted to the training set). This manifests when the error on the validation dataset starts to deteriorate due to poor generalisation to new data (the model only works well on the training examples, as it has memorised all their details).

Underfitting: A phenomenon that occurs when the model cannot obtain a good fit of the trend to the dataset, due to a lack of training or because the model's design is too simple to fit a complex trend.

Regularisation: Techniques used to address overfitting by commanding the model to learn and retain some generalisation during the training procedure.
2.3. Machine Learning

ML is similar to computer programming, as illustrated in Figure 1. The process of transforming the input into the output is known as a function. In computer programming, the programmer encodes the steps, based on rules, into functions to provide an automated output. Manual effort is required to support this rule-based technique.

Figure 1. This shows the process of transforming the input into the output by using a function that programmers in computer science create. In contrast, the machine learning system can learn and develop the function by studying the existing input–output pairs to build an accurate function without relying on the programmer. Examples are included to demonstrate the basic concept of machine learning by using a simple mathematical function.
In contrast, the function correlating the input and output remains unknown in ML. Instead of relying on a programmer to find the function, the ML system can learn and create the function by studying the existing input–output pairs via training. After training the machine learning system on numerous input–output pairs, it will build a function that can accurately process unseen input data (features) into accurately predicted output data (prediction) (Figure 1). Therefore, one of the advantages of using machine learning is that it can learn and develop a tremendously complex function based on the multitude of input–output pairs, which would be impractical or impossible for a programmer to achieve [15].
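As a toy illustration of the idea in Figure 1 (not taken from this paper), the Python sketch below recovers a rule that is hidden from the system, y = 3x + 2, purely from input–output examples, and then predicts an unseen input; the data and the linear model are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))   # inputs (features)
y = 3.0 * x[:, 0] + 2.0                 # outputs (labels): the hidden rule y = 3x + 2

# The ML system never sees the rule, only the examples; least squares
# finds the weight w and bias b that best map inputs to outputs.
A = np.hstack([x, np.ones_like(x)])     # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(w, 2), round(b, 2))         # ~3.0 and ~2.0, learned from examples alone
print(round(w * 7.5 + b, 2))            # prediction for an unseen input: ~24.5
```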
An equivalent analogy would be the process of learning how to drive a manual car. First, the learner is taught the basic principles, including highway codes, gear shifting, driving, parking, etc. (examples). For example, when starting a car from a still to a moving position, the driver must shift the gear and apply pressure on the gas pedal (this is the essential function learned from examples). Then, the learner, taught by the expert instructor, repeatedly trains in various pre-planned weather conditions, roads, and roundabouts (learning and improving the function) during their training (training loops). Once these basic skills and principles are discovered, the new driver can drive different types of cars on any previously unseen roads and areas (new unrecognised input) with further assistance from the driving instructor (validation sets). The driving skills will continue to improve until they are adequate for the driving test (e.g., the test dataset). When more types of different roads, roundabouts, and countryside roads are explored through the driving process, the driving skills improve continuously (exposure to an extensive dataset to improve the overall function in retraining after the test dataset).

The ML algorithm used in CCE is predominantly supervised learning, where the input has been prelabelled. Using CCE as an example, the ML algorithm can produce a precise mapping function to accurately identify polyps by using these prelabelled inputs, colon capsule polyp images, and the paired outputs labelled as "polyp" (Figure 1).
2.4. Data Types: Structured and Unstructured Data

Data used and stored in our healthcare system comprise various formats, for example, graphics, laboratory values, and free-text medical summaries. Data are separated into structured data and unstructured data. Structured data are stored and organised in a well-defined manner, often in structured SQL databases, spreadsheets, or lists of numbers or categories (e.g., lists of names, diagnosis coding, hospital numbers, and laboratory values) that can be analysed by using statistical methods (e.g., addressing a regression or classification question).

Unstructured data do not have that predefined structure. Without the structured format, they are stored in their raw, unstructured form, usually in large text files or non-structured datasets. They are also categorised as qualitative data, making them more difficult to collect and analyse. This includes images, audio, video, and free text.

Supervised ML algorithms usually require structured data (e.g., videos in which all images are correctly labelled with all relevant classifiers, such as polyps, diverticula, inflammation, residue, etc.).

If unstructured data are to be used in machine learning, specialised techniques, such as deep learning, that rely on vast amounts of data would be required. However, such data are often unavailable due to confidentiality concerns or the need for more procedures to generate those datasets. Once unstructured data can be used, applications in predictive analytics could benefit the most [16].

Therefore, today, we are looking at using structured data to conduct machine learning and unstructured data to infer from by using the AI system (e.g., by providing an 80% probability of a mucosal structure being a polyp).
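To make the distinction concrete, here is a minimal illustrative sketch (the record fields and values are hypothetical, not drawn from any real system) contrasting a structured record with unstructured raw inputs, and showing how supervised learning pairs a raw input with an expert label.

```python
import numpy as np

# Structured data: predefined fields that slot straight into statistics or ML.
patient_record = {
    "hospital_number": "UHC0001",     # hypothetical identifier
    "diagnosis_code": "K63.5",        # coded diagnosis (colonic polyp)
    "faecal_calprotectin": 42.0,      # laboratory value
}

# Unstructured data: raw content with no predefined schema.
cce_frame = np.zeros((224, 224, 3), dtype=np.uint8)   # a raw RGB video frame
clinic_letter = "Intermittent rectal bleeding for three months ..."

# Supervised learning needs each raw input paired with an expert label:
labelled_example = (cce_frame, "polyp")   # one (features, label) example
```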
2.5. Machine Analysis of the Images

Images are digitalised into one grid of numbers (black-and-white image) or many grids (colour image) (see Appendix A, Figure A1 for a graphical demonstration of the idea). Instead of one grid of numbers representing each pixel, colour images are represented in three grids (red, green, and blue (RGB)) stacked together. Each pixel's magnitude represents the corresponding colour brightness in each grid. In practice, a single 224 × 224 pixel colour image would generate 150,528 numbers, or features, for each image (see Appendix A, Figure A1 for a simplified graphical representation of the concept). This demonstrates the immense volume of data incorporated in all the sequences of images within a colon capsule video, which the AI system will have to process to produce the desired output [10]. To overcome this, instead of using the raw data from the images as the input features, experts adopt a set of hand-crafted features manually engineered for the task. The selected hand-crafted features have an enormous impact on the ML and depend on the task to be addressed. For example, simple features like edges, corners, or colour can be used [17].
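The arithmetic above can be checked directly. The sketch below, a toy example rather than any CCE vendor's pipeline, builds a 224 × 224 colour frame as three stacked grids, confirms the 150,528-number count, and derives a tiny set of hand-crafted features (mean colour and a crude edge measure, both chosen here purely for illustration).

```python
import numpy as np

# One colour frame = three stacked grids (R, G, B) of pixel brightness values.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

print(frame.size)   # 224 * 224 * 3 = 150,528 numbers ("features") per image

# Traditional ML avoids feeding in all 150,528 raw numbers; experts instead
# hand-craft a compact description of the image, e.g.:
mean_rgb = frame.reshape(-1, 3).mean(axis=0)                  # 3 colour features
edge_strength = np.abs(np.diff(frame[..., 0].astype(int), axis=1)).mean()
features = np.append(mean_rgb, edge_strength)                 # 4 features in total
```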
3. The Concept of Neural Network and AI Training

3.1. Neural Network

Due to the large quantity of data, the number of parameters, the spatial information between each pixel, and the complexity of the data structure, deep learning (DL) models were created to organise these complex features into architectural layers called neural network (NN) building blocks.

These networks are made up of numerous neurons, each acting as an individual miniature machine learning system (e.g., a miniature regression model). A set of neurons, which take the same input, is organised into a layer. These neurons process inputs by using linear combination methods, and the layer's parameters generate an output, which is then fed into the next layer. This process is repeated until the final output is delivered, and it is similar to our nervous system [15]. In addition, there are layers between the first input layer and the last output layer, called hidden layers. The number of hidden layers varies depending on the complexity, function, and associated defined output of the neural network [18] (see Figure 2).

Figure 2. A dense neural network demonstrating the layers' architecture compared with the nervous system model.

The main difference between NNs and traditional ML models is that NNs work directly from unstructured, raw data instead of hand-crafted features. Whereas traditional machine learning algorithms require an expert to select the problem's relevant characteristics, NNs can perform the feature engineering task automatically.
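A dense neural network's forward pass can be written in a few lines. The following sketch is a minimal illustration, with randomly initialised weights rather than trained ones: neurons in each layer take the same input, compute a linear combination, and feed the activations on to the next layer.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)   # a common neuron activation

def dense_forward(x, layers):
    """Each layer: every neuron takes the same input vector, computes a
    linear combination (weights @ x + bias), and the activations are fed
    to the next layer, loosely mimicking signals in the nervous system."""
    *hidden, (w_out, b_out) = layers
    for w, b in hidden:
        x = relu(w @ x + b)
    return w_out @ x + b_out    # raw scores from the output layer

rng = np.random.default_rng(2)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),   # hidden layer: 4 inputs -> 8 neurons
    (rng.normal(size=(2, 8)), np.zeros(2)),   # output layer: 8 -> 2 classes
]
scores = dense_forward(rng.normal(size=4), layers)
print(scores)   # two untrained class scores
```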
3.2. Convolutional Neural Network

A convolutional neural network (CNN) is mainly designed to process images, and its application is popular in medical fields such as radiology and endoscopy. It is designed to address two difficulties that a standard neural network encounters when processing images. First, even though a neural network could include and organise many parameters into these dense hierarchical layers, each parameter (neuron) would only be allocated to one or a small number of pixel locations at a time. Given the highly variable positions of the object in a practical image, the number of neurons required is enormous; this inevitably prolongs the processing time considerably. The second issue is that the standard neural network cannot record the spatial information in the image, as it flattens the image (the parameters of each pixel organised in specific spatial orders) into a roll of numbers (a vector) (see the diagram in Appendix A for more information) [18].

Consequently, a CNN uses convolutional layers to resolve these issues by using convolutional filters (kernels) (Figure 3). These filters comprise small groups of parameters that are multiplied and summed in patches. The output of each patch is placed relative to the position of the input patch in a new, smaller grid. For example, an area of interest (e.g., a polyp on a colon capsule image) could be represented by a high-value number on the smaller grid.

Figure 3. This simplified diagram shows how a CNN processes the parameters from an image by using filters (kernels) to condense the parameters into a smaller output, preserving the spatial information and improving the handling speed, as the parameters are analysed in patches rather than individually.
The output of each convolutional layer can be fed into the next layer as an "image input". In this sequence, each pixel in the next convolutional layer represents a patch of pixels inputted from the previous layer. After going through various layers of repeated processing, the CNN will be able to see the overall larger patches of the images and ultimately produce output probabilities of the image category [19].

For example, the pixels in the first layer of the CNN will form basic features, such as small points, lines, and ridges, from the raw pixels of the input image. These features are then combined in the successive few layers, by using kernels, into simple shapes such as circles, squares, and large dots. This process repeats as the input data go deeper into the layers. Suppose a specific combination of shapes or features representing a lesion is present in a deeper layer. In that case, the neurons in that layer will eventually fire the processed features to the final layer, which predicts the class of the object (e.g., a polyp in the image, with a probability score (Figure 4)).

Figure 4. The simplified overall CNN layers in identifying polyps in the CCE video; the flowchart demonstrates how the CNN is used on the colon capsule video to predict, for example, polyps accurately.
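The patch-wise multiply-and-sum that Figures 3 and 4 describe can be written out directly. Below is a minimal, unoptimised convolution sketch (illustrative only; real CNN frameworks use fast, batched implementations), and the kernel values are arbitrary choices made here to show an edge-like response.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small grid of shared parameters (the kernel) over the image;
    each output value is the multiply-and-sum of one input patch, placed
    relative to that patch's position, so spatial layout is preserved."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(3).random((8, 8))   # toy 8 x 8 greyscale image
kernel = np.array([[1.0, -1.0]])                  # responds to horizontal edges
feature_map = convolve2d(image, kernel)           # smaller grid: shape (8, 7)
```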
3.3. The AI Training, Optimisation and Validation Process

The convolutional neural network models, or approximates, an accurate mapping function between the inputs and outputs. This requires a slow process of training.

First, the CNN is given a training dataset, a set of data examples for the model to learn and map the function that correlates the inputs to the outputs. In the training set, the difference in error between the CNN's prediction and the training set's label is computed as "loss". Loss is a numerical value that determines how close the CNN-predicted outputs are to the true outputs. After each run of the same training set, the CNN will update its parameters to reduce the loss; this is called the optimisation step. The CNN will then be evaluated periodically on a validation dataset to assess its performance. It is important to note that the validation dataset is not exposed to the CNN during training and is instead only used for validation without modifying the values of the CNN, i.e., the CNN is not being trained on it.

Hyperparameters are study-specific optional components or parameters in the training programme that trains the model. They are defined manually by the user before the model is trained. They shape the model's behaviour as part of its performance optimisation by impacting its structure or complexity [20]. They are applied in the training loop in the form of different training configurations to tune the model or algorithm being trained. They are subclassified into two types [14]:

1. model hyperparameters that focus on the architecture of the model (e.g., the number of layers in the CNN); and
2. training hyperparameters that determine the training process (e.g., the training rate).
These above steps form a training loop that allows the CNN model to learn generalised and accurate functions from the training sets. At the same time, progress is intermittently validated through the validation datasets. Finally, the model will be examined on a test dataset once the performance is fully optimised and validated. This is an entirely "unseen" set, used at the end of the development of the CNN model to confirm its generalised performance on these final sets of data samples.

In the training loop, the performance of the CNN is assessed by comparing the predicted output produced by the CNN against the true output. A low loss value is desirable in machine training. Therefore, the training loop aims to discover a function with the best-fitted parameters to minimise the loss across all the training datasets. This can be illustrated in a simple statistical linear regression example, as shown in Figure 5 [14].

Figure 5. This uses simple linear regression models to demonstrate high and low loss when comparing the predicted output from AI against the true output.
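Putting the pieces together, the sketch below is a deliberately simplified training loop on a toy linear problem, standing in for a CNN (the data, learning rate, and epoch count are illustrative assumptions): it computes a loss, performs the optimisation step, and periodically checks a validation set without updating the parameters on it.

```python
import numpy as np

rng = np.random.default_rng(4)
x_train, x_val = rng.normal(size=200), rng.normal(size=50)
y_train, y_val = 3 * x_train + 2, 3 * x_val + 2   # true outputs (labels)

w, b, lr = 0.0, 0.0, 0.1    # parameters and a training hyperparameter (rate)
for epoch in range(100):    # the training loop
    pred = w * x_train + b
    err = pred - y_train
    loss = np.mean(err ** 2)               # "loss": prediction vs. true output
    w -= lr * np.mean(2 * err * x_train)   # optimisation step: update the
    b -= lr * np.mean(2 * err)             # parameters to reduce the loss
    if epoch % 20 == 0:
        # periodic check on the validation set; note: no parameter update here
        val_loss = np.mean((w * x_val + b - y_val) ** 2)
        print(epoch, round(float(loss), 4), round(float(val_loss), 4))
```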
3.4. Consequences of Overfitting and Underfitting Data

During machine learning, a balance in the loss needs to be found when conducting a training loop. When the CNN is overtrained (e.g., over an extended training period), it leads to overfitting. This is due to the CNN model memorising irrelevant features, including the background noise from the training set, which is specific to those patients but not relevant to the finding. The overall accuracy on the validation set then starts to deteriorate. The solutions to overcome overfitting include the following (see the sketch after Figure 6 below):

1. the application of a larger dataset, although in medical imaging that might not be possible, or may be very costly;
2. modification of the model to a simpler version; and
3. the utilisation of techniques such as regularisation and data augmentation. These methods empower the AI model to learn and preserve the general observations only, allowing the extrapolation of what it has learned to new, unseen data.

On the other hand, underfitting is equally damaging. It arises when the model fails to capture the underlying function of the data, due to a lack of exposure to the training sets (inadequate training, see Figure 6A) or because of the low complexity of the model (see Figure 6B). Therefore, achieving an appropriate fit remains one of the significant challenges in this field.

Figure 6. Graphs demonstrating overfitting and underfitting. Graph (A) compares the overall error against the number of loops conducted on the training sets. It shows that the error in the validation set uptrends when overfitting occurs, while the error in the training set continues to downtrend as the function memorises all the background noise and nonspecific details in the training sets. Graph (B) demonstrates the underfitted, best-fit, and overfitted concepts by using a simple best-fit trend-lines model.
The final step of training an AI is using a completely new test set to determine the AI model's overall performance. In a classification problem, measures such as sensitivity, specificity, accuracy, and precision are usually used. Moreover, other global measures, such as the receiver operating characteristic (ROC) curve or the area under the ROC curve (AUC), are widely used to compare methods because they do not depend on any threshold.
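These evaluation measures can be computed as follows; the labels and predicted probabilities here are made-up numbers for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = lesion present (expert label)
p_lesion = np.array([0.9, 0.7, 0.4, 0.2, 0.1, 0.35, 0.8, 0.6])  # model outputs

y_pred = (p_lesion >= 0.5).astype(int)        # decisions at one chosen threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # true-positive rate
specificity = tn / (tn + fp)                  # true-negative rate
accuracy = (tp + tn) / len(y_true)

# AUC summarises performance across all thresholds, so it is threshold-free.
auc = roc_auc_score(y_true, p_lesion)
print(sensitivity, specificity, accuracy, auc)
```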
4. Process of Colon Capsule Endoscopy Video Analysis

The American Society for Gastrointestinal Endoscopy (ASGE) and, more recently, its European equivalent (ESGE) suggested credentialing standards and a curriculum for CE reading early on [21]. At the same time, it is known that not only experience in GI endoscopy but also concentration capacity and fatigue can interfere with the outcomes of CE reading [21]. Although there is, to date, no scientific proof or guideline to indicate the optimal way to read a CCE video, reviewing CCE videos poses extra challenges that are absent in small bowel CE (SBCE) reading. Prolonged segmental delays compounded by the to-and-fro, spiralling movement of the capsule in the caecum and proximal colon, and the capsule's bullet-type propulsion in more "muscular" distal colonic segments, combined with the colour and turbidity of the luminal fluid, require time, focused attention, and accurate landmarking for proper evaluation and video review [22].
CCE reading should be performed during protected time slots to maintain high standards and remain a thorough and diligent process, just like any other endoscopic procedure. Admittedly, amassing reading experience can reduce reading times; however, the official time allocated for review/landmarking of a CCE video should be at least 45–65 min for the first/pre-readers and at least 25–35 min for the validators on average [21]. The CCE reading time required depends on the cleansing level, colon anatomy, and transit times. Unfortunately, these factors are not predictable. However, it becomes evident that methods to reduce those times and efforts, such as AI, have to be found to reduce the burden on experts and, more broadly, to adopt CCE.

Without those methods, the first step should be a quick preview of the entire video. This can be done by using a fast-reading (QuickView) mode with both camera heads simultaneously. Next, one should look at the total length of time the capsule needed to go through the colon. Then, the landmarks should be identified (caecum, hepatic and splenic flexures, and rectum/anus/excretion of capsule).

The second step is a proper review of the images. As the colon capsule is designed with two cameras, their feeds are represented by yellow and green. The review starts with one camera alone, either the yellow or the green, followed by the other camera, at a frame rate between 8 and 15 pictures per second. A different approach is advisable if the passage time is short or too long. Often, the capsule tends to stagnate in colonic segments, as the colon's muscular, propulsive mechanism is usually weaker than the small bowel's. If that occurs, the frame rate could be increased. On the contrary, a short video means that the capsule has gone through the colon quickly, and there are fewer frames to view, so the rate of frames per minute could be decreased by using the scroll wheel (scroll button) on the computer mouse. This often happens in the transverse colon, where the passage time can be quick; a lower (pre)reading speed is advised in this segment to avoid missing lesions.

The last step is reporting the findings. A detailed review of the marked suspected lesion images (thumbnails), using white light and virtual chromoendoscopy for characterisation, is performed where applicable. Each relevant image is annotated and attached by using the hospital reporting or documentation system. The report should finalise all the findings, colonic and extracolonic, transit times, significant findings, diagnosis, and recommendations [22,23].

The optimal frame rate for a thorough colon investigation without any risk of missing lesions remains undetermined. Introducing prucalopride as part of the booster regimen to improve the overall procedure completion rate is being examined. This regimen reduces both the transit and reading times. However, it also potentially increases the risk of missing lesions as the capsule speeds through the colon. More robust data on the frame rate or the minimum length of the video are undoubtedly required in future studies [24,25].
5. Evidence-Based Literature Review of AI and CCE

5.1. AI in Colon Capsule Endoscopy in the Literature

AI in colon capsule endoscopy is a new field of interest. Recently, Afonso et al. [26] analysed 24 CCE exams (PillCam COLON 2) performed at a single centre between 2010 and 2020. From these video recordings, 3635 frames of the colonic mucosa were extracted, 770 containing colonic ulcers or erosions and 2865 showing normal colonic mucosa. After optimising the neural architecture of the CNN, their model automatically detected colonic ulcers and erosions with a sensitivity of 90.3%, specificity of 98.8%, and an accuracy of 97.0%. The area under the receiver operating characteristic curve (AUROC) was 0.99. The mean processing time for the validation dataset was 11 s (approximately 66 frames/s).

Saraiva et al. [2] used CCE images to develop a deep learning (DL) tool based on a CNN architecture to detect protruding colonic lesions automatically. A CNN was constructed by using an anonymised database of CCE images collected from 124 patients. This database included images of patients with protruding colonic lesions, normal colonic mucosa, or other pathologic findings. A total of 5715 images (2410 protruding lesions, 3305 normal mucosa or other findings) were extracted for CNN development. The area under the curve (AUC) for detecting protruding lesions was 0.99. The sensitivity, specificity, PPV, and NPV were 90.0%, 99.1%, 98.6%, and 93.2%, respectively. The overall accuracy of the network was 95.3%. This DL algorithm accurately detected protruding lesions in CCE images.

Atsuo Yamada et al. trained a deep CNN system based on a Single Shot MultiBox Detector by using 15,933 CCE images of colorectal neoplasms, such as polyps and cancers [27]. They assessed performance by calculating areas under the receiver operating characteristic curves, along with sensitivities, specificities, and accuracies, by using an independent test set of 4784 images, including 1850 images of colorectal neoplasms and 2934 normal colon images. The AUC for detecting colorectal neoplasia by the AI model was 0.90. The sensitivity, specificity, and accuracy were 79.0%, 87.0%, and 83.9%, respectively, at a probability cut-off of 0.35.

Hiroaki Saito et al. [28] used a database of 30,000 VCE images of protruding lesions from 290 patient examinations to develop a CNN model. The CNN model developed from this database was 90% sensitive and 79% specific when identifying test images containing protruding lesions. In addition, subset analyses evaluating model performance for different lesions demonstrated that the model was 86% sensitive for detecting polyps, 92% sensitive for detecting nodules, 95% sensitive for detecting epithelial-based tumours, 77% sensitive for detecting submucosal lesions, and 94% sensitive for identifying protruding venous structures, such as varices.

Nadimi et al. developed a CNN for the autonomous detection of colorectal polyps; their CNN was an improved version of ZF-Net, using a combination of transfer learning, preprocessing, and data augmentation [29]. They created an image database of 11,300 capsule endoscopy images from a screening population, including colorectal polyps (any size or morphology, N = 4800) and normal mucosa (N = 6500). Their CNN model resulted in an even better performance, with an accuracy of 98.0%, a sensitivity of 98.1%, and a specificity of 96.3%. (See Appendix A, Table A1 for a summary of the above results.)
5.2. AI Assessment of CCE Bowel Cleansing

In a pilot study conducted by Buijs et al., a non-linear index based on a pixel analysis model and a machine learning algorithm based on support vector machines, with four cleanliness classes (unacceptable, poor, fair, and good), were developed to classify the CCE videos of 41 screening participants [30]. The results of both models were compared with cleanliness evaluations by four CCE readers. The ML-based model classified 47% of the videos in agreement with the averaged classification by CCE readers, compared with 32% by the pixel analysis model. In addition, the ML-based model was superior to pixel analysis in classifying bowel cleansing quality, owing to a higher sensitivity to unacceptable and poor cleansing quality.

In another study [31], a CAC score, defined as the colour intensities' red-over-green (R/G) ratio and red-over-brown (R/(R + G)) ratio, was developed. The bowel cleansing evaluation for each CCE frame was defined as either adequately or inadequately cleansed. Four hundred and eight frames were extracted; 216 still frames were included in the R/G set and 192 in the R/(R + G) set. Regarding the R/G ratio, a threshold value of 1.55 was calculated, with a sensitivity of 86.5% and a specificity of 77.7%.

Regarding the R/(R + G) ratio, a threshold value of 0.58 was calculated, with a sensitivity of 95.5% and a specificity of 62.9%. The two proposed CAC scores based on the ratio of colour intensities have high sensitivities for discriminating between "adequately cleansed" and "inadequately cleansed" CCE still frames. Their study showed that CAC scores to assess bowel preparation quality, based on a colour intensity ratio of red and green pixels in still images, are feasible and rapid (see Appendix A, Table A2 for a summary of the above results).
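A sketch of how such colour-intensity scores might be computed from a frame is shown below; the use of channel means and the exact pixel handling are assumptions on our part, and only the thresholds (1.55 and 0.58) come from the study [31].

```python
import numpy as np

def cac_scores(frame):
    """Colour-based cleansing scores for one CCE still frame (H x W x 3, RGB).
    Channel means are an assumption of this sketch; the published method's
    exact pixel handling may differ."""
    r = frame[..., 0].astype(float).mean()
    g = frame[..., 1].astype(float).mean()
    return r / g, r / (r + g)

frame = np.random.default_rng(6).integers(0, 256, (256, 256, 3), dtype=np.uint8)
r_over_g, r_over_r_plus_g = cac_scores(frame)

# Thresholds reported by Becq et al. [31] for "adequately cleansed" frames:
adequate_rg = r_over_g >= 1.55
adequate_rrg = r_over_r_plus_g >= 0.58
```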
6. Challenges of Utilising AI in Endoscopy Video Settings

6.1. Understanding the Input Data Used by the AI

One of the main challenges of deep neural networks is understanding what signals the model has extracted from the input to draw the association between the input data and the predicted output. As the AI creates its own problem-solving methods, the process is entirely independent of the programmer. An example would be utilising AI to predict fractures on ankle X-rays. The AI may correctly predict fractures based on identifying the arrows that the radiographers drew to indicate the area of interest, instead of detecting the discontinuation of the outline of the bone. The model drew a conclusion based on non-medical signals, and the outcome was considered accurate even though the reasoning was entirely incorrect. This is a classic example of accidentally fitting confounders rather than the true signal [32,33].

6.2. The "Black Box" or Uninterpretable AI Algorithm

With the complexity of deep learning neural networks, it is very challenging to interpret the AI's processing methods before it arrives at the final output; this is referred to as the neural network "black box". The more complex the neural network is, the more accurate but less interpretable it becomes. For example, a student could come up with the answer to a mathematical question without showing any working steps. As a result, it is not easy to understand how the student reached the solution, which leads to concern about whether the underlying principles are understood. The lack of clarity and interpretability in these neural networks becomes a significant obstacle in the progression of AI development in the medical field (see Appendix A, Figure A2 for a graphical representation of the concept).

Moreover, poor interpretability implies more challenging adjustments to the model for improvement. To overcome this, approaches such as involving a multidisciplinary team to review the false positive and false negative results predicted by the model, and testing the model on an external database, are adopted [34].
6.3. Poor Differentiation between Correlation and Causation

In addition, the AI model cannot differentiate between correlation and causation in the association between the input and output data. A good example is an AI model correlating the increasing number of drowning cases in a swimming pool with the growing ice cream sales at the entrance in the summer. It concludes that growth in ice cream sales causes an increase in drowning incidents in the swimming pool, when we know that both of these factors correlate with the hot weather in the summer.

6.4. The Importance of Data Quality

In the context of artificial intelligence in CCE, data quality is more important than the neural network algorithm or data engineering techniques. "Garbage in, garbage out" is a phrase commonly used in artificial intelligence engineering. It refers to the fact that the chosen data should be high quality, reliable, consistent, and reproducible. Unfortunately, in CCE, the wide variation among experts in quantifying the quality of bowel preparation and the bubble effect is a good example. The lack of accurate definitions for these components compromises the data quality and remains problematic for AI development in the field of CE.

6.5. Generalisability and "No One Size Fits All"

In addition, sampling strategies and training practices, such as single-institution data, small geographic area sampling, or other approaches, can create unintentional bias and reduce generalisation. For example, a CCE AI developed based on an English population's colon images might not apply to an Asian population. This concept is equivalent to sampling error in statistical terms. Therefore, the feasibility and accuracy of AI in adapting to various medical imaging techniques in diverse geographical and racial populations still require further exploration and examination in future studies.

One of the potential solutions is the possibility of sharing datasets among different countries to contribute to building an AI with a large, heterogeneous, multinational super-algorithm that allows accurate data processing from any dataset in the world. However, the harmonisation of images is similarly essential. Moreover, in the context of multi-institutional data sources, there is a potential risk of variable equipment, protocols, etc., which can equally affect the accuracy of the AI outputs [35].
7. Future of AI in Gastroenterology

Accurately analysing capsule endoscopy is a time-consuming task for clinicians, depending on the comfort and expertise of the reader [35,36]. Using AI can reduce that time by helping clinicians during analysis and can reduce diagnostic errors due to human limitations such as biases and fatigue. This would ultimately leave clinicians more time to focus on training and diagnosing pathologies. This wireless and patient-friendly technique, combined with rapid reading platforms and the help of artificial intelligence, will become an attractive and viable choice to alter how patients are investigated in the future within gastroenterology [37]. With the growth of telemedicine stepped up by the COVID-19 pandemic, a large part of specialist care will continue to be performed remotely. As CCE becomes more established, it has enormous potential in telemedicine settings.

With that in mind, there are concerns about future jobs in the gastroenterology sector being replaced by AI automation. However, instead of job replacement, we anticipate a shift toward job displacement, with more resources focused on the tasks that are not easily automated, such as clinician and patient interaction, more complex procedures, complex decision-making, education, and training. In addition, new jobs or industries, such as medical machine learning engineering, might be required in the future medical health system.

The human–AI partnership would suggest that the machine cannot be used alone. Furthermore, overdependence on AI would undoubtedly lead to deskilling, especially in cognitive work such as polyp detection and recognition. Therefore, the key to integrating AI into gastroenterology should be balancing AI automation with the personal care we value for our patients, to provide an efficient and cost-effective endoscopy service in the future [38–40].
8. Conclusions

In the future, AI is expected to offer multiple beneficial applications in GI disease risk stratification, lesion recognition and assessment, diagnosis, and treatment. The progress in the last decade suggests that AI-aided CCE will be available soon and will radically transform medical practice and patient care. Understanding the fundamentals and the basic concepts of machine learning technology will not only strengthen trust in AI among clinical professionals but also prevent unintended pitfalls in AI applications in future clinical practice. This may allow future AI refinement or optimisation with a multidisciplinary team approach.

With the current ethical uncertainty and challenges, future multicentre, randomised trials, which validate AI models, should focus on answering the fundamental question of whether AI models can enhance physician performance safely and reliably. In the end, a robust multidisciplinary collaboration among physicians, computer scientists, and entrepreneurs is required to promote AI's clinical use in medical practice [38–40].
Author Contributions: Conceptualization, I.I.L., E.W. and G.J.N.; validation, S.S., A.W., H.W., A.K., A.J.M.W. and R.P.A.; literature review, I.I.L. and G.J.N.; resources, S.S.; writing—original draft preparation, I.I.L., E.W., A.K. and G.J.N.; writing—review and editing, I.I.L., S.S., A.W., H.W., A.J.M.W., A.K. and R.P.A.; visualisation, I.I.L. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest.
Appendix A

Table A1. Summary of CNN performance for detection of colonic lesions.

Study | No. of Images | Colonic Lesion | Normal Colonic Mucosa | Sensitivity | Specificity | Accuracy of the Network | AUROC for Detection of Protruding Lesion
Afonso [26] | 3635 | 770 | 2865 | 90.3% | 98.8% | 97.0% | 0.99
Saraiva [2] | 5715 | 2410 | 3305 | 90.0% | 99.1% | 95.3% | 0.99
Atsuo Yamada [27] | 4784 | 1850 | 2934 | 79.0% | 87.0% | n/a | 0.902
Hiroaki Saito [28] | 17,507 | 7507 | 10,000 | 90.0% | 79.0% | n/a | 0.911
Nadimi, E.S. [29] | 1695 | 4800 | 6500 | 98.1% | 96.3% | 98.0% | n/a
Table A2. Summary of the two studies on AI assessment of CCE bowel cleanliness.

Study | Type of AI | Number of Videos/Frames Analysed | Level of Agreement of AI with Readers, % | Sensitivity | Specificity
Buijs [30] | Non-linear index model | 41 videos | 32% | n/a | n/a
Buijs [30] | SVM model | 41 videos | 47% | n/a | n/a
Becq [31] | R/G ratio | 216 frames | n/a | 86.5% | 77.7%
Becq [31] | R/(R + G) ratio | 192 frames | n/a | 95.5% | 62.9%
Figure A1. An example of mapping locations in the images to the pixel values as part of machine analysis of the picture.

Figure A2. The intrinsic method or behaviour of the AI code in the model is uninterpretable, like a black box with no transparency.
References

1. Bejnordi, B.E.; Veta, M.; Van Diest, P.J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.W.M.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA 2017, 318, 2199–2210. https://doi.org/10.1001/jama.2017.14585.
2. Saraiva, M.M.; Ferreira, J.P.S.; Cardoso, H.; Afonso, J.; Ribeiro, T.; Andrade, P.; Parente, M.P.L.; Jorge, R.N.; Macedo, G. Artificial intelligence and colon capsule endoscopy: Development of an automated diagnostic system of protruding lesions in colon capsule endoscopy. Tech. Coloproctol. 2021, 25, 1243–1248. https://doi.org/10.1007/s10151-021-02517-5.
3. Bjørsum-Meyer, T.; Koulaouzidis, A.; Baatrup, G. Comment on 'Artificial intelligence in gastroenterology: A state-of-the-art review'. World J. Gastroenterol. 2022, 28, 1722–1724.
4. Robertson, A.R.; Segui, S.; Wenzek, H.; Koulaouzidis, A. Artificial intelligence for the detection of polyps or cancer with colon capsule endoscopy. Ther. Adv. Gastrointest. Endosc. 2021, 14, 26317745211020277. https://doi.org/10.1177/26317745211020277.
5. Dray, X.; Iakovidis, D.; Houdeville, C.; Jover, R.; Diamantis, D.; Histace, A. Artificial intelligence in small bowel capsule endoscopy—Current status, challenges and future promise. J. Gastroenterol. Hepatol. 2021, 36, 12–19.
6. Qin, K.; Li, J.; Fang, Y.; Xu, Y.; Wu, J.; Zhang, H.; Li, H.; Liu, S.; Li, Q. Convolution neural network for the diagnosis of wireless capsule endoscopy: A systematic review and meta-analysis. Surg. Endosc. 2021, 36, 16–31. https://doi.org/10.1007/s00464-021-08689-3.
7. Soffer, S.; Klang, E.; Shimon, O.; Nachmias, N.; Eliakim, R.; Ben-Horin, S.; Kopylov, U.; Barash, Y. Deep learning for wireless capsule endoscopy: A systematic review and meta-analysis. Gastrointest. Endosc. 2020, 92, 831–839.e8. https://doi.org/10.1016/j.gie.2020.04.039.
8. Horie, Y.; Yoshio, T.; Aoyama, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Hirasawa, T.; Tsuchida, T.; Ozawa, T.; Ishihara, S.; et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest. Endosc. 2019, 89, 25–32.
9. Wang, P.; Berzin, T.M.; Glissen Brown, J.R.; Bharadwaj, S.; Becq, A.; Xiao, X.; Liu, P.; Li, L.; Song, Y.; Zhang, D.; et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: A prospective randomised controlled study. Gut 2019, 68, 1813–1819.
10. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363.e2. https://doi.org/10.1016/j.gie.2018.10.027.
11. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Fujisawa, G.; Odawara, N.; Kondo, R.; Tsuboi, A.; Ishibashi, R.; Nakada, A.; et al. Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading. Dig. Endosc. 2019, 32, 585–591. https://doi.org/10.1111/den.13517.
12. Moen, S.; Vuik, F.E.R.; Kuipers, E.J.; Spaander, M.C.W. Artificial Intelligence in Colon Capsule Endoscopy—A Systematic Review. Diagnostics 2022, 12, 1994. https://doi.org/10.3390/diagnostics12081994.
13. Min, J.K.; Kwak, M.S.; Cha, J.M. Overview of Deep Learning in Gastrointestinal Endoscopy. Gut Liver 2019, 13, 388–393. https://doi.org/10.5009/gnl18384.
14. Van der Sommen, F.; de Groof, J.; Struyvenberg, M.; van der Putten, J.; Boers, T.; Fockens, K.; Schoon, E.J.; Curvers, W.; de With, P.; Mori, Y.; et al. Machine learning in GI endoscopy: Practical guidance in how to interpret a novel field. Gut 2020, 69, 2035–2045.
15. Deo, R.C. Machine Learning in Medicine. Circulation 2015, 132, 1920.
16. Tayefi, M.; Ngo, P.; Chomutare, T.; Dalianis, H.; Salvi, E.; Budrionis, A.; Godtliebsen, F. Challenges and opportunities beyond structured data in analysis of electronic health records. WIREs Comput. Stat. 2021, 13, e1549. https://doi.org/10.1002/wics.1549.
17. Cumberlin, R.L.; Rodgers, J.E.; Fahey, F.H. Digital image processing of radiation therapy portal films. Comput. Med. Imaging Graph. 1989, 13, 227–233. https://doi.org/10.1016/0895-6111(89)90129-8.
18. Yu, H.; Samuels, D.C.; Zhao, Y.Y.; Guo, Y. Architectures and accuracy of artificial neural network for disease classification from omics data. BMC Genom. 2019, 20, 167. https://doi.org/10.1186/s12864-019-5546-z.
19. Missert, A.D.; Yu, L.; Leng, S.; Fletcher, J.G.; McCollough, C.H. Synthesizing images from multiple kernels using a deep convolutional neural network. Med. Phys. 2019, 47, 422–430. https://doi.org/10.1002/mp.13918.
20. Luo, G. A review of automatic selection methods for machine learning algorithms and hyper-parameter values. Netw. Model. Anal. Health Inform. Bioinform. 2016, 5, 18. https://doi.org/10.1007/s13721-016-0125-6.
21. Faigel, D.O.; Baron, T.H.; Adler, D.G.; Davila, R.E.; Egan, J.; Hirota, W.K.; Jacobson, B.C.; Leighton, J.A.; Qureshi, W.; Rajan, E.; et al. ASGE guideline: Guidelines for credentialing and granting privileges for capsule endoscopy. Gastrointest. Endosc. 2005, 61, 503–505. https://doi.org/10.1016/s0016-5107(04)02781-6.
22. Beg, S.; Card, T.; Sidhu, R.; Wronska, E.; Ragunath, K.; Ching, H.L.; Koulaouzidis, A.; Yung, D.; Panter, S.; Mcalindon, M.; et al. The impact of reader fatigue on the accuracy of capsule endoscopy interpretation. Dig. Liver Dis. 2021, 53, 1028–1033. https://doi.org/10.1016/j.dld.2021.04.024.
23. Koulaouzidis, A.; Dabos, K.; Philipper, M.; Toth, E.; Keuchel, M. How should we do colon capsule endoscopy reading: A practical guide. Ther. Adv. Gastrointest. Endosc. 2021, 14, 26317745211001983. https://doi.org/10.1177/26317745211001983.
24. El Hajjar, A.; Rey, J.F. Artificial intelligence in gastrointestinal endoscopy: General overview. Chin. Med. J. 2020, 133, 326.
25. Pan, G.; Yan, G.; Qiu, X.; Cui, J. Bleeding Detection in Wireless Capsule Endoscopy Based on Probabilistic Neural Network. J. Med. Syst. 2010, 35, 1477–1484. https://doi.org/10.1007/s10916-009-9424-0.
26. Mascarenhas, M.; Afonso, J.; Ribeiro, T.; Cardoso, H.; Andrade, P.; Ferreira, J.P.S.; Saraiva, M.M.; Macedo, G. Performance of a Deep Learning System for Automatic Diagnosis of Protruding Lesions in Colon Capsule Endoscopy. Diagnostics 2022, 12, 1445. https://doi.org/10.3390/diagnostics12061445.
27. Yamada, A.; Niikura, R.; Otani, K.; Aoki, T.; Koike, K. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy 2021, 53, 832–836.
28. Saito, H.; Aoki, T.; Aoyama, K.; Kato, Y.; Tsuboi, A.; Yamada, A.; Fujishiro, M.; Oka, S.; Ishihara, S.; Matsuda, T.; et al. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2020, 92, 144–151.e1. https://doi.org/10.1016/j.gie.2020.01.054.
29. Nadimi, E.S.; Buijs, M.M.; Herp, J.; Kroijer, R.; Kobaek-Larsen, M.; Nielsen, E.; Pedersen, C.D.; Blanes-Vidal, V.; Baatrup, G. Application of deep learning for autonomous detection and localization of colorectal polyps in wireless colon capsule endoscopy. Comput. Electr. Eng. 2019, 81, 106531. https://doi.org/10.1016/j.compeleceng.2019.106531.
30. Buijs, M.M.; Ramezani, M.H.; Herp, J.; Kroijer, R.; Kobaek-Larsen, M.; Baatrup, G.; Nadimi, E.S. Assessment of bowel cleansing quality in colon capsule endoscopy using machine learning: A pilot study. Endosc. Int. Open 2018, 6, E1044–E1050. https://doi.org/10.1055/a-0627-7136.
31. Becq, A.; Histace, A.; Camus, M.; Nion-Larmurier, I.; Ali, E.A.; Pietri, O.; Romain, O.; Chaput, U.; Li, C.; Marteau, P.; et al. Development of a computed cleansing score to assess quality of bowel preparation in colon capsule endoscopy. Endosc. Int. Open 2018, 6, E844–E850. https://doi.org/10.1055/a-0577-2897.
32. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. https://doi.org/10.1186/s12916-019-1426-2.
33. Meskó, B.; Görög, M. A short guide for medical professionals in the era of artificial intelligence. NPJ Digit. Med. 2020, 3, 126.
34. Ting, D.S.W.; Pasquale, L.R.; Peng, L.; Campbell, J.P.; Lee, A.Y.; Raman, R.; Tan, G.S.W.; Schmetterer, L.; Keane, P.A.; Wong, T.Y.; et al. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol. 2019, 103, 167–175.
35. Glocker, B.; Robinson, R.; Castro, D.C.; Dou, Q.; Konukoglu, E. Machine Learning with Multi-Site Imaging Data: An Empirical Study on the Impact of Scanner Effects. arXiv 2019, arXiv:1910.04597.
36. Zheng, Y.P.; Hawkins, L.; Wolff, J.; Goloubeva, O.; Goldberg, E. Detection of lesions during capsule endoscopy: Physician performance is disappointing. Am. J. Gastroenterol. 2012, 107, 554–560.
37. Chetcuti Zammit, S.; Sidhu, R. Capsule endoscopy—Recent developments and future directions. Expert Rev. Gastroenterol. Hepatol. 2021, 15, 127–137.
38. Chen, M.; Decary, M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc. Manag. Forum 2019, 33, 10–18. https://doi.org/10.1177/0840470419873123.
39. Parasher, G.; Wong, M.; Rawat, M. Evolving role of artificial intelligence in gastrointestinal endoscopy. World J. Gastroenterol. 2020, 26, 7287–7298. https://doi.org/10.3748/wjg.v26.i46.7287.
40. Dinga, R.; Penninx, B.W.; Veltman, D.J.; Schmaal, L.; Marquand, A.F. Beyond accuracy: Measures for assessing machine learning models, pitfalls and guidelines. bioRxiv 2019, 743138. https://doi.org/10.1101/743138.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
... The stages of integrating artificial intelligence (AI) into the CCE process detailed by Robertson et al. propose a scenario where AI could initially replace the pre-reading process and eventually take on the responsibility for analysing the entire CCE video prior to the clinician's validation and reporting. [12][13][14] This proposal comprises of different stages. It suggested that the clinician would be presented with all the polyps' images without a comprehensive review of the whole CCE video. ...
Article
Full-text available
Plain language summary Creating criteria and standards for matching polyps (abnormal growth in the bowels) on colon capsule video analysis: an international expert agreement using the RAND (modified Delphi process) process Background: Doctors often use colon capsule endoscopy (CCE), a high-tech capsule with two cameras, to record and check for diseases in the small and large bowels as the capsule travels through the intestines. One of the most common conditions in the large bowel is polyps, which are abnormal growths in the lining of the bowel. Comparing and matching polyps in the same video from the capsule can be tricky as they look very similar, leading to the possibility of incorrectly reporting the same polyp twice or more. This can lead to wrong results and inaccuracies. The literature did not have any criteria or standards for matching polyps in CCE before. Aim: Using the RAND/UCLA (modified Delphi) process, this study aims to identify the key factors or components used to match polyps within a CCE video. The goal is to explore each factor and create complete criteria for polyp matching based on the agreement from international experts. Method: A group of 11 international CCE experts came together to evaluate a survey with 60 statements. They anonymously rated each statement on a scale from 1 to 9 (1-3: inappropriate, 4-6: uncertain, and 7-9: appropriate). After discussing the Round 1 results virtually, a Round 2 survey with the same but revised questions was created and completed before the final analysis of their agreement. Results: The main factors for matching polyps are 1) the timing when the polyp was seen, 2) where it is in the bowel, 3) its blood vessel pattern, 4) size, 5) the timing of its appearance between cameras, 6) surrounding tissue features, 7) its shape, and 8) surface features. If five or more of these factors match, the compared polyps are likely the same. Conclusion: This study establishes the first complete criteria for matching polyps in CCE. While it may not provide a definitive solution for matching challenging and small polyps, these criteria serve as a guide to help and make the process of polyp matching easier.
... Mitigating this issue is possible by amplifying the sample size. Nevertheless, data sourced from a singular agency or sampled from a restricted geographical region may inadvertently introduce bias and compromise generalizability 177 . Consequently, to successfully deploy AI in autonomous wound detection in a large-scale clinical setting, it becomes indispensable to amalgamate data from varying regions and ethnicities to enable data sharing. ...
Article
Full-text available
Wireless capsule endoscopy (WCE) offers a non-invasive evaluation of the digestive system, eliminating the need for sedation and the risks associated with conventional endoscopic procedures. Its significance lies in diagnosing gastrointestinal tissue irregularities, especially in the small intestine. However, existing commercial WCE devices face limitations, such as the absence of autonomous lesion detection and treatment capabilities. Recent advancements in micro-electromechanical fabrication and computational methods have led to extensive research in sophisticated technology integration into commercial capsule endoscopes, intending to supersede wired endoscopes. This Review discusses the future requirements for intelligent capsule robots, providing a comparative evaluation of various methods’ merits and disadvantages, and highlighting recent developments in six technologies relevant to WCE. These include near-field wireless power transmission, magnetic field active drive, ultra-wideband/intrabody communication, hybrid localization, AI-based autonomous lesion detection, and magnetic-controlled diagnosis and treatment. Moreover, we explore the feasibility for future “capsule surgeons”.
... AI systems have been trained to assess disease severity in inflammatory bowel disease (IBD) (9). Research on the use of computer assistance in colon capsule endoscopy are currently in its preliminary stage of evaluation (10). ...
Article
Full-text available
The purpose of this article is to provide an overview of white light colon capsule endoscopy’s current clinical application, concentrating on its most recent developments. Second-generation colon capsule endoscopy (CCE2) is approved by the FDA for use as an adjunctive test in patients with incomplete colonoscopy and within Europe in patients at average risk, those with incomplete colonoscopies or those unwilling to undergo conventional colonoscopies. Since the publication of European Society of GI Endoscopy guidelines on the use of CCE, there has been a significant increase in comparative studies on the diagnostic yield of CCE. This paper discusses CCE2 in further detail. It explains newly developed colon capsule system and the current status on the use of CCE, it also provides a comprehensive summary of systematic reviews on the implementation of CCE in colorectal cancer screening from a methodological perspective. Patients with ulcerative colitis can benefit from CCE2 in terms of assessing mucosal inflammation. As part of this review, performance of CCE2 for assessing disease severity in ulcerative colitis is compared with colonoscopy. Finally, an assessment if CCE can become a cost-effective clinical service overall.
Article · Full-text available
Background and aims: The applicability of colon capsule endoscopy in daily practice is limited by the accompanying labor-intensive reviewing time and the risk of inter-observer variability. Automated reviewing of colon capsule endoscopy images using artificial intelligence could save time while providing an objective and reproducible outcome. This systematic review aims to provide an overview of the available literature on artificial intelligence for reviewing colonic mucosa by colon capsule endoscopy and to assess the necessary action points for its use in clinical practice. Methods: A systematic search of literature published up to January 2022 was conducted using Embase, Web of Science, OVID MEDLINE and Cochrane CENTRAL. Studies reporting on the use of artificial intelligence to review second-generation colon capsule endoscopy colonic images were included. Results: 1017 studies were evaluated for eligibility, of which nine were included. Two studies reported on computed bowel cleansing assessment, five on computed polyp or colorectal neoplasia detection, and two on other applications. Overall, the sensitivity of the proposed artificial intelligence models was 86.5–95.5% for bowel cleansing and 47.4–98.1% for the detection of polyps and colorectal neoplasia. Two studies performed per-lesion analysis in addition to per-frame analysis, which improved the sensitivity of polyp or colorectal neoplasia detection to 81.3–98.1% (the sketch below illustrates this aggregation step). By applying a convolutional neural network, the highest sensitivity of 98.1% for polyp detection was found. Conclusion: The use of artificial intelligence for reviewing second-generation colon capsule endoscopy images is promising. The highest sensitivity of 98.1% for polyp detection was achieved by deep learning with a convolutional neural network. Convolutional neural network algorithms should be optimized and tested with more data, possibly requiring the set-up of a large international colon capsule endoscopy database. Finally, the accuracy of the optimized convolutional neural network models needs to be confirmed in a prospective setting.
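To make per-lesion analysis concrete, here is a hedged Python sketch, an illustrative simplification rather than any study's code, in which a lesion counts as detected when at least one of its frames is flagged by the model. The frame-to-lesion mapping and the 0.5 score threshold are assumptions for the example.

```python
# Hedged sketch of per-lesion analysis: a lesion counts as detected if the
# model flags at least one of the frames showing it. The mapping of frames
# to lesions and the threshold below are illustrative assumptions.

from collections import defaultdict

def per_lesion_sensitivity(frame_scores, frame_to_lesion, threshold=0.5):
    """frame_scores: {frame_id: model probability of a polyp}.
    frame_to_lesion: {frame_id: lesion_id} for frames containing a true lesion.
    Returns the fraction of distinct lesions with at least one flagged frame."""
    detected = defaultdict(bool)
    for frame_id, lesion_id in frame_to_lesion.items():
        if frame_scores.get(frame_id, 0.0) >= threshold:
            detected[lesion_id] = True
    lesions = set(frame_to_lesion.values())
    return sum(detected[l] for l in lesions) / len(lesions) if lesions else 0.0

# A lesion seen in frames 1-3 counts as detected even if only frame 2 scores
# highly, which is why per-lesion sensitivity exceeds per-frame sensitivity.
scores = {1: 0.2, 2: 0.9, 3: 0.4, 4: 0.1}
truth = {1: "lesionA", 2: "lesionA", 3: "lesionA", 4: "lesionB"}
print(per_lesion_sensitivity(scores, truth))  # 0.5
```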
Article · Full-text available
Background: Colon capsule endoscopy (CCE) is an alternative for patients unwilling to undergo conventional colonoscopy or with contraindications for it. Colorectal cancer screening may benefit greatly from widespread acceptance of a non-invasive tool such as CCE. However, reviewing CCE exams is a time-consuming process, with a risk of overlooking important lesions. We aimed to develop an artificial intelligence (AI) algorithm using a convolutional neural network (CNN) architecture for automatic detection of colonic protruding lesions in CCE images. An anonymized database of CCE images collected from a total of 124 patients was used. This database included images of patients with colonic protruding lesions or patients with normal colonic mucosa or with other pathologic findings. A total of 5715 images were extracted for CNN development. Two image datasets were created and used for training and validation of the CNN. The AUROC for detection of protruding lesions was 0.99. The sensitivity, specificity, PPV and NPV were 90.0%, 99.1%, 98.6% and 93.2%, respectively. The overall accuracy of the network was 95.3%. The developed deep learning algorithm accurately detected protruding lesions in CCE images. The introduction of AI technology to CCE may increase its diagnostic accuracy and acceptance for screening of colorectal neoplasia.
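As a reminder of how such figures are derived, the following minimal Python sketch computes sensitivity, specificity, PPV, NPV and accuracy from a confusion matrix. The counts in the example are illustrative only, not the study's data; they are chosen so that sensitivity and specificity echo the figures above.

```python
# Minimal sketch relating the reported per-frame metrics to a confusion
# matrix. The counts in the example are illustrative, not the study's data.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # recall on lesion frames
        "specificity": tn / (tn + fp),   # recall on normal frames
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Sensitivity 0.90 and specificity ~0.991, echoing the abstract; PPV and NPV
# additionally depend on how many lesion frames the test set contains.
print(classification_metrics(tp=90, fn=10, tn=991, fp=9))
```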
Article · Full-text available
Colon capsule endoscopy (CCE) was introduced nearly two decades ago. Initially, it was limited by poor image quality and short battery time, but owing to technical improvements, it has become an equal diagnostic alternative to optical colonoscopy (OC). Hastened by the coronavirus disease 2019 pandemic, CCE has been introduced in clinical practice to relieve overburdened endoscopy units and move investigations to out-patient clinics. Wider adoption of CCE would be bolstered by positive patient experience, as it offers a diagnostic investigation that is not inferior to other modalities. The shortcomings of CCE include its inability to differentiate adenomatous polyps from hyperplastic polyps; solving this issue would improve the stratification of patients for polyp removal. Artificial intelligence (AI) has shown promising results in polyp detection and characterization, minimizing incomplete CCEs and avoiding needless examinations. Onboard AI appears to be a necessary application to enable near-real-time decision-making, diminishing patient waiting times and avoiding superfluous subsequent OCs. With this letter, we discuss the potential and role of AI in CCE as a diagnostic tool for the large bowel.
Article · Full-text available
Background: Wireless capsule endoscopy (WCE) is considered to be a powerful instrument for the diagnosis of intestinal diseases. The convolutional neural network (CNN) is a type of artificial intelligence that has the potential to assist the detection of abnormalities in WCE images. We aimed to perform a systematic review of current research progress on CNN applications in WCE. Methods: A search in PubMed, SinoMed, and Web of Science was conducted to collect all original publications about CNN implementation in WCE. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 checklist. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I² was used for the evaluation of heterogeneity. Results: 16 articles with 23 independent studies were included. CNN applications in WCE were divided into detection of erosion/ulcer, gastrointestinal bleeding (GI bleeding), and polyps/cancer. The pooled sensitivity of CNN was 0.96 (95% CI 0.91–0.98) for erosion/ulcer, 0.97 (95% CI 0.93–0.99) for GI bleeding, and 0.97 (95% CI 0.82–0.99) for polyps/cancer. The corresponding specificity was 0.97 (95% CI 0.93–0.99) for erosion/ulcer, 1.00 (95% CI 0.99–1.00) for GI bleeding, and 0.98 (95% CI 0.92–0.99) for polyps/cancer. Conclusion: Based on our meta-analysis, CNN-dependent diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer achieved high-level performance owing to its high sensitivity and specificity. Looking ahead, CNN therefore has the potential to become an important assistant in WCE diagnosis.
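The review pooled estimates with an exact binomial bivariate mixed-effects model, which is beyond a short sketch. The simplified Python sketch below instead uses textbook inverse-variance pooling of logit sensitivities with Cochran's Q and I², shown only to illustrate the underlying idea; the per-study counts are invented for the example.

```python
# Hedged, simplified sketch of meta-analytic pooling. The cited review used
# an exact binomial bivariate mixed-effects model; inverse-variance logit
# pooling with Cochran's Q and I^2, as below, is a common textbook
# approximation shown only for illustration. Study counts are made up.

import math

def pooled_logit_sensitivity(studies):
    """studies: list of (tp, fn) pairs per study.
    Returns (pooled sensitivity, I^2 heterogeneity in %)."""
    logits, weights = [], []
    for tp, fn in studies:
        p = tp / (tp + fn)
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / (1.0 / tp + 1.0 / fn))  # inverse variance, logit scale
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logits))  # Cochran's Q
    i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0
    return 1 / (1 + math.exp(-pooled)), i2

sens, i2 = pooled_logit_sensitivity([(95, 5), (180, 12), (48, 2)])
print(f"pooled sensitivity {sens:.3f}, I2 {i2:.1f}%")
```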
Article · Full-text available
Colorectal cancer is common and can be devastating, with long-term survival rates vastly improved by early diagnosis. Colon capsule endoscopy (CCE) is increasingly recognised as a reliable option for colonic surveillance, but widespread adoption has been slow for several reasons, including the time-consuming reading process of the CCE recording. Automated image recognition and artificial intelligence (AI) are appealing solutions in CCE. Through a review of the currently available and developmental technologies, we discuss how AI is poised to deliver at the forefront of CCE in the coming years. Current practice for CCE reporting often involves a two-step approach, with a 'pre-reader' and 'validator'. This requires skilled and experienced readers with a significant time commitment, so CCE is well-positioned to reap the benefits of ongoing digital innovation. This is likely to begin with an automated AI check of finished CCE evaluations as a quality control measure. Once deemed reliable, AI could be used in conjunction with a 'pre-reader', before adopting more of that role itself by sending provisional results and abnormal frames to the validator. With time, AI would be able to evaluate the findings more thoroughly and reduce the input required from human readers, ultimately autogenerating a highly accurate report and, where required, a recommendation of therapy for any pathology identified. As with many medical fields reliant on image recognition, AI will be a welcome aid in CCE: initially as an adjunct to 'double-check' that nothing has been missed, but with time hopefully leading to a faster, more convenient diagnostic service for the screening population.
Article · Full-text available
In this article, we aim to provide general principles as well as personal views for colonic capsule endoscopy. To allow an in-depth understanding of the recommendations, we also present basic technological characteristics and specifications, with emphasis on the current as well as the previous version of colonic capsule endoscopy and the relevant software. To date, there is no scientific proof to support an optimal way of reading a colonic capsule endoscopy video, nor do any standards or guidelines exist. Hence, any advice is a mixture of the capsule manufacturer's recommendations and expert opinion. Furthermore, there is a paucity of data regarding the use of the terms 'pre-reader' and 'reader-validator' in colonic capsule endoscopy. We also include a couple of handy tables to provide information at a glance.
Article · Full-text available
Electronic health records (EHRs) contain a wealth of valuable information about individual patients and the whole population. Besides structured data, unstructured data in EHRs can provide extra, valuable information, but the analytic processes are complex, time-consuming, and often require excessive manual effort. Among unstructured data, clinical text and images are the two most popular and important sources of information. Advanced statistical algorithms in natural language processing, machine learning, deep learning, and radiomics have increasingly been used for analyzing clinical text and images. Although many challenges remain that can hinder the use of unstructured data, there are clear opportunities for well-designed diagnosis and decision support tools that efficiently incorporate both structured and unstructured data to extract useful information and provide better outcomes. However, access to clinical data is still very restricted due to data sensitivity and ethical issues. Data quality is another important challenge, requiring methods to improve data completeness, conformity, and plausibility. Further, generalizing and explaining the results of machine learning models are important open challenges for healthcare. Possible solutions for improving the quality and accessibility of unstructured data include developing machine learning methods that can generate clinically relevant synthetic data and accelerating research on privacy-preserving techniques such as de-identification and pseudonymization of clinical text.
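As one concrete instance of the privacy-preserving techniques mentioned, here is a toy, rule-based de-identification sketch in Python. Real pipelines rely on validated tools and far richer patterns; the regexes, example note, and placeholder tags here are purely illustrative assumptions.

```python
# Toy sketch of rule-based de-identification of clinical free text.
# Real pipelines use validated tools and far richer patterns; the regexes
# and placeholder tags below are illustrative only.

import re

PATTERNS = {
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "DATE":       re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE":      re.compile(r"\b0\d{10}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

note = "Seen 12/03/2023, NHS 943 476 5919, contact 07700900123."
print(deidentify(note))
# Seen [DATE], NHS [NHS_NUMBER], contact [PHONE].
```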
Article
Background: Colon capsule endoscopy (CCE) is a minimally invasive alternative for patients unwilling to undergo conventional colonoscopy, or for whom the latter exam is contraindicated. This is particularly important in the setting of colorectal cancer screening. Nevertheless, these exams produce large numbers of images, and reading them is a monotonous and time-consuming task, with the risk of overlooking important lesions. The development of automated tools based on artificial intelligence (AI) technology may improve some of the drawbacks of this diagnostic instrument. Methods: A database of CCE images was used for development of a Convolutional Neural Network (CNN) model. This database included anonymized images of patients with protruding lesions in the colon or patients with normal colonic mucosa or with other pathologic findings. A total of 3,387,259 frames from 24 CCE exams were retrospectively reviewed. For CNN development, 3640 images (860 protruding lesions and 2780 with normal mucosa or other findings) were ultimately extracted. Training and validation datasets were constructed for the development and testing of the CNN. Results: The CNN detected protruding lesions with a sensitivity, specificity, positive and negative predictive values of 90.7%, 92.6%, 79.2% and 96.9%, respectively. The area under the receiver operating characteristic curve for detection of protruding lesions was 0.97. Conclusions: The deep learning algorithm we developed is capable of accurately detecting protruding lesions. The application of AI technology to CCE may increase its diagnostic accuracy and acceptance for screening of colorectal neoplasia.
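For orientation only, here is a minimal PyTorch sketch of a frame-level binary classifier of the kind such studies describe; the architecture, layer sizes, and 224x224 RGB input are assumptions for illustration, not the authors' published model.

```python
# Minimal PyTorch sketch of a frame-level binary classifier (protruding
# lesion vs. other). Layer sizes, input resolution, and training details
# are assumptions for illustration, not the authors' published model.

import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),      # collapse to one value per channel
        )
        self.head = nn.Linear(32, 1)      # single logit: lesion vs. not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)

model = FrameClassifier()
frames = torch.randn(4, 3, 224, 224)      # a batch of 4 RGB capsule frames
probs = torch.sigmoid(model(frames))      # per-frame lesion probabilities
print(probs.shape)                        # torch.Size([4, 1])
```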
Article
Background and aims: Capsule endoscopy (CE) interpretation requires the review of many thousands of images, with lesions often limited to just a few frames. In this study, we aimed to determine whether lesion detection declines according to the number of capsule videos read. Methods: 32 participants, 16 of whom were novice (NR) and 16 experienced (ER) capsule readers, took part in this prospective evaluation study. Participants read six capsule cases with a variety of lesions, in a randomly assigned order, during a single sitting. Psychomotor Vigilance Tests and Fatigue Scores were recorded prior to commencing and then after every two capsules read. Changes in lesion detection and measures of fatigue were assessed across the duration of the study. Results: Mean agreement with the predefined lesions was 48.3% (SD 16.1) for experienced readers and 21.3% (SD 15.1) for novice readers. Lesion detection declined among experienced readers after the first study (p = 0.01) but remained stable for subsequent capsules, while NR accuracy was unaffected by the number of capsules read. Objective measures of fatigue did not correlate with reading accuracy. Conclusion: This study demonstrates that reader accuracy declines after reading just one capsule study. Subjective and objective measures of fatigue were not sufficient to predict the onset of the effects of fatigue.
Article
Neural network-based solutions are under development to relieve physicians of the tedious task of small-bowel capsule endoscopy reviewing. Computer-assisted detection is a critical step, aiming to reduce reading times while maintaining accuracy. Weakly supervised solutions have shown promising results; however, video-level evaluations are scarce, and no prospective studies have been conducted yet. Automated characterization (in terms of diagnosis and pertinence) by supervised machine learning solutions is the next step. It relies on large, thoroughly labeled databases, for which preliminary "ground truth" definitions by experts are of tremendous importance. Other developments are underway to assist physicians in localizing anatomical landmarks and findings in the small bowel, in measuring lesions, and in rating bowel cleanliness. It remains an open question whether artificial intelligence will enter the market as proprietary built-in or plug-in software or as a universal cloud-based service, and how it will be accepted by physicians and patients.