Non-parametric Laser and Video Data Fusion: Application to Pedestrian Detection in Urban Environment

S. Gidel, C. Blanc, T. Chateau, P. Checchin and L. Trassoudaine
LASMEA - UMR 6602 CNRS
Blaise Pascal University
Aubière, France
samuel.gidel@lasmea.univ-bpclermont.fr
Abstract - In urban environments, pedestrian detection is a challenging task for automotive research, where algorithms suffer from a lack of reliability due to many false detections. This paper presents a multisensor fusion method based on a stochastic recursive Bayesian framework, also called a particle filter, which fuses information from laser and video sensors to improve the performance of a pedestrian detection system. The main contributions of this paper are, first, the use of a non-parametric data association method in order to better approximate the discrete distribution and, second, the modeling of the likelihood function with a mixture of Gaussian and uniform distributions in order to take into account all the available information. Simulation results as well as results of experiments conducted on real data demonstrate the effectiveness of the proposed approach.

Keywords: Particle filters, kernel density estimation, laser scanner, video camera, sensor fusion, likelihood computation.
1 Introduction
Currently, more than 8,000 vulnerable road users, pedestrians and cyclists, are killed every year in the European Union. Accident statistics indicate that, despite recent advances in safety due to the introduction of passive safety systems and tighter pedestrian legislation, pedestrian accidents still represent the second largest cause of traffic-related injuries and fatalities [1]. Pedestrian detection therefore becomes an essential functionality for future vehicles. This issue takes place in the context of the LOVe Project (software for the observation of vulnerable road users), which aims at improving road safety, mainly focusing on pedestrian security [2].
For a broad review of the various sensors used for pedestrian detection, one can consult [3] and [4], where piezoelectric, radar, ultrasound and laser range scanner sensors, as well as cameras operating in the visible or in the infrared, are described. Using video sensors to solve detection and identification problems seems natural at first, given the capacity of this type of sensor to detect and analyze the size, the shape and the texture of a pedestrian. Many methods to detect human beings have been developed in computer vision, based on monocular or stereoscopic images [6]. However, the strong sensitivity to atmospheric conditions, the wide variability of human appearance, the limited aperture of this sensor and the impossibility to obtain direct and accurate depth information have given rise to an interest in detection methods based on an active sensor such as a radar or a laser. The ability of laser-based pedestrian detection systems to count and track has been proved, even in the case of a very high-density crowd [7] [8]. However, the obvious limitations of this sensor (no information about the shape, contour, texture or color of objects), its sensitivity to atmospheric conditions such as rain and fog, and the frequent occlusions between objects require devising a laser/camera fusion method to improve a pedestrian collision avoidance system.
A study of sensor-based pedestrian detection, presented in [5], indicates that the cooperation of a laser scanner with cameras appears to be a good solution to develop. So, the problem is how to combine the diverse and sometimes conflicting pieces of information in the best manner, to outperform the best results expected from the use of a single sensor technology.
The main difficulty of data fusion lies in the association of the new observations coming from different sensors. Thus, two distinct problems have to be jointly solved: data association and estimation.
The conventional approaches are based on the linear Kalman filter and lead to data association schemes such as the JPDAF (Joint Probabilistic Data Association Filter) or the MHT (Multiple Hypotheses Tracker), which differ in their association techniques but which all share the same Gaussian assumptions [9]. Unfortunately, pedestrian tracking does not fit linear motion and Gaussian noise assumptions. Under such conditions (nonlinear stochastic state equation and non-Gaussian noises), particle filters are particularly appropriate [10]. In this paper, a novel laser/camera based system is presented, which aims at reliable, real-time monitoring and tracking of multiple people in an urban environment.
This article is organized as follows. In Section 2, the approach in the LOVe project framework is explained. Then, the system and the sensors used by the manufacturer Renault are described. Section 3 presents a non-parametric approach in a multisensor fusion framework. In Section 4, simulation and experimental results are presented to demonstrate the effectiveness of the proposed approach. Finally, the conclusion is presented in Section 5.

12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009. 978-0-9824438-0-4 ©2009 ISIF.
2 Overview
2.1 Our approach
The multisensor outdoor pedestrian detection system has been designed to fit the technical specifications defined within the LOVe Project [2]. The purpose is to develop safe and reliable software for the observation of "vulnerables". However, this software must allow a fast industrial exploitation (validated, standardized and portable software modules). In this context of standardization, the LOVe project specifications provide a list of common inputs and outputs for all the software blocks. In this output list, the detected pedestrian set is defined by $Z_k = (z^1_k, \dots, z^{N_z}_k)$, with $N_z$ the number of observations at instant $k$, and is associated with a "Detection Overall Rate" (DOR) defined by $\gamma_k = (\gamma^1_k, \dots, \gamma^{N_z}_k)$, with $\gamma^j_k \in \left]0,1\right]$ assessing its reliability.
In order to track pedestrians within a multisensor framework, a Sequential Importance Resampling Particle Filter (SIRPF) is proposed. It is based on a non-parametric approach using the particle set and all the DORs to compute probabilities for each data association. Moreover, these DORs can be included in the likelihood function together with the traditional uncertainty associated with each detection. So, a mixture of Gaussian and uniform distributions is proposed. The problem is how to combine, in the best manner, the diverse and sometimes conflicting information provided by two sensors in order to obtain better results than with only one sensor.
2.2 Vehicle description
An IBEO laser sensor and two camera sensors equip the Renault test vehicle. The IBEO ALASCA XT is mounted in the center of the frontal area of the vehicle, and two SMAL video cameras are on top of the car for simultaneous recording of the scene (see Figure 1).
3 Sensor Data Fusion
The task of sensor data fusion in automotive applications uses multiple sensors to constitute an all-around detection system to overcome the deficiency of an individual sensing device. Many works have been carried out to combine redundant and diverse measurement data from several sensors. In the particular case of pedestrian classification, several approaches have been proposed. Four main fusion architectures are identified in the literature:
- serial fusion: the laser scanner segments the scene and then provides some ROIs (Regions Of Interest), which are confirmed to match pedestrians by means of a vision-based classifier [11];
- centralized fusion: the measurements from the various sensors are merged (associated and tracked) in a same central block [12];
- decentralized fusion: each sensor system detects, classifies, identifies and tracks the potential pedestrians before being merged in a track-to-track fusion block [13];
- hybrid fusion: the available information includes both unprocessed data from one sensor and processed data from the other one [14].

Figure 1: Location of sensors in the Renault test vehicle.

In this paper, a centralized fusion architecture is chosen.
3.1 System Architecture
The pedestrian detection system architecture is shown in Figure 2. The pedestrian detection is performed in the laser scanner [8] and video image frames [15]. A centralized fusion module is developed whose main contributions are:
- a non-parametric data association,
- a mixture of Gaussian and uniform distributions for likelihood computation,
- the computation of a fusion confidence factor.
3.2 SIRPF
In the following section, the theory of sequential Monte Carlo methods in the framework of multiple object tracking is briefly recalled. For more details, the reader can refer to Doucet's work [10].
Let us consider a discrete dynamic system:
$$X_k = f(X_{k-1}) + W_k \quad (1)$$
$$Z_k = h(X_k) + V_k \quad (2)$$
Figure 2: Multi-module architecture using lidar and vision information for pedestrian detection and classification. Pedestrian detection and tracking modules for the laser scanner and the video source feed a data fusion stage (track association, filter update with likelihood computation, and confidence factor in data fusion), whose input vector follows the LOVe specifications. In red, the main contributions proposed in this paper.
where $X_k$ represents the state vector at instant $k$. No assumption is made about the two functions $f$ and $h$, whereas $V_k$ and $W_k$ are supposed to be two independent white noises.
Particle filters provide an approximate Bayesian solution to discrete-time recursive problems by updating a description of the posterior filtering density $p(x_k|z_{1:k})$. This a posteriori belief represents the state in which the objects are.
The prior distribution of the recursive Bayesian filter, $p(x_k|z_{1:k-1})$, is approximated by a set of $N$ samples, using the following equation:
$$p(x_k|z_{1:k-1}) = \frac{1}{N}\sum_{i=1}^{N}\delta(x_k - x^i_k) \quad (3)$$
where $\delta$ is the discrete Dirac function. Then the posterior distribution $p(x_k|z_{1:k})$ can be estimated by:
$$p(x_k|z_{1:k}) \propto p(z_k|x_k)\sum_{i=1}^{N} p(x_k|x^i_{k-1}) \quad (4)$$
This approach can be implemented by a bootstrap filter or a Sampling Importance Resampling (SIR) filter.
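As a concrete illustration, the SIR recursion of equations (1)-(4) can be sketched as follows. The one-dimensional random-walk motion model, the Gaussian likelihood and all numerical values are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, z, q_std=0.3, r_std=0.5):
    """One SIR iteration: predict with eq. (1), weight with the likelihood
    p(z_k|x_k) from eq. (4), then resample with replacement."""
    N = particles.shape[0]
    # Prediction: x_k = f(x_{k-1}) + W_k, here a random walk (assumed model)
    particles = particles + rng.normal(0.0, q_std, size=N)
    # Update: Gaussian likelihood of the measurement (assumed model)
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)
    w /= w.sum()                      # normalize before resampling
    # Resampling: Pr(pick particle l) = w_l
    return particles[rng.choice(N, size=N, p=w)]

# N samples drawn from the prior approximate p(x_0), as in eq. (3)
particles = rng.normal(0.0, 1.0, size=500)
for z in [0.2, 0.4, 0.6]:             # simulated measurements
    particles = sir_step(particles, z)
# particles now approximate the posterior p(x_k|z_{1:k})
```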
3.3 Non-parametric data association
3.3.1 Introduction
Non-parametric methods make it possible to take into account the samples and their distribution in the parameter space.
Let $N_x$ be the number of targets to track. This number is unknown at instant $k$. In this paper, multi-target tracking consists in estimating the state vector obtained by concatenating the vectors of all $N_x$ targets. The vector $X_k = \{x^{1,l}_k, \dots, x^{N_x,l}_k\}_{l=1}^{N}$ is given by the state equation (1), decomposed into $N_x$ equations:
$$X_k = F_k(X_{k-1}, W_k) \quad (5)$$
where $N$ is the number of particles, and the noises $W_k$ are supposed to be spatially and temporally white. The observation vector collected at time $k$ is denoted by $Z_k = (z^1_k, \dots, z^{N_z}_k)$, with $N_z$ the number of observations deduced from the process:
$$Z_k = H_k(X_k, V_k) \quad (6)$$
Once again, the noises $V_k$ are supposed to be independent white noises.
The association matrix $A_k$ is introduced to describe the association between the measurements $Z_k$ and the targets $X_k$. A non-parametric framework is chosen in order to estimate the association matrix $A_k$.
Two techniques make it possible to generate a succession of regions which satisfy good estimation conditions:
1. by fixing the volume of the region as a function of the number of samples $n$, for example $V_n = 1/\sqrt{n}$: this is the "Parzen window" method;
2. by adapting the size of the regions so that each contains a number of samples $k_n$ fixed according to $n$, for example $k_n = \sqrt{n}$: this is the K nearest neighbors method.
In this paper, the "Parzen window" method is chosen in order to exploit Kernel Density Estimation (KDE), which makes it possible to extrapolate the data to the entire population. Finally, a Bernoulli variable $w_h \in \{w_1, w_2\}$ is also defined, with $w_h = w_1$ if the associated event is classified as fused data and $w_h = w_2$ in all other cases.
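As a minimal illustration of the Parzen choice above (region volume shrinking as $V_n = 1/\sqrt{n}$), the sketch below estimates a one-dimensional density with a Gaussian kernel; the sample data and the bandwidth rule are illustrative assumptions.

```python
import numpy as np

def parzen_kde(samples, x, h=None):
    """Parzen-window estimate at points x: average of Gaussian kernels
    centered on the samples, with a window width shrinking as 1/sqrt(n)."""
    n = len(samples)
    if h is None:
        h = 1.0 / np.sqrt(n)                          # V_n = 1/sqrt(n) rule
    u = (x[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return k.mean(axis=1) / h                         # (1/n) sum K((x - x_l)/h)/h

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=400)              # assumed 1-D data
x = np.linspace(-3.0, 3.0, 61)
p = parzen_kde(samples, x)
# p approximates the underlying standard normal density over [-3, 3]
```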
3.3.2 Parzen association for particle filter
An approach is proposed to build a non-parametric model based on kernel functions, allowing a smart selection of the most pertinent data fusion from a likelihood analysis. This method is unsupervised, so no prior knowledge is required to process data fusion. The likelihood $p(z^j_k, x^i_k|w_h)$ represents the probability that a 2D particle belongs to the fused data. The likelihood $p(z^j_k, x^i_k|w_h)$ is modeled with a Parzen window, which evaluates the distance between an observation $z^j_k$ located in the image and all its neighbors $x^{i,l}_k$, such that:
$$p(z^j_k, x^i_k|w_h) = \frac{1}{N}\sum_{l=1}^{N}\varphi(z^j_k, x^{i,l}_k) \quad (7)$$
where $\varphi(z^j_k, x^{i,l}_k)$ is the kernel function, which modifies the influence zone of an observation with respect to its neighbors. A mixture of Gaussian and uniform distributions is used in order to merge in a same distribution all the information available from the mono-sensor algorithm outputs. $\varphi(z^j_k, x^{i,l}_k)$ is given by:
$$\varphi(z^j_k, x^{i,l}_k) = (1-\gamma^j_k)\cdot U\!\left(-\tfrac{1}{\gamma^j_k}, \tfrac{1}{\gamma^j_k}\right) + \gamma^j_k\cdot\exp\!\left[-\lambda_c\cdot d_c(z^j_k, x^{i,l}_k)\right] \quad (8)$$
The parameter $\lambda_c$ permits to adjust the weights. The generalized squared interpoint distance $d_c$ is defined by:
$$d_c(z^j_k, x^{i,l}_k) = (z^j_k - x^{i,l}_k)\,\Sigma_{\varphi}^{-1}\,(z^j_k - x^{i,l}_k)^{T} \quad (9)$$
with the covariance matrix $\Sigma_{\varphi}$ being the sum of the covariance matrix $\Sigma_{SP2}$ given by the tracking algorithm (representing the variance on the pedestrian position) and the measurement noise covariance matrix $R$:
$$\Sigma_{\varphi} = \Sigma_{SP2} + R \quad (10)$$
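Equations (7)-(10) can be sketched as follows. The covariance values, the DOR and the one-dimensional uniform density value used for the floor term are illustrative assumptions.

```python
import numpy as np

def kernel_phi(z, x, dor, cov_sp, cov_r, lam=1.0):
    """Kernel of eq. (8): a uniform floor weighted by (1 - DOR) plus an
    exponential of the Mahalanobis distance d_c (eq. 9) weighted by the DOR."""
    cov = cov_sp + cov_r                       # eq. (10): Sigma_phi = Sigma_SP2 + R
    diff = z - x
    d_c = diff @ np.linalg.inv(cov) @ diff     # eq. (9): squared interpoint distance
    uniform_floor = dor / 2.0                  # 1-D density of U(-1/DOR, 1/DOR) (assumed)
    return (1.0 - dor) * uniform_floor + dor * np.exp(-lam * d_c)

def parzen_likelihood(z, particles, dor, cov_sp, cov_r):
    """Eq. (7): average of the kernel over the N particles of one target."""
    return float(np.mean([kernel_phi(z, x, dor, cov_sp, cov_r) for x in particles]))

cov_sp = np.diag([0.15 ** 2, 0.15 ** 2])       # track position variance (assumed)
cov_r = np.diag([0.05 ** 2, 0.05 ** 2])        # measurement noise (assumed)
rng = np.random.default_rng(2)
particles = rng.normal([1.0, 2.0], 0.1, size=(200, 2))
near = parzen_likelihood(np.array([1.0, 2.0]), particles, 0.9, cov_sp, cov_r)
far = parzen_likelihood(np.array([5.0, 5.0]), particles, 0.9, cov_sp, cov_r)
# a measurement on the particle cloud scores much higher than a distant one,
# but the distant one keeps the (1 - DOR) uniform floor
```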
The 2D particle $x^{i,l}_k \in w_1$ having the highest probability is chosen by the maximum likelihood estimator, such that:
$$A^{i,j}_k = \arg\max_{(i,j)}\, p(z^j_k, x^i_k \mid w_1) \quad (11)$$
3.3.3 Likelihood computation
The SIRPF approximates the filtering distribution $p(x_k|z_{1:k})$ by a weighted set of $N$ particles. So, the data association above being known, the next step consists in computing the weights of all the particles belonging to the associated gravity center given by (11). Thus, the weight list $L^{i,j}_k$ is calculated from a mixture of Gaussian and uniform distributions (see Figure 3) in order to keep all the information used during the data association step. $L^{i,j}_k$ is defined as follows:
$$L^{i,j}_k = \{\varphi(z^j_k, x^{i,l}_k)\} \quad (12)$$
Finally, the weights are normalized before the resampling stage. This algorithm is summarized in Algorithm 1.
Figure 3: An example of a mixture of Gaussian and uniform distributions: in blue the uniform distribution, in green the Gaussian distribution and in red the mixture. On the left, σ = 0.15 m and DOR = 0.95; on the right, σ = 0.15 m and DOR = 0.55.
3.4 Computation of the confidence factor in data fusion
All currently tracked objects are tested to determine whether or not they are the result of fusion between laser and video data. For this purpose, each target is evaluated by computing its Confidence Factor in Data Fusion (CFDF).
Three criteria constitute the CFDF: the Confidence in the Age of Track (CAT), the Detection Overall Rate (DOR) and the Sensor Fusion Rate (SFR). The CAT evaluates whether or not the pedestrian target has been tracked for a long time. The DOR, provided by the mono-sensor algorithms, is the rate of confidence that the detected object is actually a human. The SFR evaluates whether or not the track is the result of data fusion between laser and video pedestrian measurements.
The CAT and the SFR are computed from a Gaussian distribution:
$$\mathrm{CAT}(t) = \begin{cases} \dfrac{1}{\sigma_o\sqrt{2\pi}}\exp\!\left[-\dfrac{1}{2}\left(\dfrac{t-\mu_o}{\sigma_o}\right)^2\right] & 0 < t \le \mu_o \\ 1 & t > \mu_o \end{cases} \quad (13)$$
where $\mu_o$ represents the minimum lifetime of a track without observation, $\sigma_o$ allows to decrease the CAT more or less quickly, and $t$ is the age of the track.
$$\mathrm{SFR}(x) = \frac{1}{\sigma_f\sqrt{2\pi}}\exp\!\left[-\frac{1}{2}\left(\frac{x-\mu_f}{\sigma_f}\right)^2\right] \quad (14)$$
where $\mu_f$ represents the theoretical ratio between the number of laser data and the number of video data, $\sigma_f$ allows to decrease the sensor fusion rate more or less quickly, and $x$ is the ratio between the number of laser data and the number of video data.
Finally, the final result is given by:
$$\mathrm{CFDF} = \frac{\mathrm{CAT} + \mathrm{SFR} + \mathrm{DOR}}{3} \quad (15)$$
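The three criteria of equations (13)-(15) can be sketched as follows; the parameter values $\mu_o$, $\sigma_o$, $\mu_f$ and $\sigma_f$ are illustrative assumptions.

```python
import math

def cat(t, mu_o=2.0, sigma_o=1.0):
    """Confidence in the Age of Track, eq. (13): Gaussian-shaped term for
    young tracks (0 < t <= mu_o), saturating at 1 for t > mu_o."""
    if t > mu_o:
        return 1.0
    return (1.0 / (sigma_o * math.sqrt(2.0 * math.pi))) * \
        math.exp(-0.5 * ((t - mu_o) / sigma_o) ** 2)

def sfr(x, mu_f=1.0, sigma_f=0.5):
    """Sensor Fusion Rate, eq. (14): peaks when the laser/video data
    ratio x matches the theoretical ratio mu_f."""
    return (1.0 / (sigma_f * math.sqrt(2.0 * math.pi))) * \
        math.exp(-0.5 * ((x - mu_f) / sigma_f) ** 2)

def cfdf(t, x, dor):
    """Confidence Factor in Data Fusion, eq. (15): mean of the three criteria."""
    return (cat(t) + sfr(x) + dor) / 3.0

# an old, well-fused, confidently classified track scores higher than
# a young track fed by a single sensor
```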
4 Experiments
This section presents simulations and experiments which validate the laser and video data fusion algorithm.
4.1 Simulations
The goal of these simulations is to show how the data fusion algorithm improves the pedestrian tracking system. First, a study of the data association algorithm is proposed. In Figure 4, a scenario with two pedestrians detected from laser and video data was generated. The relevance of a data association based on kernel density estimation is demonstrated here, when several measurements from different sensors can be associated with the same track. Second, a study of the likelihood computation is proposed. In Figure 5, a cloud of random particles (red points) was generated. The particles (blue stars) representing the cloud centers were selected as measurements. According to the likelihood (see Figure 3), the weights are computed (green points). With the same uncertainty concerning the position, the results are different according to the "detection overall rate". This last
Algorithm 1: Non-parametric Data Fusion
1. Set $k = 0$; generate $N$ samples from each measurement $j = 1, \dots, N_z$, i.e. $\{x^{j,l}_0\}_{l=1}^{N} = \{x^{1,l}_0, \dots, x^{N_z,l}_0\}_{l=1}^{N}$, where $x^{j,l}_0 \sim p(X^j_0)$.
2. Compute the matrix $A_k$ for all measurements and targets $(N_z, N_x)$:
   if $A_k \ge \alpha$, with $\alpha$ the gate threshold, set $A^{i,j}_k = p(z^j_k, x^i_k|w_h)$, where $p(z^j_k, x^i_k|w_h)$ is the association probability for hypothesis $i$ using $N$ particles according to equation (7);
   else set $A^{i,j}_k = 0$, keep $\{x^{i,l}_k\} = \{x^{i,l}_{k-1}\}$ and go to step 5.
3. Compute the weights $w^{i,l}_k = L^{i,j}_k$ (12) and normalize, i.e. $\tilde{w}^{i,l}_k = w^{i,l}_k / \sum_{l=1}^{N} w^{i,l}_k$.
4. Generate a new set $\{x^{i,l*}_k\}_{l=1}^{N}$ by resampling with replacement $N$ times from $\{x^{i,l}_k\}_{l=1}^{N}$, where $\Pr(x^{i,l*}_k = x^{i,l}_k) = \tilde{w}^{i,l}_k$.
5. Predict (simulate) new particles, i.e. $x^{i,l}_{k+1} = f(x^{i,l}_k, v_k)$, $l = 1, \dots, N$, using different noise realizations for the particles. Compute for each estimate its CFDF (15).
6. Increase $k$ and iterate from step 2.
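Step 2 of Algorithm 1 (the gated computation of $A_k$, followed by the maximum-likelihood selection of equation (11)) can be sketched as below. The gate threshold, the plain Gaussian kernel standing in for equation (7), and all numerical values are illustrative assumptions.

```python
import numpy as np

def association_matrix(measurements, targets, alpha=1e-3, sigma=0.5):
    """Fill A_k[i, j] with the association probability of target i and
    measurement j (a Gaussian kernel stands in for eq. (7) here),
    zeroing entries below the gate threshold alpha."""
    Nx, Nz = len(targets), len(measurements)
    A = np.zeros((Nx, Nz))
    for i, x in enumerate(targets):
        for j, z in enumerate(measurements):
            p = np.exp(-0.5 * np.sum((z - x) ** 2) / sigma ** 2)
            A[i, j] = p if p >= alpha else 0.0   # gating
    return A

targets = np.array([[0.0, 0.0], [5.0, 5.0]])
measurements = np.array([[0.2, -0.1], [5.1, 4.8], [20.0, 20.0]])  # last one: clutter
A = association_matrix(measurements, targets)
# eq. (11): keep, for each target, the measurement of maximum likelihood
best = A.argmax(axis=1)
```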
point is important because, when pedestrian classification is not reliable, more hypotheses (particles) should be kept in order to correct a possible error in the state estimator.
4.2 Experiments on real data
Various results with real laser and video data fusion are introduced. Lidar and camera data are not given in the same reference frames; the reference frame related to the lidar was chosen for fusion. The SIRPF with Parzen window association was tested in many different situations on real data provided by Renault, the French vehicle manufacturer. The presented scenarios (see Figures 8, 9 and 10) include several pedestrians (> 5) who appear and disappear in the sensor area and show different situations such as an urban scene, a semi-urban scene, or a car park. The pedestrians move in all directions. The vehicle moves at a speed ranging from 0 to 50 km/h, which allows to test the robustness of this method.
Usually, in a pedestrian classification framework, lidar measurements can generate false tracks [17]. These lidar measurements result in most cases from fixed objects typical of an urban environment: false lidar measurements can correspond to security barriers, poles, trees, etc. For each iteration, the number of false detections is obtained by calculating the ratio:
$$\mathrm{rate\_of\_false\_detection} = \frac{N_T - N_P}{N_T} \quad (16)$$
with $N_T$ the total number of detections and $N_P$ the number of detected pedestrians. The rate of detection of pedestrian(s) is given by calculating the ratio:
$$\mathrm{rate\_of\_pedestrian\_detection} = \frac{N_P}{N_{P\_VT}} \quad (17)$$
with $N_{P\_VT}$ the number of pedestrians who are actually in the area observed by the sensors. Table 1 shows the advantage of data fusion: it significantly decreases the number of false detections compared to a single lidar or a single camera. It can also be noticed that the rate of pedestrian detection is higher after data fusion. To conclude, a study illustrating the results of the CFDF module over time is proposed, where each track is associated with a different color (see Figures 6 and 7). This study is conducted on the multi-pedestrian scenario presented in Figure 9.

Figure 4: Data association from several sensors (O1: position uncertainty ±5 cm, pedestrian confidence rate 0.55; O2: ±70 cm, 0.95; O3: ±55 cm, 0.89). Here, according to the nearest neighbor criterion, O1 would be associated to object 1 and O3 to object 2, whereas the correct association is given by the Parzen algorithm, i.e. O2 with object 1 and O3 with object 2, while O1 is in reality a pole.

Figure 5: Example of likelihood computation on the cloud of particles presented in Figure 3. On the left, σ = 0.15 m and DOR = 0.95; on the right, σ = 0.15 m and DOR = 0.55.
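The two rates of equations (16)-(17) can be computed directly; the detection counts below are illustrative, not the paper's.

```python
def false_detection_rate(n_total, n_pedestrians):
    """Eq. (16): share of detections that are not pedestrians."""
    return (n_total - n_pedestrians) / n_total

def pedestrian_detection_rate(n_pedestrians, n_ground_truth):
    """Eq. (17): share of actually present pedestrians that were detected."""
    return n_pedestrians / n_ground_truth

# illustrative counts for one iteration (assumed)
print(round(false_detection_rate(50, 44), 2))       # → 0.12
print(round(pedestrian_detection_rate(44, 48), 2))  # → 0.92
```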
Table 1: Rate of false and correct detections for the scenarios presented in the article: when only lidar or only camera are used, and after data fusion.

                           Lidar only   Camera only   After fusion
False Detection Rate       0.536        0.274         0.108
Pedestrian Detection Rate  0.916        0.702         0.928
Figure 6: Result of multi-pedestrian tracking on X and Y positions. Measurements (video and lidar) are represented by gray circles. Each track is represented by a different color.

Figure 7: Result of multi-pedestrian tracking with CFDF. Each CFDF is represented by a star of a different color, while each DOR is represented by a dot of a different color.
5 Conclusions
This paper presented a multisensor pedestrian detection system. The centralized fusion algorithm is applied in a Bayesian framework. Indeed, in order to track more easily pedestrians' random movements, which can include abrupt trajectory changes, a SIRPF was chosen. First, in order to take into account the unspecified character of the distribution of particles predicted by the SIR filter and all the DORs given by the mono-sensor algorithms, a data association based on kernel density estimation was used. Second, a likelihood based on a mixture of Gaussian and uniform distributions was used; thus it was possible to take into account more precisely all the available information related to the uncertainties of laser and video measurements (uncertainty concerning both the pedestrians' position and classification). Experimental results on simulated data as well as on real data demonstrated the effectiveness of this approach.
The next step is to test this system on more data sequences in order to characterize it in terms of false positive and correct detection rates.

Figure 8: Detection example in a cross section after centralized fusion of lidar and video image data. The red dots represent the lidar detections and the blue rectangles represent the camera detections. The yellow rectangles are the results provided by the data fusion module.

Figure 9: Detection example in a car park after centralized fusion of lidar and video image data. The red dots represent the lidar detections and the blue rectangles represent the camera detections. The yellow rectangles are the results provided by the data fusion module. A correct pedestrian detection can also be noticed at a distance of up to 25 meters.

Figure 10: Detection example in a cross section after centralized fusion of lidar and video image data. The red dots represent the lidar detections and the blue rectangles represent the camera detections. The yellow rectangles are the results provided by the data fusion module. Pedestrians in various orientations are detected.
References
[1] http://www.euractiv.com/en/health/road-safety-pedestrians/article-117530
[2] http://love.univ-bpclermont.fr/
[3] T. Gandhi and M. M. Trivedi, "Pedestrian Collision Avoidance Systems: a Survey of Computer Vision based Recent Studies", in Proc. IEEE Intelligent Transportation Systems 2006, Sept. 2006, pp. 976-981.
[4] F. Bu and C.-Y. Chan, "Pedestrian Detection in Transit Bus Application: Sensing Technologies and Safety Solutions", in IEEE Intelligent Vehicles Symposium (IV), Las Vegas, June 2005.
[5] D. M. Gavrila, "Sensor-based Pedestrian Protection", IEEE Intelligent Systems, vol. 16, no. 6, pp. 77-81, 2001.
[6] F. Arnell, "Vision-Based Pedestrian Detection System for use in Smart Cars", Master's Thesis, Stockholm, Sweden, 2005.
[7] X. Shao, H. Zhao, K. Nakamura, K. Katabira, R. Shibasaki and Y. Nakagawa, "Detection and Tracking of Multiple Pedestrians by Using Laser Range Scanner", IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, USA, 2007.
[8] S. Gidel, P. Checchin, C. Blanc, T. Chateau and L. Trassoudaine, "Pedestrian Detection Method using a Multilayer Laserscanner: Application in Urban Environment", in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2008.
[9] Y. Bar-Shalom and X. R. Li, "Multitarget-Multisensor Tracking: Principles and Techniques", ISBN 0-9648312-0-0, 1995.
[10] A. Doucet, S. Godsill and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering", Statistics and Computing, vol. 10, no. 3, pp. 197-208, 2000.
[11] M. Szarvas, U. Sakai and J. Ogata, "Real-time Pedestrian Detection using Lidar and Convolutional Neural Network", in Proc. of the IEEE Intelligent Vehicles Symposium (IV), Tokyo, Japan, 2006.
[12] D. T. Linzmeier, M. Skutek, M. Mekhaiel and K. C. J. Dietmayer, "A Pedestrian Detection System based on Thermopile and Radar Sensor Data Fusion", 8th International Conference on Information Fusion, Philadelphia, USA, 2005.
[13] C. Blanc, L. Trassoudaine and J. Gallice, "EKF and Particle Filter Track-to-Track Fusion: a Quantitative Comparison from Radar/Lidar Obstacle Tracks", 8th International Conference on Information Fusion, Philadelphia, USA, 2005.
[14] G. Monteiro, C. Premebida, P. Peixoto and U. Nunes, "Tracking and Classification of Dynamic Obstacles Using Range Finder and Vision", in IROS 2006 Workshop on "Safe Navigation in Open and Dynamic Environments - Autonomous Systems versus Driving Assistance Systems", Beijing, China, 2006.
[15] L. Leyrit, T. Chateau, C. Tournayre and J. T. Lapresté, "Association of AdaBoost and Kernel Based Machine Learning Methods for Visual Pedestrian Recognition", in Proc. of the IEEE Intelligent Vehicles Symposium (IV), Eindhoven, The Netherlands, 2008.
[16] L. Trailovic and L. Y. Pao, "Position error modeling using Gaussian mixture distribution with application to comparison of tracking algorithms", in American Control Conference, Denver, USA, 2003.
[17] S. Gidel, P. Checchin, T. Chateau, C. Blanc and L. Trassoudaine, "Parzen Method for Fusion of Laserscanner Data: Application to Pedestrian Detection", in Proc. of the IEEE Intelligent Vehicles Symposium (IV), Eindhoven, The Netherlands, 2008.
632
... Historiquement, le LASMEA travaille depuis longtemps sur le développement de logiciels de perception à partir de capteurs lasers tels que le Riegl LD90-25S [Checchin, 1996], le Riegl LMSZ210-60 [Blanc, 2005], le Sick LMS 221 [Chanier et al., 2008] et depuis 2006 sur l'IBEO LD ML [Gidel et al., 2008a] [Gidel et al., 2009b]. ...
... Les principales contributions de cet algorithme sont premièrement, la modélisation de la fonction de vraisemblance par le mélange d'une loi gaussienne avec une loi uniforme afin de mieux prendre en compte toute l'information disponible à chaque itération k et deuxièmement, le calcul d'une note de fusion qui prend en compte la qualité des détections (ou pistes) renseignées par les algorithmes situés en amont [Gidel et al., 2009b]. ...
Article
At first my work consisted in developing an appropriate method to detect, then identify and track pedestrians in an outdoor environment from a single four-plane laser sensor. Pedestrian extraction and the merging of the four laser planes are both based on a nonparametric kernel method, also called "Parzen Windows". Initially, this estimator is used to approximate the likelihood function of the impact record in the laser image according to a pedestrian's geometrical characteristics. Secondly this estimator is used to calculate the likelihood that a pedestrian should be located within the four laser planes. Finally, to best characterize the complex trajectory of a pedestrian, the tracking process is based on the traditional particle filter. Unfortunately a pedestrian detection system which relies only on a laser sensor remains unsatisfactory as far as performance is concerned. Indeed, the inherent limitations of this sensor (no information about height, the outline or the color of the objects), as well as its sensitivity to such atmospheric conditions as rain or fog, make it necessary to resort to a multisensorial solution which allows to effectively combine the information provided by the laser and video sensors. This fusion methods is based on the development of a non-parametric method for data association, which allows to keep all the information contained in the measurements sent by the laser and video sensors. The performance of each proposed algorithm was characterized and reviewed, using real data obtained from numerous recording on board the LASMEA and Renault test vehicle ; Renault being the French vehicle manufacturer with whom we collaborate on our ANR LOVe project.
... Data association algorithms are used in cluttered environment, where measurement not only originate from the target but also from other sources (thermal noise, obstacles, clouds, terrain) [11,31]. A Bayesian data association technique, Probabilistic data association (PDA), uses all the latest validated measurements with different weights for associating any validated measurements with the track [32]. ...
Article
Full-text available
Target detection and tracking is important in military as well as in civilian applications. In order to detect and track high-speed incoming threats, modern surveillance systems are equipped with multiple sensors to overcome the limitations of single-sensor based tracking systems. This research proposes the use of information from RADAR and Infrared sensors (IR) for tracking and estimating target state dynamics. A new technique is developed for information fusion of the two sensors in a way that enhances performance of the data association algorithm. The measurement acquisition and processing time of these sensors is not the same; consequently the fusion center measurements arrive out of sequence. To ensure the practicality of system, proposed algorithm compensates the Out of Sequence Measurements (OOSMs) in cluttered environment. This is achieved by a novel algorithm which incorporates a retrodiction based approach to compensate the effects of OOSMs in a modified Bayesian technique. The proposed modification includes a new gating strategy to fuse and select measurements from two sensors which originate from the same target. The state estimation performance is evaluated in terms of Root Mean Squared Error (RMSE) for both position and velocity, whereas, track retention statistics are evaluated to gauge the performance of the proposed tracking algorithm. The results clearly show that the proposed technique improves track retention and and false track discrimination (FTD).
... Fusion centralisée Pour une architecture centralisée, toutes les informations sont traitées et analysées dans un unique noeud de fusion [33]. Tous les capteurs lui sont reliés. ...
Thesis
Full-text available
Cette thèse, réalisée en coopération avec l'Institut Pascal et Renault, s'inscrit dans le domaine des applications d'aide à la conduite, la plupart de ces systèmes visant à améliorer la sécurité des passagers du véhicule. La fusion de différents capteurs permet de rendre plus fiable la prise de décision. L'objectif des travaux de cette thèse a été de développer un système de fusion entre un radar et une caméra intelligente pour la détection des obstacles frontaux au véhicule. Nous avons proposé une architecture modulaire de fusion temps réel utilisant des données asynchrones provenant des capteurs sans a priori applicatif. Notre système de fusion de capteurs est basé sur des méthodes de suivi de plusieurs cibles. Des méthodes probabilistes de suivi de cibles ont été envisagées et une méthode particulière, basée sur la modélisation des obstacles par un ensemble fini de variables aléatoires a été choisie et testée en temps réel. Cette méthode, appelée CPHD (Cardinalized Probability Hypothesis Density) permet de gérer les différents défauts des capteurs (non détections, fausses alarmes, imprécision de positions et de vitesses mesurées) et les incertitudes liées à l’environnement (nombre inconnu d'obstacles à détecter). Ce système a été amélioré par la gestion de différents types d'obstacles : piéton, voiture, camion, vélo. Nous avons proposé aussi une méthode permettant de résoudre le problème des occultations avec une caméra de manière explicite par une méthode probabiliste en prenant en compte les imprécisions de ce capteur. L'utilisation de capteurs intelligents a introduit un problème de corrélation des mesures (dues à un prétraitement des données) que nous avons réussi à gérer grâce à une analyse de l'estimation des performances de détection de ces capteurs. Afin de compléter ce système de fusion, nous avons mis en place un outil permettant de déterminer rapidement les paramètres de fusion à utiliser pour les différents capteurs. 
Our system was tested in real situations during numerous experiments, and each contribution was thus validated both qualitatively and quantitatively.
... On the other hand, when variants of the Bayes filter are used, the method includes a prediction step, based on mathematical models and information from the past, and a correction step, in which sensor data are used to update the prediction. Some works that follow this idea are (Cho et al. 2014; Gidel et al. 2009; Monteiro et al. 2006). ...
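The prediction/correction cycle described above can be sketched as a discrete Bayes filter over a small grid of positions; the motion kernel and sensor likelihood below are illustrative assumptions, not values from any of the cited works.

```python
import numpy as np

def predict(belief, kernel):
    """Prediction step: diffuse the belief with a motion-model kernel."""
    return np.convolve(belief, kernel, mode="same")

def update(belief, likelihood):
    """Correction step: weight the prediction by the sensor likelihood
    and renormalize so the belief remains a probability distribution."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Toy example over 5 discrete positions (hypothetical numbers).
belief = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # uniform prior
kernel = np.array([0.1, 0.8, 0.1])                # target mostly stays put
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # sensor favours cell 2

belief = update(predict(belief, kernel), likelihood)
```

The convolution plays the role of the mathematical motion model, while the element-wise product implements the measurement update.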
Article
Full-text available
Automatic detection of people is essential for automated systems that interact with persons and perform complex tasks in an environment with humans. To detect people efficiently, this article proposes the use of high-level information from several people detectors, combined using probabilistic techniques. The detectors rely on information from one or more sensors, such as cameras and laser rangefinders. Combining the detectors allows the position of persons to be predicted inside the sensors' fields of view and, in some situations, outside them. Moreover, fusing the detectors' outputs can make people detection more robust to failures and occlusions, yielding more accurate and complete information than that given by a single detector. The methodology presented in this paper is based on a recursive Bayes filter, whose prediction and update models are specified as a function of the detectors used. Experiments were executed with a mobile robot that collects real data in a dynamic environment, which, in our methodology, is represented by a local semantic grid combining three different people detectors. Results indicate the improvements brought by the approach in relation to a single detector alone.
... and Cherfaoui (2008) proposed an evidential fusion approach to combine and update detection and recognition confidences in a multi-sensor pedestrian tracking system. Their method was tested on synthetic data, and their results showed that taking source reliability and confidence factors into account improves the object detection rate. Gidel et al. (2009) presented a multi-sensor pedestrian detection system in which a centralized fusion algorithm is applied in a Bayesian framework. The main contribution is a non-parametric data association technique based on machine-learning kernel methods. The performance of these methods depends on the number of particles. Besides, ...
Article
Full-text available
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. Vehicle perception is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling the static parts, while detection and tracking of moving objects (DATMO) is responsible for modelling the moving parts of the environment. The perception output is used to reason and decide which driving actions are best for specific driving situations. In order to perform good reasoning and control, the system has to correctly model the surrounding environment. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system. Therefore, many sensors are part of a common intelligent vehicle system. Multi-sensor fusion has long been a research topic, driven by the need to combine information from different views of the environment to obtain a more accurate model. This is achieved by combining redundant and complementary measurements of the environment. Fusion can be performed at different levels inside the perception task. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level. Knowledge about the class of moving objects at the detection level can help to improve their tracking, reason about their behaviour, and decide what to do according to their nature. Most current perception solutions consider classification information only as aggregate information for the final perception output.
Also, the management of incomplete information is an important issue in these perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, like occlusions, weather issues and object shifting. It is important to manage these situations by taking the degree of imprecision and uncertainty into account in the perception process. The main contributions of this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., obtain a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. We believe that a richer list of tracked objects can improve future stages of an ADAS and enhance its final results. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture that can include other types or numbers of sensors. First, we define a composite object representation to include class information as a part of the object state from the early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem according to the level at which object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications.
Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We apply this approach at tracking level to fuse information from two track representations, and at detection level to find the relations between observations and to fuse their representations. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as a part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods. We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios and focusing on the detection, classification and tracking of different moving objects: pedestrian, bike, car and truck. We obtained promising results from our proposed approaches and empirically showed how our composite representation can improve the final result when included at different stages of the perception task.
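As a minimal illustration of the evidential (belief function) framework this thesis builds on, the sketch below combines two mass functions over a two-class frame with Dempster's rule of combination; the sensor names and mass values are hypothetical.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    masses) with Dempster's rule, renormalizing away the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible hypotheses reinforce their intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses contribute to the conflict mass
            conflict += ma * mb
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}

PED, CAR = frozenset({"ped"}), frozenset({"car"})
BOTH = PED | CAR                                   # total ignorance
m_lidar = {PED: 0.6, BOTH: 0.4}                    # lidar leans pedestrian
m_camera = {PED: 0.5, CAR: 0.3, BOTH: 0.2}

fused = dempster(m_lidar, m_camera)
```

Assigning mass to the whole frame (`BOTH`) is how the framework represents a sensor's own uncertainty, which is the property the dissertation exploits for incomplete information.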
... Another challenge is posed by the sea clutter, i.e. reflections that occur when the angle of attack of the radar's beam to the sea surface increases due to waves (Skolnik, 2001). In the automotive industry sensor fusion of range finders and computer vision has been used to enhance robustness of obstacle detection, as shown in (Coue et al., 2003;Gidel et al., 2009;Monteiro et al., 2006). This paper proposes a sensor fusion strategy of radar and vision technologies further supplemented with dedicated filtering and statistical methods to obtain robust results for marine operations of highly manoeuvrable and fast planing crafts. ...
Conference Paper
This paper describes an obstacle detection system for a high-speed and agile unmanned surface vehicle (USV) running at speeds of up to 30 m/s. The aim is a real-time, high-performance obstacle detection system using both radar and vision technologies to detect obstacles within a range of 175 m. A computer vision horizon detector enables highly accurate attitude estimation despite large and sudden vehicle accelerations. This further facilitates the reduction of sea clutter by utilising an attitude-based statistical measure. Full-scale sea trials show a significant increase in obstacle tracking performance using sensor fusion of radar and computer vision.
Chapter
This paper presents a new taxonomy for the context-awareness problem in data fusion, which concerns fusing data extracted from multiple sensory datatypes such as images, videos, or text. Any smart environment generates big data of various datatypes extracted from multiple sensors. Because of the context-awareness problem, fusing this big data requires expert people: each smart environment has specific characteristics, conditions, and rules that call for a human expert in each context. The proposed taxonomy addresses this problem by focusing on three dimensions: classes of data fusion, types of generated data, and data properties such as reduction or noise, together with the associated challenges. It sets the context domain aside and introduces solutions for fusing big data through the classes of the proposed taxonomy. The taxonomy was derived from a study of sixty-six research papers covering various types of fusion and different properties of data fusion. The paper also presents new research challenges in multi-data fusion.
Conference Paper
Pedestrian protection systems play an important role in the perception systems of unmanned vehicles and Advanced Driver Assistance Systems. In order to obtain more detailed information about surrounding objects, the perception system of such an intelligent system is usually equipped with different sensors, so technology to fuse information from heterogeneous sensors is very important. This paper proposes a novel way to fuse radar information with camera images to detect pedestrians and acquire their dynamic information. The contributions of this paper are as follows. First, a new intra-frame clustering algorithm and an inter-frame tracking algorithm are put forward to extract valid target signals from noisy raw radar data. Second, to align the radar and vision data spatially, a least-squares algorithm is used to estimate the coordinate transformation matrix. Then, with the aid of the projections of radar points, a flexible strategy to generate regions of interest (ROI) is put forward. Furthermore, to further accelerate detection, an improved fast object estimation algorithm is proposed to obtain a more accurate potential target area based on the ROI. Finally, histogram of oriented gradients (HOG) features of the potential area are extracted and a support vector machine is used to judge whether it is a pedestrian. The proposed approach is verified through real experiments, and the results show that this method is feasible and effective.
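The radar-vision space alignment step can be sketched as a linear least-squares fit; the affine camera model and the calibration correspondences below are simplifying assumptions for illustration (the paper does not specify its exact transformation model here).

```python
import numpy as np

def fit_affine(radar_pts, image_pts):
    """Estimate a 2-D affine transform mapping radar ground-plane points
    to image pixels by linear least squares: [u v] = [x y 1] @ A."""
    X = np.hstack([radar_pts, np.ones((len(radar_pts), 1))])  # homogeneous
    A, *_ = np.linalg.lstsq(X, image_pts, rcond=None)         # 3x2 params
    return A

# Hypothetical calibration correspondences (radar metres -> image pixels).
radar = np.array([[0.0, 5.0], [2.0, 5.0], [0.0, 10.0], [2.0, 10.0]])
pixels = np.array([[320.0, 400.0], [420.0, 400.0],
                   [320.0, 300.0], [420.0, 300.0]])

A = fit_affine(radar, pixels)
projected = np.hstack([radar, np.ones((4, 1))]) @ A  # radar points in pixels
```

With the transform estimated, each radar detection can be projected into the image to seed an ROI around the expected pedestrian location.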
Conference Paper
Full-text available
A multi-module architecture to detect, track and classify objects in semi-structured outdoor scenarios for intelligent vehicles is proposed in this paper. To accomplish this task, the information provided by a laser range finder (LRF) and a monocular camera is used. The detection and tracking phases are performed in the LRF space, and the object classification methods work both in the laser space (with a majority voting scheme and a Gaussian Mixture Model (GMM) classifier) and in the vision space (AdaBoost classifier). A sum decision rule based on the Bayes approach is used to combine the results of each classification technique, and hence a more reliable object classification is achieved. Experiments using real data confirm the robustness of the proposed architecture.
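The sum decision rule for combining the classifier outputs can be sketched as follows; the three per-classifier posteriors are invented for the example.

```python
import numpy as np

def sum_rule(posteriors):
    """Combine per-classifier class posteriors with the sum rule:
    average the posteriors, renormalize, and the argmax gives the class."""
    scores = np.mean(posteriors, axis=0)
    return scores / scores.sum()

# Hypothetical posteriors over (pedestrian, car, other) from three classifiers.
gmm_laser = [0.5, 0.3, 0.2]   # GMM on laser features
voting    = [0.4, 0.4, 0.2]   # majority-voting scheme
adaboost  = [0.7, 0.2, 0.1]   # AdaBoost on image features

combined = sum_rule([gmm_laser, voting, adaboost])
```

The sum rule is known to be tolerant of individual classifier errors, since one poor posterior is averaged against the others rather than multiplied in.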
Conference Paper
Full-text available
This paper presents a novel real-time pedestrian detection system utilizing a LIDAR-based object detector and a convolutional neural network (CNN)-based image classifier. Our method achieves over 10 frames/second processing speed by constraining the search space using the range information from the LIDAR. The image region candidates detected by the LIDAR are confirmed for the presence of pedestrians by a convolutional neural network classifier. Our CNN classifier achieves high accuracy at a low computational cost thanks to its ability to automatically learn a small number of highly discriminating features. The focus of this paper is the evaluation of the effect of region-of-interest (ROI) detection on system accuracy and processing speed. The evaluation results indicate that the LIDAR-based ROI detector can reduce the number of false positives by a factor of 2 and the processing time by a factor of 4. The single-frame detection accuracy of the system is above 90% at 1 false positive per second.
Conference Paper
Full-text available
Pedestrian safety is a primary traffic issue in urban environments. This article deals with the detection of pedestrians by means of a laser sensor. This sensor, placed on the front of a vehicle, collects distance information distributed across 4 laser planes. Like a vehicle, a pedestrian constitutes an obstacle in the vehicle's environment which must be detected, located, then identified and tracked if necessary. In order to improve the robustness of pedestrian detection with a single laser sensor, we propose here a detection system based on the fusion of the information located in the 4 laser planes. In this paper, we propose a Parzen kernel method that first isolates the "pedestrian objects" in each plane and then carries out a decentralized fusion across the 4 laser planes. Finally, to improve our pedestrian detection algorithm, we use an MCMC-based particle filter method that more closely follows the random dynamics of pedestrian movement. Many experimental results validate our pedestrian detection algorithm and show its relevance compared with a method using only a single-row laser-range scanner.
Conference Paper
Full-text available
This article deals with the detection of pedestrians by means of a laser sensor. This sensor, placed on the front of a vehicle, collects distance information distributed across 4 horizontal planes. Like a vehicle, a pedestrian constitutes an obstacle in the vehicle's environment which must be detected, located, then identified and tracked if necessary. In order to improve the robustness of pedestrian detection with a single laser sensor, this article proposes a detection system based on the fusion of the information located in the 4 horizontal laser planes. A "Parzen window" kernel method is described that allows the centralized fusion of the different planes. Many experimental results validate our pedestrian detection algorithm and show its relevance compared with a method using only a single-row laser-range scanner.
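The Parzen-window idea behind this centralized fusion can be sketched in one dimension: impacts from the 4 planes are pooled and a Gaussian kernel density is evaluated, so that modes of the density indicate candidate pedestrian positions while isolated impacts stay low. The bandwidth and impact coordinates below are assumptions for the example.

```python
import math

def parzen_density(x, samples, h):
    """Gaussian Parzen-window density estimate at x from 1-D samples
    with bandwidth h."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / (
        n * h * math.sqrt(2 * math.pi))

# Hypothetical lateral impact positions (metres) from the 4 laser planes.
planes = [[1.9, 2.1], [2.0], [2.05, 5.0], [1.95]]
pooled = [p for plane in planes for p in plane]  # centralized: pool all planes

density_at_cluster = parzen_density(2.0, pooled, h=0.3)  # impacts agree here
density_at_outlier = parzen_density(5.0, pooled, h=0.3)  # single stray impact
```

Pooling before estimating the density is what makes the fusion centralized; a decentralized variant would estimate one density per plane and combine the results afterwards.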
Conference Paper
Full-text available
Pedestrian safety is a primary traffic issue in urban environments. The use of modern sensing technologies to improve pedestrian safety has remained an active research topic for years, and a variety of sensing technologies have been developed for pedestrian detection. Applying pedestrian detection on transit vehicle platforms is desirable and feasible in the near future. In this paper, potential sensing technologies are first reviewed for their advantages and limitations. Several sensors are then chosen for further experimental testing and evaluation. A reliable sensing system will require a combination of multiple sensors to handle near-range detection in stationary conditions and longer-range detection in moving conditions. A vehicle-infrastructure integrated approach is suggested for pedestrian detection in transit bus applications.
Conference Paper
This paper gives a survey of recent research on pedestrian collision avoidance systems. Collision avoidance requires not only the detection of pedestrians, but also collision prediction using pedestrian dynamics and behavior analysis. The paper reviews various approaches based on cues such as shape, motion, and stereo, used for detecting pedestrians with visible as well as non-visible light sensors. This is followed by a review of research on probabilistic modeling of pedestrian behavior for predicting collisions between pedestrian and vehicle. The literature review is also condensed in tabular form for quick reference.
Conference Paper
We present a real-time solution for pedestrian detection in images. The key point of such a method is the definition of a generic model able to describe the huge variability of pedestrians. We propose a learning-based approach using a training set composed of positive and negative samples. A simple description of each candidate image provides a large feature vector from which weak classifiers can be built. We select a subset of relevant weak classifiers using a classic AdaBoost algorithm. The resulting subset is then used as binary vectors in a kernel-based machine learning classifier (like SVM, RVM, ...). The major contribution of the paper is the original association of an AdaBoost algorithm to select the relevant weak classifiers, followed by an SVM-like classifier whose input data are given by the selected weak classifiers. Kernel-based machine learning provides a non-linear separator in the weak classifier space, while standard AdaBoost gives a linear one. The performance of this method is compared to state-of-the-art methods, and a real-time application with a monocular camera embedded in a moving vehicle is also presented to validate the approach in a real context.
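The weak-classifier selection stage can be sketched with a simplified discrete AdaBoost over decision stumps on a toy dataset; the data, the stump form, and the two-round setting are assumptions, and the final kernel (SVM-like) stage is only indicated by the binary response matrix it would receive as input.

```python
import numpy as np

def stump_predict(X, feat, thr):
    """Axis-aligned decision stump: +1 if feature exceeds the threshold."""
    return np.where(X[:, feat] > thr, 1, -1)

def adaboost_select(X, y, n_rounds):
    """Select weak classifiers (stumps) with discrete AdaBoost:
    each round picks the stump with the lowest weighted error,
    then reweights the samples to emphasise the mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    selected = []
    for _ in range(n_rounds):
        best = None
        for feat in range(X.shape[1]):
            for thr in np.unique(X[:, feat]):
                err = w[stump_predict(X, feat, thr) != y].sum()
                if best is None or err < best[0]:
                    best = (err, feat, thr)
        err, feat, thr = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * stump_predict(X, feat, thr))
        w /= w.sum()
        selected.append((feat, thr, alpha))
    return selected

# Toy feature vectors (rows: candidate windows) with labels +1 / -1.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])
y = np.array([1, 1, -1, -1])

stumps = adaboost_select(X, y, n_rounds=2)
# Binary weak-classifier responses that would feed the kernel machine.
responses = np.column_stack([stump_predict(X, f, t) for f, t, _ in stumps])
```

In the paper's scheme, `responses` (one +/-1 coordinate per selected weak classifier) replaces the raw feature vector as the input space of the SVM-like classifier.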
Conference Paper
We propose a novel system for tracking multiple pedestrians in a crowded scene by exploiting single-row laser range scanners that measure distances to surrounding objects. A walking model is built to describe the periodicity of the movement of the feet in the spatial-temporal domain, and a mean-shift clustering technique combined with spatial-temporal correlation analysis is applied to detect pedestrians. Based on the walking model, a particle filter is employed to track multiple pedestrians. Compared with camera-based methods, our system provides a novel technique to track multiple pedestrians in a relatively large area. The experiments, in which over 300 pedestrians were tracked in 5 minutes, show the validity of the proposed system.
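The mean-shift step can be sketched in one dimension with a flat kernel: each point is repeatedly shifted to the mean of its neighbours, and points converging to the same mode form one cluster (e.g. the laser impacts of one foot). The bandwidth and coordinates below are illustrative.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iters=50):
    """Shift each point to the mean of its neighbours within the bandwidth
    (flat kernel); after convergence, points sharing a mode form a cluster."""
    modes = points.astype(float).copy()
    for _ in range(n_iters):
        for i, m in enumerate(modes):
            neighbours = points[np.abs(points - m) <= bandwidth]
            modes[i] = neighbours.mean()
    return modes

# Hypothetical 1-D lateral positions (metres) of laser impacts in one scan.
points = np.array([1.0, 1.1, 1.05, 4.0, 4.1])
modes = mean_shift(points, bandwidth=0.5)
clusters = np.unique(np.round(modes, 1))  # two well-separated modes remain
```

Unlike k-means, mean shift needs no preset cluster count, which suits scenes where the number of pedestrians is unknown.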
Conference Paper
In road environments, road obstacle tracking can extract important information for driving safety. Indeed, estimating kinematic characteristics (relative position, relative speed, ...) provides a clearer understanding of the road scene, and this estimate is one of the important inputs of driver assistance systems. However, a single sensor generally cannot quickly detect all the potentially dangerous obstacles in all directions and under all atmospheric conditions. Moreover, the degree of obstacle recognition differs according to the sensor used. Using multiple sensors makes it possible to face these various problems: fused information represents entities in greater detail and with less uncertainty than a single source. A higher-level system has thus been developed for the robust management of all tracks and measurements coming from the various sensors. This system, applied to the combination of radar and lidar measurements, provides the important characteristics of the obstacles present in front of our experimental vehicle (VELAC: LASMEA's experimental vehicle for driving assistance). The state estimate is based on various Bayesian methods (extended Kalman filter and particle filter). Here we use the fusion of two obstacle tracks delivered by two independent systems: track-to-track fusion. These two systems provide estimates characterizing the positions and relative speeds of obstacles. The fusion estimate is based on an extended Kalman filter (EKF) or particle filters. A comparison of these two methods is presented in this article.
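A minimal sketch of track-to-track fusion for independent tracks: two state estimates are merged by inverse-covariance weighting, the static counterpart of a Kalman update. The radar/lidar states and covariances below are invented numbers, not values from the paper.

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two independent track estimates by inverse-covariance weighting:
    P = (P1^-1 + P2^-1)^-1,  x = P (P1^-1 x1 + P2^-1 x2)."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)
    x = P @ (P1i @ x1 + P2i @ x2)
    return x, P

# Hypothetical radar and lidar tracks of one obstacle: [range m, range-rate m/s].
x_radar, P_radar = np.array([20.0, -1.0]), np.diag([4.0, 0.04])  # good on speed
x_lidar, P_lidar = np.array([19.0, -1.2]), np.diag([1.0, 0.16])  # good on range

x_fused, P_fused = fuse_tracks(x_radar, P_radar, x_lidar, P_lidar)
```

The fused estimate is pulled towards whichever sensor is more certain in each component, and its covariance is smaller than either input's, which is the benefit the article attributes to combining radar and lidar.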