Int. J. Biometrics, Vol. 2, No. 1, 2010
Multisensor biometric evidence fusion of face
and palmprint for person authentication using
Particle Swarm Optimisation (PSO)
R. Raghavendra*
Department of Studies in Computer Science,
University of Mysore,
Mysore 570006, India
E-mail: raghu07.mys@gmail.com
*Corresponding author
Ashok Rao
Department of E&C,
Channabasaveshwara Institute of Technology,
Gubbi, Tumkur 572216, India
E-mail: ashokrao@gmail.com
G. Hemantha Kumar
Department of Studies in Computer Science,
University of Mysore,
Mysore 570006, India
E-mail: hemanthakumar@compsci.uni-mysore.ac.in
Abstract: This paper presents a novel biometric sensor fusion technique
for face and palmprint images using Particle Swarm Optimisation (PSO).
The proposed method can be summarised in the following steps: we first
decompose the face and palmprint images obtained from different sensors
using the wavelet transform, and we then employ PSO to select the most
informative wavelet coefficients from the face and palmprint to produce a
new fused image. Kernel Direct Discriminant Analysis (KDDA) is then
employed for feature extraction, and the accept/reject decision is made
using the Nearest Neighbour Classifier (NNC). Extensive experiments
carried out on a virtual multimodal biometric database of 250 users
indicate the efficacy of the proposed method.
Keywords: multimodal biometrics; image fusion; match score level fusion;
face; palmprint; PSO; particle swarm optimisation.
Reference to this paper should be made as follows: Raghavendra, R.,
Rao, A. and Hemantha Kumar, G. (2010) ‘Multisensor biometric evidence
fusion of face and palmprint for person authentication using Particle
Swarm Optimisation (PSO)’, Int. J. Biometrics, Vol. 2, No. 1, pp.19–33.
Biographical notes: R. Raghavendra received the BE and MTech Degrees in
Electronics and Communication Engineering from the University of Mysore
and Visvesvaraya Technological University, respectively. Since 2007, he has
been a candidate for the PhD Degree in Computer Science and Technology at the
University of Mysore. His current research interests include DSP, pattern
recognition, feature selection, optimisation techniques and finite mixture
models. He is an author of more than 15 research papers.

Copyright © 2010 Inderscience Enterprises Ltd.
Ashok Rao received his BE, ME and PhD in 1982, 1985 and 1991, all
in EE, from the University of Mysore, IISc Bangalore and IIT Bombay,
respectively. His current areas of research include biometrics, DSP and
image processing, biomedical signal processing, applied linear algebra,
social computing, renewable energy and engineering education. From
1999 to 2005, he was Head of the Network Project, CEDT, IISc. Since August
2009, he has been working as a Professor at CIT, Gubbi, Tumkur. He has
authored over 75 research publications across these areas.
G. Hemantha Kumar received his BSc, MSc and PhD from the University of
Mysore. He is working as a Professor in the Department of Studies
in Computer Science, University of Mysore, Mysore. He has published
more than 200 papers in journals, edited books and refereed conferences.
His current research interests include numerical techniques, digital image
processing, pattern recognition and multimodal biometrics.
1 Introduction
Identification of persons with high accuracy is becoming critical in a number of
security applications in our society. Biometric-based person verification has
attracted increasing attention in the design of security systems. Most biometric
systems presently in use rely on a single biometric trait to establish identity;
these are called unimodal biometric systems. Unimodal biometric systems suffer
from drawbacks such as noise in the sensed data, lack of universality, spoof
attacks and so on (Ross et al., 2006). Some of these limitations are alleviated
by combining evidence from different sources of biometric information. A system
that combines more than one biometric trait is termed a multimodal biometric
system; it improves the matching accuracy of the system while increasing
population coverage and deterring spoof attacks. The heart of a multimodal
biometric system lies in fusing the information from multiple biometrics in
order to achieve better recognition performance. Fusion can be performed at
four different levels of information: sensor level, feature level, match score
level and decision level. Match score level fusion is generally preferred
because commercially available biometric devices do not provide access to
features at all levels, and match scores are easy to fuse.

A multimodal biometric system that fuses information at the sensor level
(i.e., image level) is expected to produce more accurate results than a system
that integrates information at a later stage (feature level, match score level
or decision level) because richer and more relevant information is available.
Face and palmprint are among the most widely used biometric traits, as these
two modalities offer a number of useful features: face sensing is noninvasive
and user friendly, while palmprints are non-intrusive and can easily be
acquired using low-resolution sensors.
The majority of the work reported on multimodal biometric fusion using face
and palmprint is confined to match score and feature level fusion (Feng et al.,
2004; Yao et al., 2007; Jing et al., 2007; Yan and Zhang, 2008). Recently,
Kisku et al. (2009) proposed a multimodal biometric system using palmprint and
face with sensor (image) level fusion. There, the face and palmprint images are
decomposed using the Haar wavelet into a set of low-resolution images with
wavelet coefficients at each level. The wavelet coefficients of face and
palmprint are then fused by taking the average of each coefficient in every
subband, and the inverse wavelet transform converts the fused coefficients back
to the original resolution. The Scale Invariant Feature Transform (SIFT) is
then used to extract features from this fused image. Finally, matching between
a pair of fused images is accomplished using structured graph matching. This
method exhibits some limitations:

1 Feature registration is needed before performing the fusion.

2 It is valid only for frontal views of the face and palmprint samples, i.e.,
the method cannot handle face and palmprint traits with variations such as
pose and illumination that are commonly encountered in real-time biometric
applications.
In our present work, we propose a novel approach to fuse face and palmprint
at the image level using PSO. We first decompose the face and palmprint images
using the Haar wavelet. Then, we use PSO to choose the most informative wavelet
coefficients from the face and palmprint, i.e., those that contribute to
accurate verification of individuals. Extensive experiments conducted on a
virtual database of 250 users indicate the efficacy of the proposed method.
We also compare the proposed method with existing state-of-the-art methods such
as Genetic Algorithm and wavelet-based image fusion approaches. Further, we
compare the performance of the proposed image fusion approach with a robust
fusion approach based on match scores using the weighted SUM rule, and we
present the statistical variation of the results with 90% confidence intervals.

The rest of the paper is organised as follows: Section 2 describes the proposed
method of image fusion using PSO. Section 3 discusses the selection of the
different parameters used in PSO. The use of the Genetic Algorithm for fusing
face and palmprint at the image level is presented in Section 4. In Section 5,
wavelet-based image fusion of face and palmprint is discussed. Section 6
presents the match score level fusion of face and palmprint. The experimental
setup and procedure are described in Section 7. Section 8 discusses the results
obtained using the proposed method. Section 9 draws the conclusion.
2 Proposed method
This section describes the proposed approach for combining the information from
face and palmprint images using PSO. Figure 1 shows the block diagram of the
proposed approach. First, we compute a multi-resolution representation of each
face and palmprint image using the Haar wavelet (Chui, 1992), which is the
simplest wavelet to implement and is computationally inexpensive. Furthermore,
since the Haar basis is orthogonal, the transformation provides a non-redundant
representation of the input images.
The Haar mother wavelet is defined as:

ψ(x) = 1 for 0 ≤ x < 1/2,
ψ(x) = −1 for 1/2 ≤ x < 1,
ψ(x) = 0 otherwise.
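As an illustration, the mother wavelet above translates directly into code (a minimal sketch of the continuous definition; the actual decomposition in the paper uses the discrete 2-D Haar filter bank):

```python
def haar_psi(x: float) -> float:
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0
```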
To fuse the face and palmprint images, we select a subset of the wavelet
coefficients from the face image and the rest from the palmprint image. We
employ PSO to decide which wavelet coefficients to select from the face and
which from the palmprint. Figure 2 illustrates the main idea of the approach,
which is explained further in Section 3. Verification is carried out using
KDDA, as it is known for good performance and high dimensionality reduction.
Finally, the accept/reject decision is made using the NNC.
Figure 1 Block diagram of proposed method (see online version for colours)
Figure 2 Illustration of proposed method (see online version for colours)
3 Image fusion using PSO
PSO is a stochastic, population-based optimisation technique aimed at
finding a solution to an optimisation problem in a search space. The PSO
algorithm was first described by Kennedy and Eberhart (1995). The main idea of
PSO is to simulate the social behaviour of a flock of birds as an evolving
system: each candidate solution is modelled by an individual bird, i.e., a
particle in the search space. Each particle adjusts its flight using its
individual memory and the knowledge gained by its neighbours to find the best
solution.
3.1 Principle of PSO
The main objective of PSO is to optimise a given fitness function.
PSO is initialised with a population of particles distributed randomly over the
search space, each of which is evaluated using the fitness function. Each
particle is treated as a point in N-dimensional space; the ith particle is
represented as X_i = {x_i1, x_i2, ..., x_iN}. At every iteration, each particle
is updated using two best values, called pbest and gbest. pbest is the position
with the best fitness value that particle i has obtained so far, represented as
pbest_i = {pbest_i1, pbest_i2, ..., pbest_iN} with fitness value f(pbest_i).
gbest is the best position found among all particles in the swarm. The rate of
position change (velocity) of particle i is represented as
V_i = {v_i1, v_i2, ..., v_iN}. The particle velocities are updated according to
the following equations (Kennedy and Eberhart, 1995):
V_id^new = w × V_id^old + C1 × rand1() × (pbest_id − x_id)
         + C2 × rand2() × (gbest_d − x_id)    (1)

x_id = x_id + V_id^new    (2)
where d = 1, 2, ..., N and w is the inertia weight. A suitable choice of
inertia weight provides a balance between global and local exploration and
results in fewer iterations, on average, to find a near-optimal solution.
C1 and C2 are acceleration constants used to pull each particle towards pbest
and gbest: low values allow particles to roam far from the target regions,
while high values result in abrupt movements towards, or past, the target
regions. rand1() and rand2() are random numbers drawn uniformly from (0, 1).
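The update rules in equations (1) and (2) can be sketched for a single particle as follows (a minimal illustration with hypothetical helper names; the paper does not give implementation details):

```python
import random

def pso_step(x, v, pbest, gbest, w=1.0, c1=0.7, c2=1.2):
    """One velocity/position update for a single particle,
    following equations (1) and (2)."""
    new_v = []
    new_x = []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])   # pull towards pbest
              + c2 * random.random() * (gbest[d] - x[d]))  # pull towards gbest
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```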
3.2 Binary PSO
The original PSO was introduced for continuous-valued populations but was later
extended by Kennedy and Eberhart (1997) to discrete-valued populations. In
binary PSO, the particle positions are represented by binary values (0 or 1)
and are updated according to the following equations:
S(V_id^new) = 1 / (1 + e^(−V_id^new))    (3)

if (rand < S(V_id^new)) then x_id = 1; else x_id = 0    (4)

where V_id^new denotes the particle velocity obtained from equation (1),
S(V_id^new) is a sigmoid transformation, and rand is a random number drawn from
the uniform distribution on (0, 1). If S(V_id^new) is larger than the random
number, the position value is set to 1; otherwise it is set to 0.
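Equations (3) and (4) amount to squashing the velocity through a sigmoid and sampling a bit, which can be sketched as:

```python
import math
import random

def binary_position(v_new: float) -> int:
    """Map a velocity to a binary position, following equations (3) and (4)."""
    s = 1.0 / (1.0 + math.exp(-v_new))   # sigmoid transformation of the velocity
    return 1 if random.random() < s else 0
```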
In order to apply binary PSO to the image fusion of face and palmprint, we need
to adapt the general binary PSO concept to this specific application. This is
the objective of the following subsections.
3.2.1 Representation of position
The initial swarm is created such that the particles are distributed randomly
over the search space. Since we use binary PSO, each particle position is
represented as a binary bit string of length N, where N is the total number of
wavelet coefficients in the image decomposition (of either the face or the
palmprint). Each bit of the particle is associated with a wavelet coefficient
at a specific location, and the value of the bit determines whether the
corresponding wavelet coefficient is selected from the face (e.g., 0) or from
the palmprint (e.g., 1). Each particle is updated according to equations (1),
(3) and (4).
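Under the bit convention just described (0 selects the face coefficient, 1 the palmprint coefficient), a particle turns two coefficient vectors into one fused vector; a minimal sketch with hypothetical names:

```python
def fuse_with_particle(face_coeffs, palm_coeffs, particle):
    """Build the fused coefficient vector from a binary particle:
    bit 0 selects the face coefficient, bit 1 the palmprint one."""
    return [p if bit == 1 else f
            for f, p, bit in zip(face_coeffs, palm_coeffs, particle)]
```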
3.2.2 Fitness function
The optimal image fusion relies on an appropriate formulation of the fitness
function. In biometric verification, it is difficult to identify a single
function that characterises the matching performance across a range of False
Acceptance Rate (FAR) and False Rejection Rate (FRR) values, i.e., across all
matching thresholds (Ross et al., 2006). Thus, in our experiments the following
steps are followed:
• First, obtain the fused image using the wavelet coefficients selected from
face and palmprint by PSO (see Figure 2).

• We project this fused image onto a lower-dimensional space using KDDA.

• We then compute the distance (using NNC) between the reference and testing
samples to obtain the match scores.

• Then, compute the FAR and Genuine Acceptance Rate (GAR) by setting
thresholds at different points.

• Finally, to optimise the performance gain across a wide range of thresholds,
we define the objective function as the average of 12 GAR values corresponding
to 12 different FAR values (90%, 70%, 50%, 30%, 10%, 0.8%, 0.6%, 0.4%, 0.2%,
0.09%, 0.05%, 0.01%). The main objective of the verification fitness function
is thus to maximise this average GAR.
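The last step can be sketched as follows, assuming distance-based match scores (smaller means a better match) and a simple empirical thresholding of the impostor score distribution; the function name and thresholding rule are illustrative, not the paper's exact procedure:

```python
def average_gar(genuine, impostor, target_fars):
    """Hypothetical fitness: mean GAR over a list of target FAR points.
    Scores are distances, so a score below the threshold is accepted."""
    imp = sorted(impostor)
    gars = []
    for far in target_fars:
        # threshold that accepts roughly `far` of the impostor scores
        k = max(0, int(far * len(imp)) - 1)
        threshold = imp[k]
        gar = sum(g <= threshold for g in genuine) / len(genuine)
        gars.append(gar)
    return sum(gars) / len(gars)
```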
3.2.3 Velocity limitation Vmax

In the binary version of PSO, the value of Vmax limits the probability that bit
x_id takes the value 0 or 1, and therefore a high Vmax value in binary PSO
decreases the range explored by a particle (Kennedy and Eberhart, 1997). In our
experiments, we tried different values of Vmax and finally selected Vmax = 6,
as it allows the particles to reach near-optimal solutions.
3.2.4 Inertia weight and acceleration constant
The inertia weight is an important parameter as it provides the particles with
a degree of memory. It has been found experimentally that an inertia weight w
in the range [0.8, 1.2] yields better performance (Kennedy and Eberhart, 1997).
Hence, in the present work we initially set w to 1.2 and then decrease it to
zero over subsequent iterations (we experimentally fix the number of iterations
at 50, as further increases do not improve the optimisation results). This
scheme of decreasing inertia weight has been found to be better than a fixed
one (Kennedy and Eberhart, 1997), as it allows the swarm to reach an optimal
solution. Even though the acceleration constants C1 and C2 are not critical to
the convergence of PSO, suitably chosen values may lead to faster convergence
of the algorithm. In our experiments, we varied C1 and C2 from 0 to 2 and
finally chose C1 = 0.7 and C2 = 1.2, as these yield the best convergence.
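One plausible reading of this schedule is a linear decay of w from 1.2 to zero over the 50 iterations; a sketch (the exact decay law is not stated in the paper):

```python
def inertia_weight(iteration: int, total_iterations: int = 50,
                   w0: float = 1.2) -> float:
    """Linearly decay the inertia weight from w0 down to zero over the run
    (one plausible reading of the schedule described above)."""
    return w0 * (1.0 - iteration / total_iterations)
```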
3.2.5 Population size
The population size, i.e., the number of particles in the swarm, plays an
important role as it influences not only the performance but also the
computational cost. In the present work, we varied the population size from 10
to 30 in steps of 5 for each fitness function that we used and finally fixed it
at 20, as further increases did not provide significant improvement in
performance.
4 Image fusion using Binary Genetic Algorithm
The Binary Genetic Algorithm is an optimisation and search technique inspired
by the principles of genetics and natural selection (Haupt and Haupt, 2004).
It allows a population composed of many individuals to evolve, under a
specified selection rule, towards a state that maximises the fitness function.
Genetic Algorithms are designed to search large, non-linear, poorly understood
search spaces effectively. Although no work has been reported on using the
Binary Genetic Algorithm to fuse face and palmprint images, it is widely used
in biometric applications involving the fusion of visible and infrared face
images (Bebis et al., 2000, 2006). Thus, in this work we apply the Binary
Genetic Algorithm to the image-level fusion of palmprint and face and compare
its performance with PSO and match score level fusion. The following
subsections give the details of the parameters used in our work to select the
most informative wavelet coefficients from face and palmprint using the
Genetic Algorithm.
4.1 Initial population
The initial population is generated randomly over the search space and each
individual is represented as a string of 1s and 0s, where 1 corresponds to the
position of a face feature and 0 to the position of a palmprint feature. In our
work, we experimentally fix the population size at 200 and run for 150
generations (iterations).
4.2 Fitness function
We use the same fitness function as for PSO (see Section 3.2.2) to ensure
consistency when comparing performance.
4.3 Selection
We use cross-generational selection (Bebis et al., 2000): given a population of
size N, the offspring double the size of the population and we select the best
N individuals from the combined parent-offspring population (Bebis et al., 2000).
4.4 Crossover
In our work, we use uniform crossover, as it has been shown to give better
performance than other crossover methods (Haupt and Haupt, 2004), and because
we do not know the relationship between the different wavelet coefficients of
face and palmprint. We experimentally fix the crossover probability at 0.92.
4.5 Mutation
Mutation is a low-probability operator that flips the value of a randomly
chosen bit. In our work, we experimentally fix the mutation probability at 0.01.
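The two variation operators just described can be sketched as follows (illustrative helper names; the paper's implementation is not given):

```python
import random

def uniform_crossover(parent1, parent2):
    """Uniform crossover: each bit is taken from either parent
    with equal probability."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent1, parent2)]

def mutate(chromosome, pm=0.01):
    """Flip each bit independently with probability pm."""
    return [1 - bit if random.random() < pm else bit
            for bit in chromosome]
```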
5 Image fusion using wavelet
Figure 3 shows the state of the art in fusing face and palmprint images using
the wavelet transform. Here, we first decompose the face and palmprint images
independently using the Haar wavelet. Then, we use the simple average rule
(Stathaki, 2008) to fuse the wavelet coefficients of the face and palmprint
images, and take the inverse wavelet transform to obtain the final fused image.
For consistency with the proposed approach, we use the KDDA scheme to represent
the fused image in a lower-dimensional space, and the accept/reject decision is
made using the NNC.
Figure 3 Block diagram of image fusion using wavelets (see online version for colours)
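The average rule amounts to element-wise averaging of corresponding wavelet coefficients; a minimal sketch on flattened coefficient vectors:

```python
def average_rule_fuse(face_coeffs, palm_coeffs):
    """Average-rule fusion of corresponding wavelet coefficients."""
    return [(f + p) / 2.0 for f, p in zip(face_coeffs, palm_coeffs)]
```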
6 Match score level fusion
Match score level fusion is widely used in multimodal biometrics, as the scores
generated by different biometric matchers are easy to access and to combine
(Ross et al., 2006).
Figure 4 shows the block diagram of the match score level fusion procedure
used in our present work. To obtain the match scores for the individual
biometrics (face and palmprint), we use the combination of wavelet and KDDA
schemes for feature extraction, and the match scores are obtained using NNC.
Several algorithms can be used to perform score fusion, ranging from simple
addition or product rules to more complicated ones involving SVM classification
or score density estimation. However, recent works (Ross et al., 2006;
Nandakumar et al., 2008) show that all these methods give performance roughly
equivalent to the weighted SUM rule, on condition that the weights reflect the
relative difference in performance of the individual systems. This is therefore
the technique that we are going to apply
in this work. The weighted SUM rule can be written as:

S_fuse = W1 × S_face + W2 × S_palmprint    (5)

where W1 and W2 denote the weights associated with the face and palmprint
respectively, such that W1 + W2 = 1; S_fuse denotes the fused score, S_face the
score of the face system and S_palmprint the score of the palmprint system.
We calculate the weights as described in Raghavendra et al. (2009).
Figure 4 Block diagram of match score level fusion (see online version for colours)
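Equation (5) can be sketched directly; `weighted_sum_fuse` is an illustrative name, and the actual weights come from Raghavendra et al. (2009):

```python
def weighted_sum_fuse(s_face, s_palm, w1):
    """Weighted SUM rule of equation (5); W2 = 1 - W1."""
    w2 = 1.0 - w1
    return w1 * s_face + w2 * s_palm
```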
7 Experimental setup
This section describes the experimental setup that we have built in order to
evaluate the proposed fusion schemes. Because of the lack of a real multimodal
database of face and palmprint data, experiments are carried out on a database
of virtual persons, using face and palmprint data coming from two different
databases. This procedure is valid because, for one person, face and palmprint
can be considered two independent modalities (Ross et al., 2006). For the face
modality, we chose the FRGC face database (Phillips et al., 2005), as it is a
large database widely used for benchmarking. From this database we chose 250
different users from two sessions. The first session consists of 6 samples per
user taken from data collected during Fall 2003, and the second session
consists of 6 samples per user taken from data collected during Spring 2004.
Of these 6 samples, the first 4 were taken under controlled conditions and the
remaining 2 under uncontrolled conditions. Figure 5 shows the sample images for
one subject.
For the palmprint modality, we selected a subset of 250 different palmprints
from the PolyU database (Jing et al., 2007); each user possesses 12 samples,
such that 6 samples were taken in the first session and 6 in the second. The
average time interval between the two sessions is two months. Figure 6 shows
sample palmprint images of one subject in one session. In building our
multimodal biometric database of face and palmprint, each virtual person is
associated with 12 samples of face and palmprint, produced by randomly pairing
the face and palmprint samples of two persons from the respective databases.
The resulting virtual multimodal biometric database thus consists of 250 users
with 12 samples each.
7.1 Experimental procedure
This section describes in detail the experimental procedure employed in our
work. For learning the projection spaces, we use a subset of 100 users called
LDB, such that each user has 6 samples (selected randomly out of 12). To
validate the performance of all the algorithms, we divide the whole database of
250 users into two independent sets, Set-I and Set-II. Set-I consists of 200
users and Set-II of 50 users. Set-II is used as the development set to fix the
parameters of PSO (Vmax, C1, C2, population size) and also to set the
parameters for match score level fusion. Set-I is used to test the algorithms;
it is divided into two equal partitions providing 6 reference samples and
6 testing samples for each of the 200 persons. The reference/testing partition
is repeated m times (where m = 10) using holdout cross-validation, with no
overlap between the two subsets. Thus, in each of the 10 trials we have 1200
(= 200 × 6) reference samples and 1200 (= 200 × 6) testing samples, and hence
we have 1500 genuine matching scores and 238800 (= 200 × 199 × 6) impostor
matching scores, since for each user all other users are considered impostors.
This scheme is valid for verification performance evaluation.
Figure 5 Images of one subject from FRGC face database: (a) controlled images and
(b) uncontrolled images, from Phillips et al. (2005) (see online version for colours)
Figure 6 Images of one subject from PolyU palmprint database
Because of the small size of the database, we repeat the partition of the whole
database of 250 users into Set-I and Set-II three times. We therefore obtain
three different Set-I/Set-II pairs, resulting in a total of 30 different runs
through the holdout cross-validation. Finally, results are presented as the
mean over all runs, and we also report the statistical variation of the results
with a 90% parametric confidence interval (Bolle et al., 2004), which gives a
better estimate of the standard deviation than the one obtainable from the
cross-validation alone.
8 Results and discussion
This section presents a detailed analysis of the results. Figure 7 compares the
performance of the proposed method with state-of-the-art methods, namely image
fusion using wavelets and image fusion using the Genetic Algorithm. Figure 7
also compares the image fusion algorithms with the match score level fusion
scheme using the weighted SUM rule and with the individual biometrics. Table 1
shows the comparative performance of the same systems in terms of mean GAR at
FAR = 0.01%. It can be observed from Figure 7 (and also from Table 1) that the
proposed method of fusing face and palmprint images shows the best performance,
with GAR = 94.26% at FAR = 0.01%. We can also observe from Figure 7 that the
proposed method achieves an improvement of roughly 10% over
Figure 7 ROC curves of the different fusion schemes (see online version for colours)
Table 1 Comparative performance of the different image fusion schemes (mean GAR
at FAR = 0.01%, with 90% confidence interval)

Method | Mean GAR at 0.01% FAR (%)
Face alone | 64.86 [62.59; 67.12]
Palmprint alone | 72.46 [70.34; 74.57]
Match score fusion | 82.80 [80.61; 84.18]
Image fusion-WL | 77.28 [76.30; 80.21]
Image fusion-GA | 88.40 [86.88; 89.91]
Proposed image fusion-PSO | 94.26 [93.15; 95.36]
match score level fusion and about 4% over the Genetic Algorithm. It can also
be observed that image fusion based on the wavelet method performs worse than
match score level fusion. The above analysis thus indicates the efficacy of the
proposed image fusion scheme for face and palmprint. We have also attempted to
analyse the PSO solution in order to understand which features are selected
from the face and palmprint. Figure 8 shows the face, palmprint and fused
images obtained using the proposed PSO-based image fusion scheme. To make the
analysis more interesting, we considered example images from different
conditions. The first row in Figure 8 shows the images of a person whose
palmprint is affected by illumination and whose face is occluded by spectacles.
In this case, we can observe in the fused image that the glass part of the
spectacles present in the face image is replaced by palmprint features.
Further, the more informative features of the face, such as the nose and lips,
and of the palmprint, such as the principal lines, are also retained in the
optimised fused image. The second row in Figure 8 shows the fused image
obtained by fusing clean face and palmprint images. Here, the most informative
features of the face (e.g., eyes, eyebrows, nose and lips) and of the palmprint
(principal lines, ridges) are retained by the proposed method.
A similar analysis can be made for the third and fourth rows of Figure 8.
In the third row, we consider the fusion of a clean palmprint image with an
uncontrolled face image. In this example, most of the information present in
the fused image corresponds to the palmprint (e.g., see the cheeks of the fused
image in Figure 8), but the most informative facial information is still
retained. Finally, the fourth row shows the case where a noisy (illuminated)
palmprint and a noisy face (with variations in pose and uncontrolled
illumination) are combined using the proposed PSO scheme. Here too, the most
informative features from the palmprint and face are selected by the proposed
method. From the above analysis, it is clear that the proposed PSO-based image
fusion method selects the best available information from the face and
palmprint, and hence yields a more accurate multimodal biometric system using
face and palmprint.

Figure 8 Fusion using the proposed method: (a) palmprint images; (b) face images
and (c) fused images
9 Conclusion
Here, we have presented a novel and efficient scheme for combining face and
palmprint at the image level using PSO. The face and palmprint images are fused
using PSO, with a fitness function based on the average GAR computed from KDDA
features. To verify the efficacy of the proposed scheme, extensive experiments
were carried out on a built virtual multimodal biometric database of 250 users.
The proposed method was compared with state-of-the-art methods such as image
fusion using the Genetic Algorithm and wavelet schemes. We also reported a
comparative analysis of the image fusion schemes against match score level
fusion, and presented the statistical validation of the results with a 90%
confidence interval. The proposed PSO-based image fusion method shows the best
performance compared with match score level fusion, the state-of-the-art
schemes and the unimodal systems, with GAR = 94.26% at FAR = 0.01%. Further,
detailed analysis of the features selected by the proposed method indicates
that it is highly robust to unwanted noise present in the images, thereby
confirming its validity and efficacy.
Acknowledgements
The authors would like to acknowledge the many helpful suggestions of the two
anonymous reviewers and the Editor of this Journal.
References
Bebis, G., Uthiram, S. and Georgiopoulos, M. (2000) ‘Face detection and verification using
genetic search’, International Journal on Artificial Intelligence Tools, Vol. 6, pp.225–246.
Bebis, G., Gyaourova, A., Singh, S. and Pavlidis, I. (2006) ‘Face recognition by fusing thermal
and visible imagery’, Image and Vision Computing, Vol. 24, pp.727–742.
Bolle, R.M., Ratha, N.K. and Pankanti, S. (2004) ‘Qualitative weight assignment for
multimodal biometric fusion’, Proceedings ICPR-04, UK, pp.103–106.
Chui, C. (1992) An Introduction to Wavelets, Academic Press, UK.
Feng, G., Dong, K., Hu, D. and Zhang, D. (2004) ‘When faces are combined with
palmprints: a novel biometric fusion strategy’, Proceedings of First International
Conference on Biometric Authentication, Springer-Verlag, Berlin, pp.701–707.
Haupt, R.L. and Haupt, S.E. (2004) Practical Genetic Algorithms, 2nd ed.,
Wiley-Interscience, New Jersey.
Jing, X.Y., Yao, Y.F., Yang, J.Y., Li, M. and Zhang, D. (2007) ‘Face and palmprint pixel
level fusion and kernel DCV-RBF classifier for small sample biometric recognition’,
Pattern Recognition, Vol. 40, pp.3209–3224.
Kennedy, J. and Eberhart, R.C. (1995) ‘Particle swarm optimization’, Proceedings of IEEE
International Conference on Neural Networks, Australia, pp.1942–1948.
Kennedy, J. and Eberhart, R.C. (1997) ‘A discrete binary version of the particle swarm
algorithm’, Proceedings of IEEE International Conference on Systems, Man and
Cybernetics, Vancouver, pp.4104–4108.
Kisku, D.R., Singh, J.K., Tistarelli, M. and Gupta, P. (2009) ‘Multisensor biometric
evidence fusion for person authentication using wavelet decomposition and
monotonic-decreasing graph’, Proceedings of 7th International Conference on
Advances in Pattern Recognition (ICAPR’09), India, pp.205–208.
Nandakumar, K., Chen, Y., Dass, S.C. and Jain, A.K. (2008) ‘Likelihood ratio-based
biometric score fusion’, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 30, pp.342–347.
Phillips, P.J., Flynn, P.J., Scruggs, T., Bowyer, K.W., Chang, J., Hoffman, K., Marques, J.,
Min, J. and Worek, W. (2005) ‘Overview of the face recognition grand challenge’,
Proceedings of CVPR05, USA, pp.947–954.
Raghavendra, R., Rao, A. and Hemantha Kumar, G. (2009) ‘Qualitative weight assignment
for multimodal biometric fusion’, Proceedings of 7th International Conference on
Advances in Pattern Recognition (ICAPR’09), India, pp.193–196.
Ross, A., Nandakumar, K. and Jain, A.K. (2006) Handbook of Multibiometrics,
Springer-Verlag, Berlin.
Stathaki, T. (2008) Image Fusion-Algorithms and Applications, Academic Press, UK.
Yan, Y. and Zhang, Y.J. (2008) ‘Multimodal biometrics fusion using correlation filter bank’,
Proceedings of International Conference on Pattern Recognition (ICPR-2008), USA,
pp.1–4.
Yao, Y., Jing, X. and Wong, H. (2007) ‘Face and palmprint feature level fusion for single
sample biometric recognition’, Neurocomputing, Vol. 70, pp.1582–1586.