Int. J. Biometrics, Vol. 2, No. 1, 2010
Multisensor biometric evidence fusion of face
and palmprint for person authentication using
Particle Swarm Optimisation (PSO)
R. Raghavendra*
Department of Studies in Computer Science,
University of Mysore,
Mysore 570006, India
E-mail: raghu07.mys@gmail.com
*Corresponding author
Ashok Rao
Department of E&C,
Channabasaveshwara Institute of Technology,
Gubbi, Tumkur 572216, India
E-mail: ashokrao@gmail.com
G. Hemantha Kumar
Department of Studies in Computer Science,
University of Mysore,
Mysore 570006, India
E-mail: hemanthakumar@compsci.uni-mysore.ac.in
Abstract: This paper presents a novel biometric sensor fusion technique
for face and palmprint images using Particle Swarm Optimisation (PSO).
The proposed method can be visualised in the following steps: we first
decompose the face and palmprint image obtained from different sensors
using wavelet transformation and then, we employ PSO to select most
informative wavelet coefficients from face and palmprint to produce a
new fused image. We then employed Kernel Direct Discriminant Analysis
(KDDA) for feature extraction and the decision about accept/reject
is carried out using Nearest Neighbour Classifier (NNC). Extensive
experiments carried out on a virtual multimodal biometric database of
250 users indicate the efficacy of the proposed method.
Keywords: multimodal biometrics; image fusion; match score level fusion;
face; palmprint; PSO; particle swarm optimisation.
Reference to this paper should be made as follows: Raghavendra, R.,
Rao, A. and Hemantha Kumar, G. (2010) ‘Multisensor biometric evidence
fusion of face and palmprint for person authentication using Particle
Swarm Optimisation (PSO)’, Int. J. Biometrics, Vol. 2, No. 1, pp.19–33.
Biographical notes: R. Raghavendra received the BE and MTech Degrees in
Electronics and Communication Engineering from the University of Mysore
and Visvesvaraya Technological University, respectively. Since 2007, he has
been a candidate for the PhD Degree in Computer Science and Technology at the
University of Mysore. His current research interests include DSP, pattern
recognition, feature selection, optimisation techniques and finite mixture
models. He is an author of more than 15 research papers.
Ashok Rao received his BE, ME and PhD in 1982, 1985 and 1991, all
in EE, from the University of Mysore, IISc Bangalore and IIT Bombay,
respectively. His current areas of research include biometrics, DSP and
image processing, biomedical signal processing, applied linear algebra,
social computing, renewable energy and engineering education. From
1999–2005, he was Head of the Network Project, CEDT, IISc. Since August
2009, he has been working as a Professor at CIT, Gubbi, Tumkur. He has
authored over 75 research publications in all areas of his interest.
G. Hemantha Kumar received BSc, MSc and PhD from University of
Mysore. He is working as a Professor in the Department of Studies
in Computer Science, University of Mysore, Mysore. He has published
more than 200 papers in journals, edited books and refereed conferences.
His current research interests include numerical techniques, digital image
processing, pattern recognition and multimodal biometrics.
1 Introduction
Identifying a person with high accuracy is becoming critical in a number of
security applications in our society, and biometric based person verification
has attracted increasing attention in the design of security systems. Most of
the biometric systems presently in use rely on a single biometric trait to
establish identity (unimodal biometric systems). Unimodal biometric systems
suffer from drawbacks such as noise in the sensed data, lack of universality,
spoof attacks and so on (Ross et al., 2006). Some of these limitations are
alleviated by combining more than one piece of evidence from different
sources of biometric information. A system that combines more than one
biometric trait is termed a multimodal biometric system; it improves the
matching accuracy of a system while increasing population coverage and
deterring spoof attacks. The heart of a multimodal biometric system lies in
fusing the information of multiple biometrics in order to achieve better
recognition performance. Fusion can be performed at four different levels of
information: sensor level, feature level, match score level and decision
level. Match score level fusion is generally preferred because commercially
available biometric devices do not provide access to features at all levels
and because match scores are easy to fuse.
A multimodal biometric system that fuses information at the sensor level
(i.e., image level) is expected to produce more accurate results than a
system that integrates information at a later stage, namely, feature level,
match score level or decision level, because of the availability of richer
and more relevant information. Face and palmprint are among the most widely
used biometric traits, as these two modalities offer a number of useful
features: face sensing is noninvasive and friendly, while palmprints are
non-intrusive and can be easily acquired using low resolution sensors.
The majority of the work reported on multimodal biometric fusion using face
and palmprint is confined to match score and feature level fusion (Feng
et al., 2004; Yao et al., 2007; Jing et al., 2007; Yan and Zhang, 2008).
Recently, Kisku et al. (2009) proposed a multimodal biometric system based on
sensor (image) level fusion of palmprint and face. Here, face and palmprint
images are decomposed using the Haar wavelet into a set of low resolution
images with wavelet coefficients for each level. The wavelet coefficients of
face and palmprint are then fused by taking the average of each coefficient
in every subband, and the inverse wavelet transform is employed to convert
the fused coefficients back to the original resolution. The Scale Invariant
Feature Transform (SIFT) is then used to extract features from this fused
image. Finally, matching between a pair of fused images is accomplished by
incorporating structured graph matching. This method exhibits some limitations
such as:
1 Feature registration is needed before performing the fusion.
2 It is valid only for frontal views of face and palmprint samples, i.e., the
method has difficulty handling face and palmprint traits with variations,
such as pose and illumination, that are commonly encountered in real-time
biometric applications.
In our present work, we propose a novel approach to fuse face and palmprint
at the image level using PSO. Here, we first decompose the face and palmprint
images using the Haar wavelet. Then, we use PSO to choose the most
informative wavelet coefficients, i.e., those that contribute to accurate
verification of individuals, from face and palmprint. Extensive experiments
conducted on a virtual database of 250 users indicate the efficacy of the
proposed method. We also compare the proposed method with existing state of
the art methods such as Genetic Algorithm and wavelet based image fusion
approaches. Further, we compare the performance of the proposed image fusion
approach with a more robust fusion approach based on match scores using the
weighted SUM rule, and we present the statistical variation of the results
with a 90% confidence interval.
The rest of the paper is organised as follows: Section 2 describes the
proposed method of image fusion using PSO. Section 3 discusses the selection
of the different parameters used in PSO. The use of the Genetic Algorithm in
fusing face and palmprint at the image level is presented in Section 4. In
Section 5, wavelet based image fusion of face and palmprint is discussed.
Section 6 presents the match score level fusion of face and palmprint. The
experimental setup and procedure are described in Section 7. Section 8
discusses the results obtained using the proposed method. Section 9 draws the
conclusion.
2 Proposed method
This section describes the proposed approach for combining information from
the face and palmprint images using PSO. Figure 1 shows the block diagram of the
proposed approach. First, we compute a multi-resolution representation of the
face and palmprint images using the Haar wavelet (Chui, 1992), which is the
simplest wavelet to implement and computationally inexpensive. Furthermore,
since the Haar basis is orthogonal, the transformation provides a
non-redundant representation of the input images.
The Haar wavelet is defined as:
ψ(x) = 1 for 0 ≤ x < 1/2, −1 for 1/2 ≤ x < 1, and 0 otherwise.
To fuse the face and palmprint images, we select a subset of wavelet
coefficients from the face image and the rest from the palmprint image. Here,
we employ PSO to decide which wavelet coefficients to select from face and
which from palmprint. Figure 2 illustrates the main idea of the approach,
which is explained further in Section 3. Verification is carried out using
KDDA, as it is known for good performance and high dimensionality reduction.
Finally, the decision about accept/reject is carried out using NNC.
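As an illustration of the decomposition step, a single level of the 2D Haar wavelet transform and its inverse can be sketched as follows. This is our own minimal Python sketch, not the authors' implementation; real systems typically use a wavelet library and several decomposition levels.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform.

    Returns the four subbands (LL, LH, HL, HH) obtained by orthonormal
    averaging/differencing of column pairs and then row pairs.
    `img` must have even height and width.
    """
    img = img.astype(float)
    # Transform along columns: low-pass (average) and high-pass (difference).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Transform along rows of each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: reconstruct the image from the four subbands."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[0::2, :] = (ll + lh) / np.sqrt(2)
    lo[1::2, :] = (ll - lh) / np.sqrt(2)
    hi[0::2, :] = (hl + hh) / np.sqrt(2)
    hi[1::2, :] = (hl - hh) / np.sqrt(2)
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2] = (lo + hi) / np.sqrt(2)
    img[:, 1::2] = (lo - hi) / np.sqrt(2)
    return img
```

Because the Haar basis is orthonormal, `ihaar2d(*haar2d(x))` reproduces `x` exactly, which is what makes the later inverse-transform step of the fusion pipeline lossless.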
Figure 1 Block diagram of proposed method (see online version for colours)
Figure 2 Illustration of proposed method (see online version for colours)
3 Image fusion using PSO
PSO is a stochastic, population based optimisation technique aimed at
finding a solution to an optimisation problem in a search space. The PSO algorithm
was first described by Kennedy and Eberhart (1995). The main idea of PSO is to
simulate the social behaviour of birds flocking to describe an evolving system. Each
candidate solution is therefore modelled by an individual bird that is a particle in a
search space. Each particle adjusts its flight by making use of its individual memory
and of the knowledge gained by its neighbours to find the best solution.
3.1 Principle of PSO
The main objective of PSO is to optimise a given function using a fitness
function. PSO is initialised with a population of particles distributed
randomly over the search space, each evaluated using the fitness function.
Each particle
is treated as a point in the N-dimensional space. The ith particle is
represented as X_i = {x_i1, x_i2, ..., x_iN}. At every iteration, each
particle is updated by two best values called pbest and gbest. pbest is the
best position associated with the best fitness value of particle i obtained
so far and is represented as pbest_i = {pbest_i1, pbest_i2, ..., pbest_iN}
with fitness function f(pbest_i). gbest is the best position among all the
particles in the swarm. The rate of the position change
(velocity) for particle i is represented as V_i = {v_i1, v_i2, ..., v_iN}.
The particle velocities are updated according to the following equations
(Kennedy and Eberhart, 1995):

V_id^new = w × V_id^old + C1 × rand1() × (pbest_id − x_id)
         + C2 × rand2() × (gbest_d − x_id)                    (1)

x_id = x_id + V_id^new                                        (2)

where d = 1, 2, ..., N and w is the inertia weight. A suitable selection of
the inertia weight provides a balance between global and local exploration,
and results in fewer iterations on average to find near optimal results.
C1 and C2 are the acceleration constants used to pull each particle towards
pbest and gbest. Low values of C1 and C2 allow the particle to roam far from
target regions, while high values result in abrupt movements towards or past
the target regions. rand1() and rand2() are random numbers between (0, 1).
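The continuous update of equations (1) and (2) can be sketched as follows, with the inertia weight decreased linearly from 1.2 towards zero as described later in Section 3.2.4. The quadratic test function and the search bounds are our own illustration, not the paper's verification fitness.

```python
import numpy as np

def pso_maximise(fitness, dim, n_particles=20, iters=50,
                 w_start=1.2, c1=0.7, c2=1.2, seed=0):
    """Minimal continuous PSO sketch of equations (1) and (2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))                # particle velocities
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for t in range(iters):
        w = w_start * (1.0 - t / iters)             # decreasing inertia weight
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (1): inertia term plus attraction to pbest and gbest.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2): position update.
        x = x + v
        fit = np.array([fitness(p) for p in x])
        improved = fit > pbest_fit
        pbest[improved] = x[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

# Illustrative run: maximise -(x-1)^2 - (y+2)^2, whose optimum is (1, -2).
best = pso_maximise(lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2), dim=2)
```

The swarm contracts towards the best positions found so far as the inertia weight decays, which is the behaviour the parameter discussion in Section 3.2.4 relies on.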
3.2 Binary PSO
The original PSO was introduced for continuous valued populations but was
later extended by Kennedy and Eberhart (1997) to discrete valued populations.
In the binary PSO, the particles are represented by binary values (0 or 1).
Each particle
velocity is updated according to the following equations:
S(V_id^new) = 1 / (1 + e^(−V_id^new))                         (3)

if (rand < S(V_id^new)) then x_id = 1; else x_id = 0;         (4)

where V_id^new denotes the particle velocity obtained from equation (1), the
function S(V_id^new) is a sigmoid transformation and rand is a random number
drawn from the uniform distribution (0, 1). If S(V_id^new) is larger than the
random number, the position value is set to {1}; otherwise it is set to {0}.
In order to apply the idea of binary PSO for image fusion of face and palmprint,
we need to adapt the general binary PSO concept to this precise application. This
will be the objective of the following subsections.
3.2.1 Representation of position
The initial swarm is created such that the population of the particles is distributed
randomly over the search space. Since we are using binary PSO, the particle
position is represented as a binary bit string of length N, where N is the
total number of wavelet coefficients in the image (face and palmprint)
decomposition. Each bit in the particle is associated with a wavelet
coefficient at a specific location. The value of a bit determines whether the
corresponding wavelet coefficient is selected from the face (e.g., 0) or from
the palmprint (e.g., 1). Each particle velocity is updated according to
equations (1), (3) and (4).
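Concretely, the role of a binary particle as a selection mask over the wavelet coefficients, together with the bit update of equations (3) and (4), can be sketched as follows (our illustrative Python, not the authors' code):

```python
import numpy as np

def fuse_coefficients(face_coeffs, palm_coeffs, particle):
    """One binary particle acts as a selection mask over the N wavelet
    coefficients: a 0 bit takes the face coefficient at that position,
    a 1 bit takes the palmprint coefficient (Section 3.2.1)."""
    mask = np.asarray(particle, dtype=bool)
    return np.where(mask, palm_coeffs, face_coeffs)

def binary_position_update(v_new, rng):
    """Equations (3) and (4): squash the velocity through a sigmoid and
    sample the new bit values against uniform random numbers."""
    s = 1.0 / (1.0 + np.exp(-v_new))                  # equation (3)
    return (rng.random(v_new.shape) < s).astype(int)  # equation (4)
```

Large positive velocities drive bits towards 1 (palmprint) and large negative velocities towards 0 (face), so the swarm gradually settles on a coefficient-by-coefficient choice between the two modalities.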
3.2.2 Fitness function
The optimal image fusion relies on an appropriate formulation of the fitness
function. In biometric verification, it is difficult to identify a single function that
would characterise the matching performance across a range of False Acceptance
Rate (FAR) and False Reject Rate (FRR) values i.e., across all matching thresholds
(Ross et al., 2006). Thus, in our experiments the following steps are followed:
1 Obtain the fused image using the wavelet coefficients selected from the
face and palmprint images by PSO (see Figure 2).
2 Project this fused image onto a lower dimensional space using KDDA.
3 Compute the distance (using NNC) between the reference and testing samples
to get the match scores.
4 Compute FAR and the Genuine Acceptance Rate (GAR) by setting thresholds at
different points.
Finally, to optimise the performance gain across a wide range of thresholds,
we define the objective function to be the average of 12 GAR values
corresponding to 12 different FAR values (90%, 70%, 50%, 30%, 10%, 0.8%,
0.6%, 0.4%, 0.2%, 0.09%, 0.05%, 0.01%). Thus, the main objective of the
verification fitness function is to maximise this average GAR value.
3.2.3 Velocity limitation V_max

In the binary version of PSO, the value of V_max limits the probability that
bit x_id takes the value 0 or 1, and therefore the use of a high V_max value
in binary PSO will decrease the range explored by the particle (Kennedy and
Eberhart, 1997). In our experiments, we tried different values of V_max and
finally selected V_max = 6, as it allows the particle to reach near optimal
solutions.
3.2.4 Inertia weight and acceleration constant
The inertia weight is an important parameter as it provides the particles
with a degree of memory capability. It has been found experimentally that an
inertia weight w in the range [0.8, 1.2] yields better performance (Kennedy
and Eberhart, 1997). Hence, in our present work, we initially set w to 1.2
and then decrease it to zero during subsequent iterations (we experimentally
fix the number of iterations equal to 50, as a further increase in the number
of iterations does not provide better optimisation results). This scheme of
decreasing inertia weight is found to be better than a fixed one (Kennedy and
Eberhart, 1997) as it allows reaching an optimal solution. Even though the
acceleration constants C1 and C2 are not so critical for the convergence of
PSO, suitably chosen values may lead to faster convergence of the algorithm.
In our experiments, we varied the values of C1 and C2 from 0 to 2 and finally
chose C1 = 0.7 and C2 = 1.2 as these yield better convergence.
3.2.5 Population size
The population size, i.e., the number of particles in the swarm, plays an
important role as it influences not only the performance but also the
computational cost. In our present work, we experimentally varied the size of
the population from 10 to 30 in steps of 5 for each of the fitness functions
that we used and finally fixed the population size at 20, as a further
increase in this value did not provide significant improvement in
performance.
4 Image fusion using Binary Genetic Algorithm
The Binary Genetic Algorithm is an optimisation and search technique inspired
by the principles of genetics and natural selection (Haupt and Haupt, 2004).
The Binary Genetic Algorithm allows a population composed of many individuals
to evolve, under a specified selection rule, towards a state that maximises
the fitness function. Genetic Algorithms are designed to effectively search
large, non-linear, poorly understood search spaces. Even though no work has
been reported on the Binary Genetic Algorithm being used for fusing face and
palmprint images, it is widely used in biometric applications that involve
the fusion of visible and infrared face images (Bebis et al., 2000, 2006).
Thus, in this work, we apply the Binary Genetic Algorithm to the image level
fusion of palmprint and face and compare its performance with PSO and match
score level fusion. The following subsections give the details of the
parameters used in our work to select the most informative wavelet
coefficients from face and palmprint using the Genetic Algorithm.
4.1 Initial population
The initial population is generated randomly over the search space and each
individual is represented by 1s and 0s, where 1 corresponds to the position
of a face feature and 0 corresponds to the position of a palmprint feature.
In our work, we experimentally fix the population size at 200 and the number
of generations (iterations) at 150.
4.2 Fitness function
We use the same fitness function as mentioned for PSO (see Section 3.2.2) to have
consistency in comparing the performance.
4.3 Selection
We use cross-generational selection (Bebis et al., 2000): given a population
of size N, the offspring population is double the size of the parent
population, and we select the best N individuals from the combined
parent-offspring population (Bebis et al., 2000).
4.4 Crossover
In our work, we use uniform crossover, as it has been shown to give better
performance than any other crossover method (Haupt and Haupt, 2004) and we do
not know the relationship between the different wavelet coefficients of face
and palmprint. We experimentally fix the crossover probability at 0.92.
4.5 Mutation
Mutation is a low probability operator which flips the value of a randomly
chosen bit. In our work, we experimentally fix the mutation probability at 0.01.
5 Image fusion using wavelet
Figure 3 shows the state of the art in fusing face and palmprint images using
the wavelet transform. Here, we first decompose the face and palmprint images
independently using the Haar wavelet. Then, we use the simple average rule
(Stathaki, 2008) to fuse the wavelet coefficients of the face and palmprint
images. We then take the inverse wavelet transform to obtain the final fused
image. For consistency with the proposed approach, we use the KDDA scheme to
represent the fused image in a lower dimensional space, and the decision
about accept/reject is carried out using NNC.
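The average-rule step can be sketched as follows; `face_subbands` and `palm_subbands` stand for any matching lists of wavelet subband arrays from the two modalities (illustrative names, not the paper's code):

```python
import numpy as np

def average_rule_fuse(face_subbands, palm_subbands):
    """Average-rule wavelet fusion (Stathaki, 2008) sketch: each fused
    coefficient is the mean of the corresponding face and palmprint
    coefficients, taken subband by subband. Applying the inverse
    wavelet transform to the result yields the fused image."""
    return [(f + p) / 2.0
            for f, p in zip(face_subbands, palm_subbands)]
```

Unlike the PSO scheme, every coefficient here blends both modalities equally, with no data-driven selection of which modality is more informative at each position.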
Figure 3 Block diagram of image fusion using wavelets (see online version for colours)
6 Match score level fusion
Match score level fusion is widely used in multimodal biometrics, as the
scores generated by different biometric matchers are easy to access and to
combine (Ross et al., 2006).
Figure 4 shows the block diagram of the match score level fusion procedure
used in our present work. To obtain the match scores for individual biometrics
(face and palmprint), we use the combination of wavelets and KDDA schemes for
feature extraction, and match scores are obtained using NNC. Several
algorithms can be used for score fusion, ranging from simple sum or product
rules to more complicated ones involving classification using SVM or score
density estimation. However, recent works (Ross et al., 2006; Nandakumar
et al., 2008) show that all these methods give roughly equivalent performance
to that of the weighted SUM rule, on the condition that the weights reflect
the relative difference in performance of the individual systems. This is
therefore the technique that we apply in this work. The weighted SUM rule can
be written as:
S_fuse = W1 × S_face + W2 × S_palmprint                       (5)

where W1 and W2 denote the weights associated with face and palmprint
respectively, such that W1 + W2 = 1; S_fuse denotes the fused scores, S_face
denotes the scores of the face system and S_palmprint denotes the scores of
the palmprint system. We calculate the weights as mentioned in Raghavendra
et al. (2009).
Figure 4 Block diagram of match score level fusion (see online version for colours)
7 Experimental setup
This section describes the experimental setup that we have built in order to
evaluate the proposed fusion schemes. Because of the lack of a real
multimodal database of face and palmprint data, experiments are carried out
on a database of virtual persons using face and palmprint data coming from
two different databases.
This procedure is valid since, for a given person, face and palmprint can be
considered two independent modalities (Ross et al., 2006). For the face
modality we choose the FRGC face database (Phillips et al., 2005), as it is a
large database widely used for benchmarking. From this database, we choose
250 different users from 2 different sessions. The first session consists of
6 samples per user taken from data collected during Fall 2003 and the second
session consists of 6 samples per user taken from data collected during
Spring 2004. Of these 6 samples, the first 4 are taken under controlled
conditions and the next 2 under uncontrolled conditions. Figure 5 shows the
sample images for one subject.
For the palmprint modality, we select a subset of 250 different palmprints
from the PolyU database (Jing et al., 2007); each of these users possesses
12 samples, such that 6 samples are taken from the first session and the next
6 samples are taken from the second session. The average time interval
between the first and second sessions is two
months. Figure 6 shows the sample palmprint images of one subject in one session.
In building our multimodal biometric database of face and palmprint, each virtual
person is associated with 12 samples of face and palmprint produced randomly from
the face and palmprint samples of 2 persons in the respective databases. Thus, the
built virtual multimodal biometric database consists of 250 users such that each user
has 12 samples.
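The construction of virtual users can be sketched as follows; the function name and the use of subject identifiers are our own illustration of the random pairing described above:

```python
import random

def build_virtual_users(face_subjects, palm_subjects, seed=0):
    """Pair each face subject with one randomly chosen palmprint subject
    to form a virtual person, which is valid because the two modalities
    are independent (Ross et al., 2006). Each virtual person then holds
    the 12 samples of its face and palmprint subjects."""
    rng = random.Random(seed)
    palms = list(palm_subjects)
    rng.shuffle(palms)  # random one-to-one pairing across the databases
    return list(zip(face_subjects, palms))
```

The pairing is one-to-one, so every face subject and every palmprint subject appears in exactly one virtual user, preserving the 250-user size of both source subsets.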
7.1 Experimental procedure
This section describes in detail the experimental procedure employed in our work.
For learning the projection spaces, we use a subset of 100 users called LDB, such
that each user has 6 samples (selected randomly out of 12 samples). To validate the
performance of all the algorithms, we divide the whole database of 250 users into
two independent sets called Set-I and Set-II. Set-I consists of 200 users and Set-II
consists of 50 users. Set-II is used as the development set to fix the parameters of
PSO (like V_max, C1, C2, population size) and also to set the parameters for match
score level fusion. Set-I is used to test the algorithms. Set-I is divided into two equal
partitions providing 6 reference data and 6 testing data for each of the 200 persons.
The reference and testing partitioning was repeated m times (where m = 10)
using holdout cross-validation, and there is no overlap between these two
subsets.
Thus, in each of the 10 trials we have 1200 (= 200 × 6) reference samples and 1200
(= 200 × 6) testing samples and hence we have 1500 genuine matching scores and
238800 (= 200 × 199 × 6) impostor matching scores, as for each user, all other users
are considered as an impostor. This scheme is valid for verification performance
evaluation.
Figure 5 Images of one subject from FRGC face database: (a) controlled images and
(b) uncontrolled images, from Phillips et al. (2005) (see online version for colours)
Figure 6 Images of one subject from PolyU palmprint database
Because of the small size of the database, we repeat the partition of the whole
database of 250 users into Set-I and Set-II three times. We therefore obtain three
different sets Set-I and Set-II resulting in a total of 30 different runs because of the
holdout cross-validation. Finally, results are presented by taking the mean
over all runs, and we also present the statistical variation of the results
with a 90% parametric confidence interval (Bolle et al., 2004), which gives a
better estimate of the standard deviation than the one obtained from the
cross-validation alone.
8 Results and discussion
This section presents a detailed analysis of the results. Figure 7 shows the
performance of the proposed method against state of the art methods, namely
image fusion using wavelets and image fusion using the Genetic Algorithm.
Figure 7 also shows the comparative performance of the image fusion
algorithms against the match score level fusion scheme using the weighted SUM
rule and against the individual biometrics. Table 1 shows the comparative
performance of the same systems in terms of mean GAR at FAR = 0.01%. It can
be observed from Figure 7 (and also from Table 1) that the proposed method of
fusing face and palmprint images shows the best performance, with
GAR = 93.12% at FAR = 0.01%. We can also observe from Figure 7 that the
proposed method achieves an improvement of roughly 10% over
Figure 7 ROC curves of the different fusion schemes (see online version for colours)
Table 1 Comparative performance of the different image fusion schemes (mean GAR
at FAR =0.01%)
Methods                      Mean GAR at FAR = 0.01% (%), with 90% confidence interval
Face alone                   64.86 [62.59; 67.12]
Palmprint alone              72.46 [70.34; 74.57]
Match score fusion           82.80 [80.61; 84.18]
Image fusion-WL              77.28 [76.30; 80.21]
Image fusion-GA              88.40 [86.88; 89.91]
Proposed image fusion-PSO    94.26 [93.15; 95.36]
match score level fusion and about 4% over the Genetic Algorithm. It can also
be observed that image fusion based on the wavelet method shows worse
performance than match score level fusion. Thus, the above analysis indicates
the efficacy of the proposed image fusion scheme for face and palmprint. We
have also attempted to analyse the PSO solution in order to understand what
features are
selected from face and palmprint. Figure 8 shows the face, palmprint and
fused images obtained using the proposed scheme of image fusion using PSO. To
make the analysis more interesting, we considered example images from
different conditions. The first row in Figure 8 shows images of a person
whose palmprint is affected by illumination and whose face is occluded by
spectacles. In this case, we can observe from the fused image that the glass
region of the spectacles, which was present in the face image, is replaced by
palmprint features. Further, we can also observe that the more informative
features of the face, such as the nose and lips, and of the palmprint, such
as the principal lines, are retained in the optimised fused image. The second
row in Figure 8 shows the fused image when we fuse clean face and palmprint
images. Here, we can observe that the most informative features of the face
(e.g., eyes, eyebrows, nose and lips) and of the palmprint (principal lines,
ridges) are considered by the proposed method in obtaining the fused image.
A similar type of analysis can also be made for the third and fourth rows of
Figure 8. In the third row, we have considered the example of fusing a clean
Figure 8 Fusion using proposed method: (a) palmpint images; (b) face images
and (c) fused images
palmprint image and an uncontrolled face image. In this example, we can
observe that most of the information present in the fused image corresponds
to the palmprint (e.g., see the cheeks of the fused image in Figure 8), but
the most informative face information is still retained. Finally, the fourth
row shows the case when a noisy palmprint (affected by illumination) and a
noisy face (i.e., with variation in pose and uncontrolled illumination) are
combined using the proposed PSO scheme. Here also, we can observe that the
most informative features from palmprint and face are selected by the
proposed method. Thus, from the above analysis, it is clear that the proposed
method of image fusion using PSO selects the best available information from
face and palmprint, and hence it constitutes a more accurate multimodal
biometric system using face and palmprint.
9 Conclusion
We have presented a novel and efficient scheme for combining face and
palmprint at the image level using PSO. The face and palmprint images are
fused using PSO, with a KDDA based verification measure as the fitness
function. To verify the efficacy of the proposed scheme, extensive
experiments are carried out on a built virtual multimodal biometric database
of 250 users. The proposed method is compared with state of the art methods
such as image fusion using the Genetic Algorithm and wavelet schemes. We have
also reported a comparative analysis of the image fusion schemes against
match score level fusion, and we present the statistical validation of the
results with a 90% confidence interval. The proposed method of image fusion
using PSO shows the best performance compared with match score level fusion,
state of the art schemes and the unimodal systems, with GAR = 94.26% at
FAR = 0.01%. Further, detailed analysis of the features selected by the
proposed method indicates that it is highly robust to unwanted noise present
in the images, thereby confirming its validity and efficacy.
Acknowledgements
The authors would like to acknowledge the many helpful suggestions of the two
anonymous reviewers as well as the Editor of this Journal.
References
Bebis, G., Uthiram, S. and Georgiopoulos, M. (2000) ‘Face detection and verification using
genetic search’, International Journal on Artificial Intelligence Tools, Vol. 6, pp.225–246.
Bebis, G., Gyaourova, A., Singh, S. and Pavlidis, I. (2006) ‘Face recognition by fusing thermal
and visible imagery’, Image and Vision Computing, Vol. 24, pp.727–742.
Bolle, R.M., Ratha, N.K. and Pankanti, S. (2004) ‘Qualitative weight assignment for
multimodal biometric fusion’, Proceedings ICPR-04, UK, pp.103–106.
Chui, C. (1992) An Introduction to Wavelets, Academic Press, UK.
Feng, G., Dong, K., Hu, D. and Zhang, D. (2004) ‘When faces are combined with
palmprints: a novel biometric fusion strategy’, Proceedings of First International
Conference on Biometric Authentication, Springer-Verlag, Berlin, pp.701–707.
Haupt, R.L. and Haupt, S.E. (2004) Practical Genetic Algorithms, 2nd ed.,
Wiley-Interscience, New Jersey.
Jing, X.Y., Yao, Y.F., Yang, J.Y., Li, M. and Zhang, D. (2007) ‘Face and palmprint pixel
level fusion and kernel DCV-RBF classifier for small sample biometric recognition’,
Pattern Recognition, Vol. 40, pp.3209–3224.
Kennedy, J. and Eberhart, R.C. (1995) ‘Particle swarm optimization’, Proceedings of IEEE
International Conference on Neural Networks, Australia, pp.1942–1948.
Kennedy, J. and Eberhart, R.C. (1997) ‘A discrete binary version of the particle swarm
algorithm’, Proceedings of IEEE International Conference on Systems, Man and
Cybernetics, Vancouver, pp.4104–4108.
Kisku, D.R., Singh, J.K., Tistarelli, M. and Gupta, P. (2009) ‘Multisensor biometric
evidence fusion for person authentication using wavelet decomposition and
monotonic-decreasing graph’, Proceedings of 7th International Conference on
Advances in Pattern Recognition (ICAPR’09), India, pp.205–208.
Nandakumar, K., Chen, Y., Dass, S.C. and Jain, A.K. (2008) ‘Likelihood ratio-based
biometric score fusion’, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 30, pp.342–347.
Phillips, P.J., Flynn, P.J., Scruggs, T., Bowyer, K.W., Chang, J., Hoffman, K., Marques, J.,
Min, J. and Worek, W. (2005) ‘Overview of the face recognition grand challenge’,
Proceedings of CVPR05, USA, pp.947–954.
Raghavendra, R., Rao, A. and Hemantha Kumar, G. (2009) ‘Qualitative weight assignment
for multimodal biometric fusion’, Proceedings of 7th International Conference on
Advances in Pattern Recognition (ICAPR’09), India, pp.193–196.
Ross, A., Nandakumar, K. and Jain, A.K. (2006) Handbook of Multibiometrics,
Springer-Verlag, Berlin.
Stathaki, T. (2008) Image Fusion: Algorithms and Applications, Academic Press, UK.
Yan, Y. and Zhang, Y.J. (2008) ‘Multimodal biometrics fusion using correlation filter bank’,
Proceedings of International Conference on Pattern Recognition (ICPR-2008), USA,
pp.1–4.
Yao, Y., Jing, X. and Wong, H. (2007) ‘Face and palmprint feature level fusion for single
sample biometric recognition’, Neurocomputing, Vol. 70, pp.1582–1586.