A Single-sensor Hand Geometry and Palmprint
Verification System
Michael Goh Kah Ong, Tee Connie, Andrew Teoh Beng Jin, David Ngo Chek Ling
Faculty of Information Science and Technology
Multimedia University
Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia.
+606-2523611
michael.goh, tee.connie, bjteoh, david.ngo@mmu.edu.my
ABSTRACT
Several contributions have shown that fusing the decisions or scores obtained from various single-modal biometric verification systems often enhances overall system performance. More recently, multimodal biometric systems that use a single sensor have received significant attention among researchers. In this paper, a combined hand geometry and palmprint verification system is developed. The system uses a scanner as its sole sensor to obtain hand images. First, the hand geometry verification module performs feature extraction to obtain the geometrical information of the fingers and palm. Second, the region of interest (ROI) is detected and cropped by the palmprint verification module. This ROI serves as the basis for palmprint feature extraction using Linear Discriminant Analysis (LDA). Lastly, the matching scores of the two individual classifiers are fused by several fusion algorithms, namely the sum rule, the weighted sum rule and a Support Vector Machine (SVM). The results of the fusion algorithms are compared with the outcomes of the individual palmprint and hand geometry classifiers. We show that fusion using an SVM with a Radial Basis Function (RBF) kernel outperforms the other combined and individual classifiers.
Categories and Subject Descriptors
I.5.4 [Pattern Recognition]: Applications - computer vision, signal processing
General Terms
Design, Verification
Keywords
Multimodal biometric, fusion, palmprint, hand geometry.
1. INTRODUCTION
Biometric systems have been emerging actively in various industries over the past few years, and they continue to spread as a means of providing stronger security for access control systems. Many types of single-modal biometric systems have been developed and deployed, for example fingerprint, face, speaker, palmprint and hand geometry verification systems. However, these systems are only capable of providing a low to middle range of security. Thus, for higher security, the combination of two or more single-modal biometrics (also known as multimodal biometrics) is required. In addition, the industry is currently exploring multimodal biometric characteristics that are reliable, able to provide high security, non-intrusive and widely accepted by the public.
Multimodal biometrics has significant functional advantages over single biometrics, for example, elimination of the False Acceptance Rate (FAR) (by adjusting FAR = 0%) without suffering from an increased occurrence of the False Rejection Rate (FRR). In practice, it is difficult for both FAR and FRR to be zero in a single-modal biometric verification measurement space. The biometrics industry places heavy emphasis on security, choosing the lowest FAR with a relaxed FRR requirement. This often causes a high FRR and results in increased rejection of valid users. Denial of access caused by failure to identify a genuine user has adverse effects on the usability and public acceptance of a biometric system. In fact, both aspects are significant obstacles to the wide deployment of biometric technology.
Some work on multimodal biometric identification systems has been reported in the literature. Wang et al. [1] combined face and iris biometrics for identity verification; fusion with an RBF neural network produced higher verification accuracy than the iris-only or face-only biometric. The hybrid biometric authentication system in [2], which uses a vector abstraction scheme and learning-based classifiers to fuse voice and face vectors, significantly reduced the FAR and FRR. The work in [3] investigated the integration of palmprint and hand geometry using fusion at the decision level, combining the decision scores of both biometric systems. A multimodal person verification system proposed by Kittler et al. [4] uses three experts, namely frontal face, face profile and voice; the best combination results are obtained by applying the simple sum rule.
A bimodal biometric verification system based on the hand geometry and palmprint modalities is described in this paper. The system uses a natural fusion approach, as both biometric features originate from the same part of the body. Apart from that, unlike other multimodal biometric systems that require multiple input devices [5], only a single image capturing device is needed in this system. With this, users do not have to go through the inconvenience of using several different acquisition devices for security access. They can be shielded completely from the
complexity of a multimodal verification system by using a single sensor.
In this research, an optical scanner is selected instead of a CCD camera as the input sensor. This is because the scanner provides better-quality images than the CCD camera and is not easily affected by lighting conditions. In terms of cost, the scanner is much cheaper than a high-resolution CCD camera. Another advantage of the scanner is that it is equipped with a flat glass platen that enables users to flatten their palms properly, reducing errors caused by bent palm ridges and wrinkles.
Our proposed system does not need any pegs on the scanner to fix the position of the user's hand. Another special feature of the system is that the size of the captured image is not fixed but varies in proportion to the actual size of the user's hand. In [6,7,8,9], each captured image must adhere to a predetermined size, and this has a few limitations. When a small predetermined size is used, some hand information is lost; when a large predetermined size is used, much space is wasted, which increases the computational load. The latter problem is particularly apparent when acquiring children's hands. The proposed system overcomes this problem by allowing the acquired image to vary according to the actual size of the user's hand.
This paper is organized as follows: Section 2 introduces the framework of the proposed system. Image extraction and feature extraction for the individual hand geometry and palmprint modalities are discussed in Sections 3 and 4 respectively. The fusion strategies used are explained in Section 5, while Section 6 presents the experimental results.
2. PROPOSED SYSTEM
Figure 1. Automated Hand Geometry and Palmprint Verification System framework: an image obtained using the optical scanner is binarized and its border traced to locate the salient points; these feed (i) the extraction of hand geometry features and (ii) the location of the ROI in the original hand image, followed by palmprint rotation and normalization and the extraction of palmprint features; each feature set is passed to its own classifier and the two outputs are combined by decision fusion to produce a yes/no decision.
The proposed system combines two biometric modalities, namely palmprint and hand geometry verification. Only one hand image is captured during the acquisition process, and the palmprint and hand geometry features are extracted simultaneously from this same image. A Euclidean distance classifier is used to classify the individual hand geometry and palmprint features. The sum rule, weighted sum rule and SVM are used as decision-level fusion schemes to fuse the matching scores obtained from the two individual classifiers.
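The overall verification flow can be summarized by the following Python sketch. It is a hypothetical skeleton only: the stage functions for salient point detection, ROI location, hand geometry measurement and Fisherpalm projection, as well as the fusion function, are placeholders standing in for the components described in Sections 3 to 5, not part of the original implementation.

import numpy as np

def euclidean_score(feature, template):
    """Matching distance between a probe feature vector and an enrolled template."""
    return float(np.linalg.norm(np.asarray(feature) - np.asarray(template)))

def verify(hand_image, enrolled, stages, fuse_and_decide):
    """Single-sensor bimodal verification flow (hypothetical skeleton).

    enrolled        : dict with the claimant's stored 'geometry' and 'palm' templates
    stages          : dict of callables supplied by the modules of Sections 3 and 4:
                      'salient_points', 'roi', 'hand_geometry', 'fisherpalm'
    fuse_and_decide : decision-level fusion function from Section 5 returning True/False
    """
    points = stages["salient_points"](hand_image)      # Section 3.1
    roi = stages["roi"](hand_image, points)            # Section 3.2
    geometry_feat = stages["hand_geometry"](points)    # Section 4.1
    palm_feat = stages["fisherpalm"](roi)              # Section 4.2

    # Individual Euclidean-distance matching scores
    h_ms = euclidean_score(geometry_feat, enrolled["geometry"])
    p_ms = euclidean_score(palm_feat, enrolled["palm"])

    # Section 5: decision-level fusion of the two matching scores
    return fuse_and_decide(p_ms, h_ms)                 # accept or reject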
3. IMAGE EXTRACTION
During image acquisition, users are required to stretch their fingers and place their palm flat on the platform of the scanner. The hand images are acquired in RGB format with 256 levels (8 bits) per channel. The three color components are important in the pre-processing stage, as they help distinguish the background, fingernails, rings and shadow from the hand region. This clear distinction helps to trace the hand contour more accurately and reliably.
3.1 Extraction of salient points
The hand image acquired from the optical scanner is binarized using a thresholding method [10] to filter the background and shadow from the image. The border tracing algorithm [11] is then used to obtain the vertical coordinates of the border pixels, which form the signature of the hand contour, f(i), where i is the array index. The hand contour signature is divided into non-overlapping frames of 10 samples, and each frame is checked for stationary points whose absolute values exceed a predefined threshold, T_s = 25. In this way, nine salient points, five valleys and four peaks (see Figure 2), which represent the tips and roots of the fingers respectively, are detected. These nine salient points serve as the reference points for measuring the length, width and height of the fingers and palm, and are also used to detect the ROI.
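A minimal sketch of this frame-based salient point search is given below, assuming the contour signature f(i) is already available as a 1-D array. The paper does not spell out the exact stationary-point test, so the derivative-based check used here is only illustrative.

import numpy as np

def detect_salient_points(signature, frame_size=10, threshold=25):
    """Illustrative salient-point search over the hand contour signature f(i).

    signature  : 1-D array of vertical border coordinates, f(i)
    frame_size : number of samples per non-overlapping frame (10 in the paper)
    threshold  : T_s, minimum absolute signature value for a candidate point (25)
    """
    f = np.asarray(signature, dtype=float)
    candidates = []
    for start in range(0, len(f) - frame_size + 1, frame_size):
        frame = f[start:start + frame_size]
        # Stationary point: the local slope changes sign inside the frame
        d = np.diff(frame)
        sign_change = np.where(np.diff(np.sign(d)) != 0)[0]
        for k in sign_change:
            idx = start + k + 1
            if abs(f[idx]) > threshold:      # keep only prominent extrema
                candidates.append(idx)
    return candidates  # expected to yield the nine finger tips and roots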
Figure 2. Hand contour signature f(i) plotted against the index i; valleys correspond to finger tips and peaks to finger roots.
Figure 3. Salient point detection process: (a) original hand image acquired from the scanner, (b) binarized image, (c) hand contour, (d) nine salient points that represent the tips and roots of the fingers.
3.2 Extraction of ROI
For the palmprint verification system, the ROI is located from the salient points using a right-angle coordinate system [12]. After the outline of the ROI is obtained, the image is cropped and rotated (see Figure 4). As the size of the ROI varies from hand to hand (depending on the width of the hand), all ROIs need to be resized to a fixed size. In this research, the ROIs are resized to 200 × 200 pixels.
Figure 4. ROI extraction process: (a) ROI detection based on salient points, (b) cropped ROI, (c) rotation and normalization.
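The cropping, rotation and resizing step can be approximated with standard image operations, for example as in the OpenCV-based sketch below. The four ROI corner points are assumed to come from the right-angle coordinate system of [12]; the exact geometry used there is not reproduced here.

import cv2
import numpy as np

def normalize_roi(gray_hand, roi_corners, size=200):
    """Warp the square palm ROI to an upright size x size patch.

    gray_hand   : grayscale hand image
    roi_corners : four (x, y) ROI corners, assumed ordered
                  top-left, top-right, bottom-right, bottom-left
    """
    src = np.float32(roi_corners)
    dst = np.float32([[0, 0], [size - 1, 0],
                      [size - 1, size - 1], [0, size - 1]])
    # A single perspective warp performs the crop, rotation and resize at once
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(gray_hand, M, (size, size))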
4. FEATURE EXTRACTION
4.1 Extraction of hand geometry features
Figure 5. Hand geometry features.
Based on the salient points obtained, the automated hand geometry measurement technique proposed in [10] is used to extract the finger lengths and widths and the relative locations of crucial features such as the knuckles and other joints (as shown in Figure 5). These hand features are important for constructing a unique pattern for each person. The measuring process generates a feature vector, an array consisting of N feature values, as given in equation (1):
N = 5g + 7                                  (1)

where g is the number of segments measured on each finger and 7 is the number of features obtained from the heights of the fingers and the width of the palm. For illustration, Figure 5 depicts the case where g = 3, yielding N = 22 features.
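As a concrete illustration of equation (1), the sketch below assembles the N = 5g + 7 feature vector from per-finger measurements. The measurement routines themselves stand in for the technique of [10], and the split of the seven remaining features into five finger heights and two palm-width measurements is an assumption consistent with, but not stated by, the text.

import numpy as np

def hand_geometry_vector(finger_lengths, finger_widths, palm_widths, g=3):
    """Assemble the hand geometry feature vector of length N = 5g + 7.

    finger_lengths : 5 finger heights                    (5 values)
    finger_widths  : g width measurements per finger     (5 x g values)
    palm_widths    : palm-width measurements             (2 values, assumed)
    """
    widths = np.asarray(finger_widths, dtype=float).reshape(5, g)
    features = np.concatenate([
        np.asarray(finger_lengths, dtype=float),   # 5 features
        widths.ravel(),                            # 5g features
        np.asarray(palm_widths, dtype=float),      # 2 features
    ])
    assert features.size == 5 * g + 7              # e.g. g = 3 gives N = 22
    return features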
4.2 Extraction of palmprint features
For the palmprint extraction module in this system, Linear Discriminant Analysis (LDA), also known as Fisher Discriminant Analysis (FDA) [13], is used to extract the important palmprint features from the hand images. FDA maximizes the ratio of between-class scatter to within-class scatter. In other words, it projects images such that images of the same class are close to each other while images of different classes are far apart. The basis vectors calculated by the Fisher discriminant span the Fisher discriminant subspace; these basis vectors are called Fisherpalms in this paper.
More formally, consider a set of M palmprint images drawn from c classes, with each class containing a set of n images i_1, i_2, ..., i_n. Let the mean of the images in each class and the total mean of all images be denoted m_c and m, respectively. The images in each class are centered as

\phi_n^c = i_n^c - m_c                                  (2)

and the class means are centered as

\omega_c = m_c - m                                      (3)
The centered images are then combined side by side into a data matrix. From this data matrix, an orthonormal basis U is obtained by calculating the full set of eigenvectors of the covariance matrix of the centered images. The centered images are then projected onto this orthonormal basis as

\tilde{\phi}_n^c = U^T \phi_n^c                         (4)

and the centered class means are projected likewise as

\tilde{\omega}_c = U^T \omega_c                         (5)
Based on this information, the within-class scatter matrix S_W is calculated as

S_W = \sum_{j=1}^{c} \sum_{k=1}^{n_j} \tilde{\phi}_k^j \tilde{\phi}_k^{jT}      (6)

and the between-class scatter matrix S_B is calculated as

S_B = \sum_{j=1}^{c} n_j \tilde{\omega}_j \tilde{\omega}_j^T                    (7)

The generalized eigenvectors V and eigenvalues \lambda of the within-class and between-class scatter matrices are obtained by solving

S_B V = \lambda S_W V                                   (8)
The eigenvectors are sorted according to their associated eigenvalues, and the first M-1 eigenvectors are kept as the Fisher basis vectors. Each rotated image \alpha_n, where \alpha_n = U^T i_n, is projected onto the Fisher basis vectors V_j by

\varpi_{nj} = V_j^T \alpha_n                            (9)

where n = 1, ..., M and j = 1, ..., M-1. The weights obtained form a vector \varpi_n = [\varpi_{n1}, \varpi_{n2}, ..., \varpi_{n,M-1}] that describes the contribution of each Fisherpalm in representing the input palm image, treating the Fisherpalms as a basis set for palm images.
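A compact way to realize equations (2)-(9) is shown below as a NumPy/SciPy sketch. It follows the usual PCA-then-LDA (Fisherface-style) recipe of [13] rather than the authors' exact code: it keeps c-1 discriminant directions (the standard rank limit) where the paper keeps M-1, and the small regularization term added to S_W is an implementation convenience, not part of the paper.

import numpy as np
from scipy.linalg import eigh

def fisherpalm_basis(images, labels):
    """Learn Fisherpalm basis vectors from vectorized 200x200 ROIs.

    images : (M, D) array, one flattened grayscale ROI per row
    labels : (M,) array of class identities
    Returns (projection, mean) so that a probe x maps to projection @ (x - mean).
    """
    X = np.asarray(images, dtype=float)
    y = np.asarray(labels)
    classes = np.unique(y)
    M, c = X.shape[0], classes.size

    mean = X.mean(axis=0)
    Xc = X - mean                                   # centering, as in eqs. (2)-(3)

    # PCA step: orthonormal basis U of the centered data (keep M - c components
    # so that S_W is non-singular in the reduced space).
    U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
    U = U[:, :M - c]
    P = Xc @ U                                      # projected centered images, eq. (4)

    # Scatter matrices in the reduced space, eqs. (6) and (7)
    d = P.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for cls in classes:
        Pc = P[y == cls]
        mc = Pc.mean(axis=0)                        # projected centered class mean
        Sw += (Pc - mc).T @ (Pc - mc)
        Sb += Pc.shape[0] * np.outer(mc, mc)

    # Generalized eigenproblem S_B v = lambda S_W v, eq. (8); largest eigenvalues first
    vals, V = eigh(Sb, Sw + 1e-6 * np.eye(d))
    V = V[:, np.argsort(vals)[::-1]][:, :c - 1]     # leading Fisher directions

    return (U @ V).T, mean                          # Fisherpalm projection and data mean

# Probe features, cf. eq. (9): w = projection @ (roi.ravel() - mean)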
5. FUSION STRATEGIES
Decision-level fusion is selected over feature-level fusion because matching scores have the lowest data complexity, and fusion at the decision level often achieves better overall authentication performance [14,15]. In the proposed system, we adopt the SVM, a machine learning technique that learns the decision surface separating the two classes of genuine users and impostors through a process of discrimination. It also has good generalization characteristics and has proven to be a successful classifier on several classical pattern recognition problems [16,17]. Two other combined classifiers, namely the sum rule and the weighted sum rule, are used for comparison with the proposed fusion method.
5.1 Sum Rule
The summation of the matching scores (distances) of the two single-modal classifiers is calculated as

S = P_{ms} + H_{ms}                                     (10)

where P_{ms} and H_{ms} represent the matching scores of the palmprint and hand geometry classifiers respectively, and the class with the smallest value of S is output.
5.2 Weighted Sum Rule
Different classifiers exhibit different performance, so weights can be used when combining the individual classifiers. Since only two single-modal biometrics are used in our system, the weighted sum S_w can be formed as

S_w = w P_{ms} + (1 - w) H_{ms}                         (11)

where w is a weight that falls between 0 and 1.
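Both rules amount to one line each; the sketch below shows them side by side, with the acceptance threshold and the weight w left as free parameters to be tuned on training scores (their values are not taken from the paper).

def sum_rule(p_ms, h_ms):
    """Equation (10): plain sum of the palmprint and hand geometry distances."""
    return p_ms + h_ms

def weighted_sum_rule(p_ms, h_ms, w=0.5):
    """Equation (11): weighted sum with 0 <= w <= 1 (w chosen on training data)."""
    return w * p_ms + (1.0 - w) * h_ms

def accept(score, threshold):
    """Smaller fused distance means a better match, so accept below the threshold."""
    return score < threshold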
5.3 Support Vector Machine
The classification problem in the proposed system can be restricted, without loss of generality, to a two-class problem: genuine versus impostor. The goal of using the SVM is to separate these two classes with the hyperplane that gives the maximum margin [18].
The support vectors are determined through numerical optimization during the training phase. The Lagrangian (Wolfe) dual objective function for maximal-margin separation is given as
L_D(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j d_i d_j K(x_i, x_j)        (12)

where N is the number of training samples, \alpha_i and \alpha_j are constants (Lagrange multipliers) determined from training, d_i and d_j are the class indicators (for example, class 1 for genuine and class 2 for impostors) associated with each support vector, K(x_i, x_j) is the kernel function performing the non-linear mapping into feature space, and x_i and x_j are support vectors obtained from the matching scores of the two individual classifiers. Equation (12) is maximized subject to the conditions

0 \le \alpha_i \le C   for i = 1, 2, 3, ..., N,   and   \sum_{i=1}^{N} \alpha_i d_i = 0

where C is a positive regularization parameter that controls the tradeoff between the complexity of the machine and the number of non-separable training points.
The kernel function plays an essential role: it allows the required operations to be performed in the input space rather than in the high-dimensional feature space, while achieving better separability between the two classes. Two types of kernel function, the polynomial and the Gaussian RBF, are experimented with. The polynomial kernel function is formally described as

K(x_i, x_j) = (x_i \cdot x_j + 1)^d                     (13)

where d > 0 is a constant representing the degree of the function. The RBF kernel function, on the other hand, has the Gaussian form

K(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))      (14)

where \sigma > 0 is a constant that defines the kernel width.
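As an illustration of how the SVM fusion stage might be trained on the two-dimensional score vectors, the scikit-learn sketch below fits either of the two kernels discussed above. The library choice and hyperparameter values (C, gamma, the coefficient in the polynomial kernel) are assumptions for demonstration, not those of the prototype.

import numpy as np
from sklearn.svm import SVC

def train_svm_fusion(genuine_scores, impostor_scores, kernel="rbf"):
    """Train an SVM on (palmprint score, hand geometry score) pairs.

    genuine_scores, impostor_scores : arrays of shape (n, 2), one row per
    comparison, columns = (P_ms, H_ms) matching distances.
    """
    X = np.vstack([genuine_scores, impostor_scores])
    y = np.hstack([np.ones(len(genuine_scores)),        # class 1: genuine
                   np.zeros(len(impostor_scores))])     # class 0: impostor
    if kernel == "poly":
        clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0)   # polynomial, d = 2
    else:
        clf = SVC(kernel="rbf", gamma="scale", C=1.0)          # Gaussian RBF
    clf.fit(X, y)
    return clf

# Usage: clf = train_svm_fusion(gen, imp); accept = clf.predict([[p_ms, h_ms]])[0] == 1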
6. EXPERIMENTS AND RESULTS
6.1 Data Collection
A total of 600 images were collected from 50 users. Since a person's two hands are different, we acquired images of both hands and treated them as hands from different users. Each user was requested to provide 6 images of the left hand and another 6 images of the right hand, placed in different positions. A hand database of 100 subjects (50 users × 2 hands) was therefore obtained. In our experimental scheme, four hand images were selected randomly from each subject for training and the remaining two hand images were used as test data.
In the hand geometry verification system, the hand image size varies according to the user's hand size. This maintains the actual size of the image and avoids deformation of the original hand images. For the palmprint verification system, the ROI is cropped, converted to grayscale and resized automatically.
In the classification phase, each individual biometric classifier produces its own matching scores, 100 genuine and 9,900 impostor, based on its feature vectors using the Euclidean distance classifier. Both sets of matching scores are then fused by the three decision fusion modules described in Section 5 to obtain the final scores.
6.2 Verification Test
The error measures used as performance criteria for a verification system are the FAR and FRR, defined as

FRR = (Number of rejected genuine claims / Total number of genuine accesses) × 100%        (15)

FAR = (Number of accepted impostor claims / Total number of impostor accesses) × 100%      (16)

A single overall measure, the total success rate (TSR), is obtained as

TSR = (1 - (Number of false acceptances + Number of false rejections) / Total number of accesses) × 100%        (17)
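The three measures can be computed directly from the two score sets at a given decision threshold, as in the sketch below, which treats smaller fused scores as better matches, consistent with the distance-based classifiers above.

import numpy as np

def verification_rates(genuine, impostor, threshold):
    """FRR, FAR and TSR (in %) for distance scores at a given threshold.

    genuine  : fused scores of genuine attempts (accepted when score < threshold)
    impostor : fused scores of impostor attempts
    """
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    false_rejects = np.sum(genuine >= threshold)     # genuine claims rejected
    false_accepts = np.sum(impostor < threshold)     # impostor claims accepted
    frr = 100.0 * false_rejects / genuine.size                       # eq. (15)
    far = 100.0 * false_accepts / impostor.size                      # eq. (16)
    total = genuine.size + impostor.size
    tsr = 100.0 * (1.0 - (false_rejects + false_accepts) / total)    # eq. (17)
    return far, frr, tsr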
6.3 Results Comparison & Discussion
Table 1 compares the combined classifiers and the individual classifiers under the Equal Error Rate (EER) condition, where FAR ≈ FRR. In this experiment, the SVM polynomial kernel with d = 2 is selected for comparison, as it gives the best result for the polynomial kernel in this study.
Table 1. Combined classifiers and individual based classifiers
comparison based on EER.
Classifiers FAR % FRR % TSR %
Hand geometry 4.2828 4.0000 95.7200
Palmprint 5.9798 6.0000 94.0200
Sum Rule 1.8283 2.0000 98.1700
Weighted Sum Rule 1.1818 1.0000 98.8200
SVM (Polynomial: d=2) 1.0000 1.0000 99.0000
SVM (RBF Kernel) 0.1818 1.0000 99.8100
It can be observed that all the combined classifiers perform better than the individual classifiers. Among the combined classifiers, the SVM with RBF kernel gives the best performance. Figure 6 shows the dramatic decrease in EER for the combined classifiers.
Figure 6. Comparison of ROC curves of the verification systems.
Table 2 reports the case where FAR = 0% is enforced in order to observe the FRR behavior. All combined classifiers reduce the FRR compared to the individual classifiers. However, only the SVM with RBF kernel maintains the low FRR obtained in Table 1, while the other combined classifiers and the individual classifiers suffer an increase in FRR.
Table 2. Combined and individual classifiers comparison,
when FAR = 0%.
Classifiers FAR % FRR % TSR %
Hand geometry 0 26.0000 99.7400
Palmprint 0 29.0000 99.7100
Sum Rule 0 12.0000 99.8800
Weighted Sum Rule 0 8.0000 98.9200
SVM (Polynomial: d=2) 0 2.0000 99.9800
SVM (RBF Kernel) 0 1.0000 99.9900
From the experiments, it is apparent that decision fusion using the SVM with RBF kernel outperforms the other individual and combined classifiers. One reason is that the prototype system is built with a quality checker module [10] that detects poor hand images obtained from the scanner during the data collection process, as shown in Figure 7. If a poor image is detected (e.g. the fingers are not stretched properly, see Figure 8), the user is requested to repeat the data collection process. Thus, the feature extraction modules operate on good-quality hand images, and the individual classifiers generate two distinct distributions of genuine and impostor test points, as shown in Figure 9. This enables the SVM with RBF kernel to separate the two classes almost perfectly. Figure 10 illustrates the pyramid-shaped distribution of the genuine and impostor matching distances generated by the SVM classifier with RBF kernel.
Figure 7. An example of a good hand image used in the automated hand geometry and palmprint verification system.
Figure 8. Error that occurs when the user does not place his fingers properly on the scanner platform, causing invalid features to be detected.
Figure 9. Distribution of test points for the hand geometry and palmprint populations for the non-linear SVM classifier with RBF kernel (+: genuine, w: impostor).
Figure 10. Matching distance distribution of the SVM with RBF kernel.
7. CONCLUSION
In this paper, a prototype bimodal biometric system using a single sensor has been developed. The fusion of the two individual biometric matching scores significantly reduces the equal error rate. The proposed fusion method using an SVM with RBF kernel has been compared with the individual palmprint and hand geometry classifiers and with two combined classifiers, namely the non-weighted sum rule and the weighted sum rule. The SVM with RBF kernel shows the highest total success rate of 99.99% on our database when FAR equals zero, without increasing the FRR. Further work is planned on robust testing for unbalanced cases, experimental comparison of different kinds of fusion approaches (e.g. neural networks and fuzzy integrals), and increasing the database size.
8. REFERENCES
[1] Wang, Y., Tan, T., and Jain, A.K. "Combining Face and
Iris Biometrics for Identity Verification", Proc. of 4th Int'l
Conf. on Audio- and Video-Based Biometric Person
Authentication (AVBPA), pp. 805-813, Guildford, UK,
June 9-11, 2003.
[2] Sanderson, C., and Paliwal, K.K. “Information Fusion and
Person Verification Using Speech and Face Information”,
IDIAP-RR 02-33, 2003.
[3] Kumar, A., Wong, C.M., Shen, C., and Jain, A.K. "Personal Verification Using Palmprint and Hand Geometry Biometric", Proc. of 4th Int'l Conf. on Audio- and Video-Based Biometric Person Authentication (AVBPA), Guildford, UK, 2003.
[4] Kittler, J., Hatef, M., Duin, R.P.W., and Matas, J. “On
Combining Classifiers”, IEEE Trans. Pattern Analysis and
Machine Intell. Vol. 20, No. 3, 1998, 226-239.
[5] Sanderson, C., Bengio, S., Bourlard, H., Johnny, M.,
Ronan C., Mohamed F.B., Fabien C., Marcel, S. “Speech
& Face based Biometric Authentication at IDIAP”.
IDIAP-RR 03-13, February 2003.
[6] Jain, A.K., Ross, A., and Pankanti, S. "A prototype hand geometry-based verification system", Proc. of 2nd Int'l Conference on Audio- and Video-based Biometric Person Authentication (AVBPA), pp. 166-171, 1999.
[7] Wai, K.K., David, Z., Li, W. “Palmprint feature extraction
using 2-D Gabor filters”, Pattern Recognition, Volume 36,
Issue 10, October 2003, pp. 2339-2347.
[8] S.-Reillo, R. “Hand Geometry Pattern Recognition
Through Gaussian Mixture Modelling”, IEEE, pp 937-
940, 2000.
[9] S.-Reillo, R., and S.-Avila, C. "Biometric Identification through Hand Geometry Measurements", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, pp. 1168-1171, 2000.
[10] Michael, G.K.O., Tee, C., Andrew, T.B.J., and David, N.C.L. "Automated Hand Geometry Verification System Base on Salient Points", The 3rd International Symposium on Communications and Information Technologies (ISCIT 2003), pp. 720-724, Songkla, Thailand.
[11] Sonka, M., Hlavac, V., and Boyle, R. "Image Processing, Analysis and Machine Vision", PWS Publishing, 1999.
[12] Tee, C., Michael, G.K.O., Andrew, T.B.J., and David, N.C.L. "An Automated Biometric Palmprint Verification System", ISCIT 2003, pp. 714-719, Songkla, Thailand.
[13] Peter, N.B., Hespanha, J.P., and David, J.K. “Eigenfaces
vs. Fisherfaces: Recognition Using Class Specific Linear
Projection”, IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 19, no. 7, July 1997.
[14] Lu, X., Wang, Y., and Jain, A.K. "Combining Classifiers
for Face Recognition", Proc. ICME 2003, IEEE
International Conference on Multimedia & Expo, vol. III,
pp. 13-16, Baltimore, MD, July 6-9, 2003.
[15] Poh, N., Samy, B., Jerzy, K. “IDIAP Research Report: A
Multi-sample Multi-source Model for Biometric
Authentication”, April 2002.
[16] Issam, E.-N., Yang, Y., Miles N.W., Nikolas, P.G., and
Robert, N. "Support Vector Machine Learning for
Detection of Microcalcifications in Mammograms", IEEE
International Symposium on Biomedical Imaging,
Washington D.C., July 2002.
[17] Andrew, T.B.J., Samad, S.A., and Hussain, A. 2002.
“Fusion Decision for a Bimodal Biometric Verification
System Using Support Vector Machine and Its
Variations.” ASEAN Journal on Science and Technology
for development, 19(1):1-16.
[18] Steve, G. "ISIS Technical Report: Support Vector Machines for Classification and Regression", Image, Speech & Intelligent Systems Group, University of Southampton, 14th May 1998.