Hlaing Htake et al. / IJAIR ISSN: 2278-7844
© 2012 IJAIR. ALL RIGHTS RESERVED 57
Effective Method of Face Recognition and Palmprint
Biometrics for Personal Identification
Hlaing Htake Khaung Tin
University of Computer Studies, Yangon, Myanmar
hlainghtakekhaungtin@gmail.com
Abstract- Face and palmprint are two of the biometric characteristics with the highest user acceptance. This paper proposes a method that integrates face recognition and palmprint recognition and performs fusion at the score level. The two biometric traits are collected and stored in a database. The YCbCr and HSV color spaces are applied for skin detection and face extraction. In the identification stage, images are compared against the stored templates and a match result is generated. Face images are compared using Principal Component Analysis (PCA), and the palmprint matching result is also generated using PCA. The eigenface approach is thus used for both palmprint and face recognition.
Keywords: Biometrics; face recognition; palmprint recognition;
fusion; Principal Component Analysis (PCA); personal
identification.
I. INTRODUCTION
Biometrics provides the identity of a user based on his/her physiological or behavioral characteristics. Physiological characteristics use human body parts for authentication, such as the fingerprint, iris, ear, palm print and face. Behavioral characteristics include actions performed with body parts, such as voice, signature and gait. Authentication based on a token or a password can be stolen or forgotten; a person's friends or relatives can easily access a token or guess a password. It is therefore necessary to add features that can largely eliminate the limitations of token-based and knowledge-based methods [1].
Nowadays, as ever more security is demanded throughout human society, face recognition has become a very active area of biometrics research, and numerous face recognition approaches with considerable success have been proposed. However, it is still difficult to recognize human faces accurately in real time, especially under variable circumstances such as variations in illumination, pose, facial expression and makeup. In face recognition, the similarity of human faces and their unpredictable variations are the greatest obstacles for a large database. An age-based face recognition system was developed to address these problems: the face database is divided into 11 groups depending on age, each aging group contains at least 20 face images, and both the face image and its personal record are included in each aging database group [2].
Face recognition is one kind of biometric system. Some examples of human biometric features are: signature, which studies the pattern, speed, acceleration and pressure of the pen when writing one's signature; fingerprint, which studies the pattern of ridges and furrows on the surface of the fingertip; voice, which studies the way humans generate sound with the vocal tract, mouth, nasal cavities and lips; iris, which studies the annular region of the eye bounded by the pupil and the sclera; retina, which studies the pattern formed by the veins beneath the retinal surface of the eye; hand geometry and ear geometry, which take measurements of the human hand and ear respectively; and the facial thermogram, which concerns the heat that passes through facial tissue. Among these, the face is the most natural and best-known biometric [3].
The inner surface of the palm normally contains three flexion creases, secondary creases and ridges. The flexion creases are also called principal lines and the secondary creases are called wrinkles. The flexion creases and the major secondary creases are formed between the 3rd and 5th months of pregnancy [4], while superficial lines appear after birth. Although the three major flexion creases are genetically dependent, most other creases are not [5]; even identical twins have different palm prints. These non-genetically deterministic and complex patterns are very useful in personal identification.
There are two types of palm print recognition research: high resolution and low resolution approaches. The high resolution approach is suitable for forensic applications such as criminal detection [6], while low resolution images are more suitable for civil and commercial applications such as access control. Generally speaking, high resolution refers to 400 dpi or more and low resolution to 150 dpi or less.
In high resolution images, researchers can extract ridges,
singular points and minutia points as features while in low
resolution images, they generally use principal lines, wrinkles
and texture. At the beginning of palm print research, the high-resolution approach was the focus [7-8], but almost all current research concentrates on the low resolution approach because of its potential applications. In this paper, we consider only the low resolution approach since it is the current focus. For civil and commercial applications, low-resolution palm print images are more suitable than high-resolution images because of their smaller file sizes, which result in shorter computation times during preprocessing and feature extraction. They are therefore useful for many real-time palm print applications [9].
J. R. Scolar and P. Navarreto [10] proposed a face recognition algorithm based on eigenspace. J. Yang et al. [11] introduced a new approach to appearance-based
face representation and recognition. Most of the research in
this area is very limited by the size and quality of the database
used.
II. FACE RECOGNITION
A. Face Region Extraction
To support building the database and performing face recognition, face region extraction is carried out based on a skin detection approach. Most existing skin segmentation techniques classify individual image pixels into skin and non-skin categories on the basis of pixel color. To extract the face region, three color spaces, RGB, YCbCr and HSV, are applied for skin detection after de-noising. Detection of skin-colored regions becomes robust only if the chrominance component alone is used in the analysis; therefore, the variation of the luminance component is eliminated as much as possible by choosing the CbCr (chrominance) plane of the YCbCr color space to build the model. The skin color region can be identified by the presence of a certain set of chrominance (i.e., Cr and Cb) values that are narrowly and consistently distributed in the YCbCr color space. RCr and RCb denote the respective ranges of Cr and Cb values that correspond to skin color:
S(x, y) = 1 if Cr(x, y) ∈ RCr and Cb(x, y) ∈ RCb, and S(x, y) = 0 otherwise, (1)

where x = 1, 2, ..., M and y = 1, 2, ..., N. Suitable ranges for RCr and RCb are applied to detect the skin region.
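As a concrete illustration, the chrominance-range test above can be sketched as follows (a minimal sketch: the ITU-R BT.601 conversion coefficients are standard, but the Cr/Cb ranges shown are commonly cited illustrative values, not the ones tuned in this work):

```python
import numpy as np

def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    """Classify each pixel as skin/non-skin from its CbCr chrominance alone."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr conversion; the luminance Y is discarded
    # so that illumination variation does not affect the decision.
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cr_range[0] <= cr) & (cr <= cr_range[1]) &
            (cb_range[0] <= cb) & (cb <= cb_range[1]))
```

The returned boolean mask corresponds to S(x, y) in Eq. (1); the largest connected skin region would then be taken as the face candidate.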
B. Preprocessing
First, the face region is extracted from the input image and resized. Next, gray scale conversion, histogram equalization and re-sizing are performed. Histogram equalization maps the input image's intensity values so that the histogram of the resulting image has an approximately uniform distribution [12-15]. The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function
p(r_k) = n_k / n, k = 0, 1, 2, ..., L-1, (2)

where L is the total number of gray levels, r_k is the kth gray level, n_k is the number of pixels in the image with that gray level, and n is the total number of pixels in the image. p(r_k) gives an estimate of the probability of occurrence of gray level r_k.
By histogram equalization, the local contrast of the object in
the image is increased, especially when the usable data of the
image is represented by close contrast values. Through this
adjustment, the intensity can be better distributed on the
histogram. This allows the areas of lower local contrast to gain
a higher contrast without affecting the global contrast.
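The equalization mapping can be sketched from the histogram p(r_k) of Eq. (2) as follows (a minimal sketch assuming 8-bit gray levels):

```python
import numpy as np

def equalize_hist(gray, levels=256):
    """Map gray levels through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    p = hist / gray.size          # p(r_k) = n_k / n, as in Eq. (2)
    cdf = np.cumsum(p)            # cumulative distribution of gray levels
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[gray]              # look up each pixel's new intensity
```

Pixels in a narrow band of input intensities are spread across the full [0, L-1] range, which is exactly the local-contrast gain described above.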
C. Subspace Face Recognition
Principal Component Analysis (PCA) can perform prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a classical technique that operates in the linear domain, applications having linear models are suitable for it. Consider the PCA procedure on a training set of M face images. Let a face image be represented as a two-dimensional N by N array of intensity values, or equivalently as a vector of dimension N^2. PCA then finds an M'-dimensional subspace whose basis vectors correspond to the directions of maximum variance in the original image space.
This new subspace is normally of much lower dimension (M' << N^2). The new basis vectors define a subspace of face images called face space. All images of known faces are projected onto the face space to find the sets of weights that describe the contribution of each basis vector. By comparing the set of weights for an unknown face with the sets of weights of known faces, the face can be identified. The PCA basis vectors are defined as the eigenvectors of the scatter matrix S:
S = Σ_{i=1}^{M} (x_i − μ)(x_i − μ)^T, (3)

where μ is the mean of all images in the training set and x_i is the ith face image represented as a vector. The eigenvector associated with the largest eigenvalue is the one that reflects the greatest variance in the images; that is, the smallest eigenvalue is associated with the eigenvector that captures the least variance.
A facial image x can be projected onto the M' (<< N^2) dimensions by computing

w_k = u_k^T (x − μ), k = 1, 2, ..., M'. (4)

The vectors u_k are also images, the so-called eigenimages or eigenfaces; they can be viewed as images and indeed look like faces. Face space forms a cluster in image space, and PCA gives a suitable representation of it.
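The eigenface training and matching steps above can be sketched as follows (a minimal sketch; the function names are illustrative, and the small-matrix "snapshot" trick is a standard way to avoid forming the full N^2 × N^2 scatter matrix when M << N^2):

```python
import numpy as np

def train_eigenfaces(X, n_components):
    """Train eigenfaces from X, an (M, N*N) matrix with one flattened face per row."""
    mean = X.mean(axis=0)
    A = X - mean                                   # centred training data
    # Snapshot trick: eigenvectors of the small M x M matrix A A^T map
    # back to eigenvectors of the N^2 x N^2 scatter matrix S of Eq. (3).
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    U = A.T @ vecs[:, order]                       # back to image space
    U /= np.linalg.norm(U, axis=0)                 # unit-length eigenfaces
    return mean, U

def project(x, mean, U):
    """Weights w_k = u_k^T (x - mean), as in Eq. (4)."""
    return U.T @ (x - mean)

def identify(x, mean, U, gallery_weights):
    """Return the index of the gallery face with the smallest Euclidean distance."""
    d = np.linalg.norm(gallery_weights - project(x, mean, U), axis=1)
    return int(np.argmin(d))
```

The same three routines serve for palmprints: the "eigenpalm" coefficients mentioned later are simply the output of `project` trained on palm print ROIs instead of faces.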
III. PALMPRINT
There are three key issues to be considered in developing a palm print identification system.
1) Palm print acquisition: how do we obtain a good quality palm print image in a short time interval, such as 1 second? What kind of device is suitable for data acquisition?
2) Palm print feature representation: which types of palm print features are suitable for identification? How do we represent different palm print features?
3) Palm print identification: how do we search for a queried palm print in a given database and obtain a response within a limited time?
So far, several companies have developed special scanners
to capture high-resolution palm print images [16, 17]. These
devices can extract many detailed features, including minutiae
points and singular points, for special applications. Although
these platform scanners can meet the requirements of on-line
systems, they are difficult to use in real-time applications
because a few seconds are needed to scan a palm. To achieve
on-line palm print identification in real-time, a special device
is required for fast palm print sampling [9]. Fig. 1 illustrates part of a high resolution palm print image and a low resolution palm print image.
Fig. 1. Palm print features in (a) a high resolution image and (b) a low resolution image
A. Pre-processing and ROI Extraction of Palm print Images
The image is pre-processed to obtain the region of interest (ROI). Pre-processing includes image enhancement, image binarization, boundary extraction and cropping of the palm print ROI. The ROI size is 64 × 64 pixels. A sample ROI is shown in Fig. 2.
Fig. 2. Sample of ROI
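A simplified stand-in for this pre-processing pipeline might look like the following (a sketch under strong assumptions: a crude global threshold replaces the enhancement and boundary-extraction steps, and the ROI is simply a 64 × 64 window centred on the binary palm mask):

```python
import numpy as np

def extract_roi(gray, roi_size=64):
    """Binarize a palm image and crop a central 64 x 64 ROI (hypothetical sketch)."""
    mask = gray > gray.mean()                  # crude global binarization
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())    # centroid of the palm region
    half = roi_size // 2
    # clamp so the crop window stays inside the image
    y0 = int(np.clip(cy - half, 0, gray.shape[0] - roi_size))
    x0 = int(np.clip(cx - half, 0, gray.shape[1] - roi_size))
    return gray[y0:y0 + roi_size, x0:x0 + roi_size]
```

A real system would instead locate the valleys between the fingers to fix a repeatable coordinate frame, but the output contract is the same: a fixed-size 64 × 64 ROI for the later stages.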
B. Normalization of Palm print Images
The extracted palm print images are normalized to have a pre-specified mean and variance. Normalization is used to reduce possible imperfections in the image due to sensor noise and non-uniform illumination. Let the gray level at pixel (i, j) in a palm print image be represented by I(i, j). The mean M and variance V of the image can be computed from the gray levels of the pixels. The normalized image G is computed using the pixel-wise operation:
G(i, j) = M0 + sqrt(V0 (I(i, j) − M)^2 / V) if I(i, j) > M,
G(i, j) = M0 − sqrt(V0 (I(i, j) − M)^2 / V) otherwise, (5)

where M0 and V0 are the desired values of the mean and variance, respectively. These values are pre-tuned according to the image characteristics. In all our experiments, the values of M0 and V0 were fixed to 100. Fig. 3 shows a typical palm print image before and after normalization.
Fig. 3. Palm print feature extraction: (a) segmented image, (b) image after normalization
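Equation (5) amounts to a rescaling of each pixel's deviation from the image mean, which can be sketched as follows (a minimal sketch with M0 = V0 = 100 as stated above):

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Pixel-wise mean/variance normalization as in Eq. (5); M0 = V0 = 100."""
    img = img.astype(np.float64)
    m, v = img.mean(), img.var()
    dev = np.sqrt(v0 * (img - m) ** 2 / v)       # deviation term of Eq. (5)
    # add the deviation for pixels above the mean, subtract it below
    return np.where(img > m, m0 + dev, m0 - dev)
```

After this operation the output image has mean M0 and variance V0 regardless of the sensor's brightness and contrast, which is what makes templates from different captures comparable.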
IV. FUSION OF FACE AND PALMPRINT
In the enrollment stage, face and palmprint images are acquired, and feature vectors are generated for each biometric trait and stored separately in the system database. At identification time, when a user wants to prove his/her identity, face and palmprint images are captured using a web camera. These images again undergo the image preprocessing and feature extraction stages.
The Euclidean distance formula is used to compute the distance between the eigenpalm coefficients of the template and the query palm image, generating match result 1. Likewise, the Euclidean distance between the template and the query face image generates match result 2. These two results are then passed to the fusion stage.
Face images are represented using eigen-coefficients, and the output of the face matcher is a distance; palmprint images are represented using eigenpalm coefficients, and the output of the palmprint matcher is likewise a distance. Finally, the total result is compared against a set threshold value, which decides whether the claimed identity is accepted or rejected.
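The decision step can be sketched as follows (a minimal sketch; the equal weighting of the two distances is an assumption for illustration, since the combination rule is not specified above):

```python
def fuse_and_decide(d_face, d_palm, threshold, w_face=0.5, w_palm=0.5):
    """Score-level fusion of the two matcher distances.

    The equal weights are an illustrative assumption, not taken from
    the paper; in practice they would be tuned on validation data.
    """
    total = w_face * d_face + w_palm * d_palm  # combined matching score
    return total <= threshold                  # True -> accepted as genuine
```

Because both matchers output Euclidean distances on comparable eigen-coefficient scales, a weighted sum is the simplest score-level rule; the threshold trades off false accepts against false rejects.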
Fig. 4. The extracted record for the recognized person
Fig. 5. Recognition Errors
V. CONCLUSIONS
The proposed technique can thus be used for many real-time applications such as face recognition in crowded public places, banking, airports, stations, highway gates and border trade. The advantages of this approach are a shorter processing time and a better feature detection rate than the conventional method; it achieved a recognition rate of 99.5% with an acceptable processing time (0.36 sec). The extracted record for a recognized person is shown in Fig. 4, and the recognition errors in Fig. 5. In summary, we conclude that our face and palmprint personal identification system can achieve good performance in terms of speed and accuracy.
ACKNOWLEDGMENT
The author is grateful to her family who specifically offered
strong moral and physical support, care and kindness.
REFERENCES
[1] A. K. Jain, A. Ross and S. Prabhakar, “An introduction to biometric recognition”, IEEE Trans. Circuits Syst. Video Technology, vol. 14, no. 1, pp. 4-20, 2004.
[2] Hlaing Htake Khaung Tin and Myint Myint Sein, “Developing the Age Dependent Face Recognition System”, International Journal of Intelligent Engineering and Systems, vol. 4, no. 4, 2011.
[3] Hlaing Htake Khaung Tin and Myint Myint Sein, “Effective Method of
Age Dependent Face Recognition”, International Journal of Computer
Science and Informatics, vol.1, issue.2, 2011.
[4] M. Cannon, M. Byrne, D. Cotter, P. Sham, C. Larkin, E. O’Callaghan, “Further evidence for anomalies in the hand-prints of patients with schizophrenia: a study of secondary creases”, Schizophrenia Research, vol. 13, pp. 179-184, 1994.
[5] A. Kong, D. Zhang and G. Lu, “A study of identical twins’ palm prints for personal verification”, Pattern Recognition, vol. 39, no. 11, pp. 2149-2156, 2006.
[6] NEC Automated Palmprint Identification System
http://www.necmalaysia.com.my/Solutions/PID/products/ppi.html
[7] N. Duta, A.K. Jain and K.V. Mardia, “Matching of palm prints”, Pattern
Recognition Letters, vol. 23, no. 4, pp. 477-486, 2002.
[8] W. Shu and D. Zhang, “Automated personal identification by
palmprint”, Optical Engineering, vol. 38, no.8, pp. 2359-2362, 1998.
[9] David Zhang, Wai-Kin Kong and Jane You, “On-Line Palmprint
Identification”.
[10] J. R. Scolar, P. Navarreto, “Eigenspace-based face recognition: a comparative study of different approaches”, IEEE Trans. Systems, Man, and Cybernetics - Part C: Applications, vol. 35, no. 3, 2005.
[11] J. Yang, D. Zhang, A. F. Frangi, J. Y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, 2004.
[12] Y. H. Kwon and N. da Vitoria Lobo, “Locating Facial Features for Age Classification”, In Proceedings of the SPIE - the International Society for Optical Engineering Conference, pp. 62-72, 1993.
[13] J. Hayashi, M. Yasumoto, H. Ito, Y. Niwa and H. Koshimizu, “Age and Gender Estimation from Facial Image Processing”, In Proceedings of the 41st SICE Annual Conference, pp. 13-18, 2002.
[14] A. Lanitis, “On the Significance of Different Facial Parts for Automatic Age Estimation”, In Proceedings of the 14th International Conference on Digital Signal Processing, pp. 1027-1030, 2002.
[15] J. R. Scolar, P. Navarreto, “Eigenspace-based face recognition: a comparative study of different approaches”, IEEE Trans. Systems, Man, and Cybernetics - Part C: Applications, vol. 35, no. 3, 2005.
[16] http://www.nectech.com/afis/download/PalmprintDtsht.q.pdf - NEC
automatic palm print identification system.
[17] http://www.printrakinternational.com/omnitrak.htm - Printrak automatic
palm print identification system.