Lightweight and Privacy-Preserving Template Generation for Palm-Vein Based Human Recognition

Fawad Ahmad, Lee-Ming Cheng, Senior Member, IEEE, and Asif Khan
Abstract—The use of human biometrics is becoming widespread
and their major application is human recognition for controlling
unauthorized access to both digital services and physical localities.
However, the practical deployment of human biometrics for
recognition poses a number of challenges, such as template storage
capacity, computational requirements, and privacy of biometric
information. These challenges are important considerations, in
addition to performance accuracy, especially for authentication
systems with limited resources. In this paper, we propose a wave
atom transform (WAT) based palm-vein recognition scheme. The
scheme computes, maintains, and matches palm-vein templates
with low computational complexity and low storage requirements
in a secure and privacy-preserving environment. First, we
extract palm-vein traits in the WAT domain, which offers sparser
expansion and better capability to extract texture features. Then,
randomization and quantization are applied to the extracted
features to generate a compact, privacy-preserving palm-vein
template. We analyze the proposed scheme for its performance
and privacy-preservation. The proposed scheme obtains equal
error rates (EER) of 1.98%, 0%, 3.05%, and 1.49% for PolyU,
PUT, VERA and our palm-vein datasets, respectively. The
extensive experimental results demonstrate comparable matching
accuracy of the proposed scheme with a minimum template size
and computational time of 280 bytes and 0.43 seconds,
respectively.
Index Terms—Feature vector, palm-vein recognition, personal
authentication, privacy-preserving template, vascular biometrics,
wave atom transform.
I. INTRODUCTION
The consensus on the unique nature of biometric features
among human beings has made human biometrics evolve
into a dominant means of proving an individual's identity in secure
border management, financial transactions, forensics,
access control, etc. [1]. For greater security needs in some
systems, biometrics can be utilized to complement other
factors, such as passwords, personal identification numbers
(PINs) and smart cards etc., in multi-factor authentication [2].
Human biometrics can be categorized into extrinsic and
intrinsic biometrics. Extrinsic biometrics, such as fingerprint,
face, iris and palmprint, are regarded as susceptible to forgery
attacks, which can compromise user privacy and
system security [3]. In contrast, intrinsic biometrics, such as
finger-vein [4], [5], palm-vein [6], [7], sclera-vein [8], dorsal-
F. Ahmad and L.-M. Cheng are with the Department of Electronic
Engineering, City University of Hong Kong, Kowloon, Hong Kong (e-mail:
fahmad4-c@my.cityu.edu.hk; lm.cheng@cityu.edu.hk).
Asif Khan is with the Faculty of Computer Science and Engineering,
Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Swabi,
KP, Pakistan (e-mail: asifspecial@gmail.com).
hand-vein [9], [10] and human DNA [11], are inherently
resilient against imitation of genuine biometrics in
spoofing attacks [12], [13]. Palm-vein, being an internal
biometric of live human bodies, has attracted a noteworthy share
of biometrics research in recent years [3], [6], [14], [15]. Palm-
vein traits are difficult to forge or duplicate due to live palm
image acquisition along with the requirement of user’s consent
[3], [16]. It involves capturing vein patterns by detecting live
palm under near infrared (NIR) illumination. NIR illumination
uses above 700nm of wavelength and requires circulation of
blood to capture the vein pattern, and it penetrates deeper into
skin layers than visible light does. In addition to vein patterns, NIR
illumination can capture partial palm line information as well
[17].
Most of the existing palm-vein based recognition schemes
focus only on recognition accuracy. However, storage
requirements, matching speed, and security of biometric
templates (compact representations of distinctive biometric
features) are also important considerations to constitute a
comprehensive and practical authentication system. The server
or controlling unit, which performs verification process and
maintains a database of registered biometric templates, is
overwhelmed by large, computationally complex templates. In
particular, resource-constrained devices that conduct biometric
verification in remote and distributed setups further reduce the
applicability of these schemes.
Furthermore, since biometric template is derived from
original biological features, exposure of unsecured template to
adversaries can compromise personal privacy rights [18]. First,
the exposed template can be used to reconstruct the original-
like features and use them in identity theft to impersonate the
legitimate user. Second, unprotected templates across different
databases can be cross-matched to track an individual. Third,
due to the unique nature of biometrics in each individual,
compromised original biometric information cannot be revoked
and renewed [19]. The following properties ensure a
privacy-preserving biometric template:
1) Non-invertibility: Biometric template should be
irreversible to the original biometric features of an individual.
2) Cancelability: Features identifying an individual could
be changed, withdrawn and newly generated.
3) Performance: Security of biometric features should not
result in degradation of recognition performance.
To satisfy the above mentioned requirements, in this paper
we propose an efficient and privacy-preserving wave atom
transform (WAT) based palm-vein recognition technique.
WAT is a mathematical transform which adapts to arbitrary
local directions of a pattern with sparser expansion of wave
equations and sharp frequency localization. It is well-suited to
represent the texture features in human palm-vein images. Vein
and texture features in a palm-vein image are extracted using
WAT, processed and then represented as a binary sequence.
First of all, we segment a fixed and uniform region of interest
(ROI) from the acquired palm-vein images, which minimizes
the effect of intra-subject variations. Then, palm-vein features
in the ROI image are extracted using middle scale band of the
wave atom coefficients tiling. Middle scale band maintains
insensitivity to intra-subject variations and also distinguishes
inter-subject distinct vein patterns. Finally, randomization
using the Rényi chaotic map [20], followed by quantization, is
applied to the feature vector to produce a privacy-preserving,
compact and fast-matching template. We calculate normalized
Hamming distance (NHD) as a similarity index between two
palm-vein templates. This paper conducts experiments to
rigorously evaluate performance of the proposed scheme using
two touch-based and two contactless palm-vein datasets. We
perform experiments, in both authentication and identification
scenarios, with a varying number of registered palm-vein
samples in the template and analyze its effect on recognition
accuracy. Rigorous experimental results validate that the
proposed scheme obtains decent recognition accuracy with a
small size, speedy, and secure palm-vein template.
The rest of the paper is organized as follows. Section II
reports a brief background of the related work. Section III
explains the methodology of the proposed scheme. Experimental
analysis and privacy analysis are presented in Section IV and
Section V, respectively. Lastly, Section VI concludes the paper.
II. RELATED WORK
Mainly, there are two types of techniques for creating
biometric templates after feature extraction from the captured
biometric signal. The first type transforms biometric features
into a quantized feature vector, aimed at low-power matching.
The extracted features are processed and manipulated into a
new representation [2], [21]-[25]. The second type is based
purely on feature extraction which stores original extracted
features as templates [6], [26], [27]. These schemes mostly
result in computationally complex and big-size templates.
Furthermore, they require sophisticated matching techniques to
detect inter-subject differences and resist intra-subject
variations.
Han and Lee in [21] extracted palm-vein features based on
optimized algorithm of single 2-D Gabor filter. The Gabor
filtered features are encoded into a feature vector. The scheme
achieves good recognition performance with a small size and
relatively simple template. However, it does not analyze
template security and requires estimating the optimum
parameters of the Gabor filter using a set of training images
beforehand. In another study by Lee [22], a directional coding
technique using 2-D Gabor filter has been proposed, which
achieves similar performance as in [21]. Li and Leng [24]
constructed a cancelable 2DPalmHash code using palm-vein
and palmprint as multi-modal biometric. They deployed Gabor
filter for feature extraction from palm-vein and palmprint
images. Some of the schemes in the literature on palm-vein
utilize scale invariant feature transform (SIFT) features and its
variations. For example, [28] proposed to use SIFT matching
for palm-vein based verification. The detected feature points are
represented using SIFT descriptors. Each descriptor is a 128-
dimension vector, and the total size of the feature template
depends on the number of descriptors. In [29], Yan et al.
extracted and fused local invariant features from multiple
samples using SIFT. Kang et al. [30] used a variation of SIFT
features i.e. RootSIFT for feature extraction and matching,
which is a more stable local invariant feature extraction
technique. SIFT and RootSIFT based schemes encounter
increased template size and computational complexity.
Furthermore, Zhou and Kumar in [3] proposed two methods,
i.e. Hessian phase based method and neighborhood matching
Radon transform (NMRT) based method for palm-vein based
identification. The Hessian phase method extracts vein patterns
by utilizing eigenvalues of Hessian matrix of palm-vein image.
Among the existing schemes, NMRT based scheme
demonstrates one of the best recognition performance with a
small size template. In [17], Zhang et al. utilizes palm images
captured under both visible and NIR spectra to achieve high
recognition performance. However, this scheme also suffers
from large size of the template. In another study [6], Kang and
Wu presented an improved mutual foreground local binary
pattern (LBP) based scheme. The scheme achieves high
verification accuracy; however, it does not report template’s
efficiency in terms of size, speed and privacy-preservation.
Wang et al. [31] fused palm-vein and palmprint features using
wavelet transform. The scheme does not extract local textured
features well, due to which it does not achieve high recognition
performance.
Most of the aforementioned schemes emphasize performance
accuracy alone and fail to simultaneously provide fast-matching,
small-size, and secure palm-vein templates. Some schemes have
proposed cancelable-template based recognition [24], [32];
however, these
schemes do not guarantee irreversibility and high performance
accuracy in both touch-based and contactless setups. Template
security is ignored by the majority of conventional methods on
palm-vein biometrics.
Two kinds of approaches can be deployed for the generation
of a privacy-preserving template. The first kind transforms the
extracted biometric features into a different format and applies
some random projections to hide the relationship between the
extracted features [2], [19], [33]-[35]. The second kind uses
biometric cryptosystem, such as Advanced Encryption
Standard (AES) and RSA encryption techniques [36], and
secure multiparty computation such as homomorphic
cryptosystem [37], [38]. The encrypted templates are no longer
protected if the secret decryption key is disclosed to
adversaries. Also, the encrypted templates need to be decrypted
before every matching attempt [34]. Secure multiparty
computation allows templates matching in encrypted domain.
However, these techniques suffer from computational
complexity, which limits their use in practical applications [1],
[39].
III. PROPOSED SCHEME
This section explains implementation of the proposed palm-
vein recognition scheme in detail. Block diagram illustration of
the major steps involved in our method is presented in Fig. 1.
A. Image Preprocessing
The proposed scheme is applicable in both touch-based and
contactless scenarios. Ensuring high recognition accuracy is
particularly challenging in contactless setup, due to the intra-
subject variations during image acquisition, such as hand
rotation, translation, scaling and differences in hand postures
etc. This necessitates an effective and stable coordinate system
to segment a fixed, uniform and rotation invariant ROI from the
acquired palm image. Fixed ROI extraction minimizes
intra-subject variations across different palm images by
capturing the same palm-vein features. The segmented ROI is
further processed for enhancing the clarity of palm-vein
features. The key steps involved in ROI segmentation and
enhancement are shown in Fig. 2.
1) ROI Segmentation
In order to curtail the amount of intra-subject disparities and
normalize the rotation variations, we develop an automatic
palm-vein image segmentation and normalization technique, as
shown in Fig. 3. The technique is based on locating two valley
points, i.e., one between index finger and middle finger and the
second between ring finger and little finger, on the hand. These
two valley points are used to find the rotation angle for rotation
normalization and define the size of ROI. Selection of the valley
points as a reference for ROI segmentation allows to extract an
approximately similar palm region at different times for the
same person. This is because the rest of the palm region remains
approximately at same position with respect to the valley
points.
First of all, binarization is applied to clearly separate hand
region pixels from the background, as shown in Fig. 3(b). Then,
we trace pixels of hand contour by employing Moore-Neighbor
tracing algorithm modified by Jacob's stopping criteria in [40].
The traced pixels of hand contour are shown in Fig. 3(c). A
reference point (Rf) is defined for locating the valley points on
hand contour. We choose Rf in the middle of palm, slightly
above the bottom margin of the palm boundary. Then, we trace
the distance between Rf and selected points on hand contour
(Px(i), Py(i)) by calculating Euclidean distance Ed(i) among
them. Ed(i) is calculated as

E_d(i) = \sqrt{(P_x(i) - Rf_x)^2 + (P_y(i) - Rf_y)^2}    (1)

where (Px(i), Py(i)) represents the x and y coordinates of the
i-th pixel on the hand contour, (Rfx, Rfy) represents the x and y
coordinates of the reference point Rf, and Ed(i) is the Euclidean
distance between the i-th contour pixel and Rf.
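The distance profile of Eq. (1) and the valley-point detection it supports can be sketched as follows; the toy contour, reference point, and helper names are illustrative, not values from the paper.

```python
import math

def euclidean_distances(contour, rf):
    """Distance E_d(i) from reference point rf to each contour pixel (Eq. 1)."""
    rfx, rfy = rf
    return [math.hypot(px - rfx, py - rfy) for px, py in contour]

def local_minima(dist):
    """Indices where the distance profile dips: candidate valley points."""
    return [i for i in range(1, len(dist) - 1)
            if dist[i] < dist[i - 1] and dist[i] < dist[i + 1]]

# Toy contour: two 'fingertips' with a valley between them.
contour = [(0, 10), (1, 8), (2, 6), (3, 8), (4, 10)]
dist = euclidean_distances(contour, (2, 0))   # reference point inside the palm
valleys = local_minima(dist)                  # index of the valley pixel
```

On a real hand contour the profile shows five local maxima (fingertips) and four local minima (valleys), of which the second and fourth are kept as P1 and P2.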
The calculated distances are depicted in a distance
distribution graph in Fig. 3(d). The graph resembles geometric
shape of the hand with five local maximums and four local
minimums. The local minimums on distance distribution graph
correspond to the four valley points on the palm. Valley points
are detected by finding the pixels on hand contour at local
minimums. The points P1 and P2, i.e., second and fourth valley
points of the hand, are selected to be the reference points in our
valley points based coordinate system, as shown in Fig. 3(e).
We specify a line P1P2 between P1 and P2 and a horizontal line
which crosses either of the two points. If the rotation of the palm
is not normalized, P1P2 makes an angle θ with the horizontal
line, which is the angle required for rotation normalization. The
angle θ can be computed in the triangle by calculating the inverse
tangent of the opposite side divided by the adjacent horizontal
side:

θ = tan^{-1}((P2_y - P1_y) / (P2_x - P1_x))    (2)

where (P1x, P1y) represent the x and y coordinates of the valley
point P1, (P2x, P2y) represent the x and y coordinates of the valley
point P2, and θ is the angle of rotation.
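Eq. (2) is a one-line computation; as an implementation choice not stated in the paper, `atan2` is used here instead of a plain inverse tangent so that a vertical P1P2 line does not divide by zero.

```python
import math

def rotation_angle(p1, p2):
    """Angle theta (degrees) between line P1P2 and the horizontal (Eq. 2)."""
    (p1x, p1y), (p2x, p2y) = p1, p2
    # atan2 handles p2x == p1x, where Eq. (2)'s ratio would be undefined.
    return math.degrees(math.atan2(p2y - p1y, p2x - p1x))

theta = rotation_angle((0, 0), (10, 10))   # 45-degree tilt
```

The full palm image is then rotated by −θ so that P1P2 becomes horizontal before the ROI is cut.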
Fig. 1. General framework of the proposed scheme
Fig. 2. Key steps in ROI segmentation and enhancement
Fig. 3. Illustration of ROI segmentation: (a) original palm image, (b) binary
palm image, (c) palm boundary detection, (d) distance between middle of palm
and palm boundary, (e) two valley points selection and finding angle θ for
rotation normalization, and (f) a square ROI extraction.
Fig. 4. Examples of rotation normalization. Row 1: Captured palm images with
varying rotations. Row 2: Images are normalized according to the required
amount of rotations. Row 3: Segmented ROI images.
As shown in Fig. 3(f), the original palm image is rotated by
angle θ to specify the new reference valley points P1 and P2,
which eventually locates the ROI. The location and size of ROI
are defined based on the position and distance between P1 and
P2. We locate the ROI a few pixels away from the reference line
P1P2. The size of the ROI is defined as follows:

D_{P1P2} = \sqrt{(P2_x - P1_x)^2 + (P2_y - P1_y)^2}    (3)

S_{ROI} = α · D_{P1P2}    (4)

where D_{P1P2} is the distance between P1 and P2, SROI denotes
the size of the ROI, and α is a controlling parameter to expand or
shrink the ROI. Finally, the ROI, specified by SROI, is extracted
from the full hand image and scaled to fixed dimensions. Fig. 4
shows examples of rotation normalization of the same palm with
different rotation angles. The extracted ROIs, in the third row,
correspond to almost the same region of the palm after rotation
normalization.
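The ROI sizing of Eqs. (3)-(4) reduces to a few lines; the value of α below is illustrative, since the paper does not specify it at this point.

```python
import math

def roi_size(p1, p2, alpha=1.2):
    """Side length S_ROI of the square ROI, proportional to the valley-point
    distance (Eqs. 3-4). alpha expands or shrinks the ROI; 1.2 is illustrative."""
    d = math.dist(p1, p2)   # distance between P1 and P2 (Eq. 3)
    return alpha * d        # S_ROI (Eq. 4)

s = roi_size((0, 0), (30, 40))   # d = 50, so S_ROI = 60.0
```

Tying the ROI size to D_{P1P2} makes the segmentation scale-invariant: a palm captured closer to the sensor yields a proportionally larger ROI, which is then rescaled to fixed dimensions.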
2) Segmented Image Enhancement
The extracted ROI image is further processed to enhance the
contrast and clarity of vein texture. The captured palm images
often exhibit low contrast and blurred vein texture [41] due to
low sensitivity of NIR sensor. To enhance vein texture contrast
as compared to its background, we first estimate the average
background intensities of image blocks and use nonlinear
interpolation for expanding to the block size. We divide the
image into 16 × 16 size non-overlapped blocks and calculate
their average intensities. These average intensities are expanded
to the block size by adopting nonlinear interpolation. The
estimated background profile is then subtracted from the
original ROI image to produce a clearer ROI image. Finally, we
apply histogram equalization to further enhance the image
contrast. Image enhancement results are shown in Fig. 5.
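A minimal sketch of the background-subtraction step, assuming nearest-neighbour expansion of the block means (the paper uses nonlinear interpolation) and omitting the final histogram equalization; the 4 × 4 image and block size are toy values in place of the paper's 16 × 16 blocks.

```python
def block_background(img, bs=4):
    """Estimate the background as per-block mean intensity, expanded back to
    block size. Nearest-neighbour expansion stands in for the paper's
    nonlinear interpolation."""
    h, w = len(img), len(img[0])
    bg = [[0.0] * w for _ in range(h)]
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            block = [img[y][x] for y in range(by, min(by + bs, h))
                               for x in range(bx, min(bx + bs, w))]
            mean = sum(block) / len(block)
            for y in range(by, min(by + bs, h)):
                for x in range(bx, min(bx + bs, w)):
                    bg[y][x] = mean
    return bg

def subtract_background(img, bg):
    """Background-subtracted image, clipped at zero."""
    return [[max(0.0, p - b) for p, b in zip(ri, rb)]
            for ri, rb in zip(img, bg)]

# Toy 4x4 image: uniform background of 10 with one bright 'vein' pixel.
img = [[10, 10, 10, 10],
       [10, 10, 30, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
out = subtract_background(img, block_background(img, bs=4))
```

After subtraction the flat background drops to zero while the vein pixel stands out, which is exactly the contrast gain the enhancement step is after.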
Fig. 5. Enhancement of segmented ROI: (a) original ROI, (b) after subtracting
the estimated background intensities, (c) ROI after histogram equalization.
B. Overview of Wave Atom Transform
A promising palm-vein based biometric authentication
system requires extraordinary feature extraction capabilities to
efficiently represent the salient palm-vein features. Wave
atoms, introduced by Demanet and Ying [42], are a variant of
two-dimensional wavelet packets that obey the parabolic
scaling law, wavelength ~ (diameter)2. Wave atoms are capable
of sparsely representing oriented textures and oscillatory
functions, such as fingerprint, seismic profile and engineering
surfaces, etc. They interpolate between Gabor atoms and
wavelets. WAT is better matched for modelling oscillatory
patterns and oriented textures in an image than other
representations, such as the wavelet transform, Gabor filters, and
the curvelet transform. Wave atoms are described as φ_μ(x) with
subscript μ = (j, m, n) = (j, m1, m2, n1, n2), where j, m1, m2,
n1, n2 are integers. A point (x_μ, ω_μ) is indexed in phase-space
as

x_μ = 2^{-j} n,   ω_μ = π 2^{j} m,   C_1 2^{j} ≤ max_{i=1,2} |m_i| ≤ C_2 2^{j}    (5)

Here, C1 and C2 represent two positive constants, x_μ is the
position vector and the center of φ_μ(x), and ω_μ represents the
wave vector determining the centers of both bumps of \hat{φ}_μ(ω)
as ±ω_μ.
1) One-Dimensional Wave Atoms
Wave atoms are created from tensor products of one-
dimensional (1D) wave packets. A 1D wave packet is represented
as ψ^j_{m,n}(x), where j ≥ 0, m ≥ 0, and n ∈ Z, centered in
frequency around ±ω_{j,m} = ±π 2^{j} m, with two constants C1, C2
such that C1 2^{j} ≤ m ≤ C2 2^{j}, and centered in space around
x_{j,n} = 2^{-j} n. The basis function for the 1D wave packet is
given below.

ψ^j_{m,n}(x) = ψ^j_m(x − 2^{-j} n) = 2^{j/2} ψ^0_m(2^{j} x − n)    (6)

Here \hat{ψ}^0_m is defined as follows:

\hat{ψ}^0_m(ω) = e^{−iω/2} [ e^{iα_m} g(ε_m (ω − π(m + 1/2)))
               + e^{−iα_m} g(ε_{m+1} (ω + π(m + 1/2))) ]    (7)

where ε_m = (−1)^m, α_m = (π/2)(m + 1/2), and g is a real-valued
compact-support (C^∞) bump function such that
Σ_m |\hat{ψ}^0_m(ω)|^2 = 1.
2) Two-Dimensional Wave Atoms
Wave atoms are based on a generalization of the 1D wrapping
strategy to two dimensions (2D). The basis function for 2D wave
atoms with four bumps can be formed by individually taking
products of 1D wave packets. 2D wave atoms are indexed by
μ = (j, m, n), with m = (m1, m2) and n = (n1, n2), and their basis
function is defined by modifying (6) as in the equation given
below.

φ^{+}_μ(x1, x2) = ψ^j_{m1}(x1 − 2^{−j} n1) ψ^j_{m2}(x2 − 2^{−j} n2)    (8)

Likewise, a dual orthonormal basis function can be defined
from the Hilbert-transformed wavelet packets as follows.

φ^{−}_μ(x1, x2) = Hψ^j_{m1}(x1 − 2^{−j} n1) Hψ^j_{m2}(x2 − 2^{−j} n2)    (9)

By combining (8) and (9), wave atom frames φ^{(1)}_μ and φ^{(2)}_μ,
with two bumps each, are generated in the frequency domain, so
that directional wave packets oscillate in one direction. The two
frames φ^{(1)}_μ and φ^{(2)}_μ, which join to form a wave atom frame
φ_μ = (φ^{(1)}_μ, φ^{(2)}_μ), are individually represented as follows.

φ^{(1)}_μ = (φ^{+}_μ + φ^{−}_μ) / 2,    φ^{(2)}_μ = (φ^{+}_μ − φ^{−}_μ) / 2    (10)
As illustrated in Fig. 6, transformation of an image into 2D
wave atoms yields two frames, φ^{(1)} and φ^{(2)}, consisting of
several scale bands. Each band consists of a number of sub-
blocks containing numerous wave atom coefficients. For 256
× 256 and 128 × 128 images (the sizes of images in the databases
used in this paper), the number of scale bands generated in each
frame is five and four, respectively. The number of coefficients
in the sub-blocks of a higher scale band is a multiple of that of
its lower scale band. The first sub-block in each frame of all the
bands, except the first band, is left empty for compatibility with
the previous band.
Fig. 6. Wave atom coefficients distribution in each scale band of 256 × 256
size image
C. Feature Extraction in WAT domain
We extract palm-vein image features from the 3rd scale band of
the wave atom coefficient tiling. The invariance strength of wave
atom coefficients varies among different scale bands.
Coefficients in the low-frequency bands, i.e. the 1st and 2nd
bands, are less sensitive to various types of changes in an image
than those in the high-frequency scale bands, i.e. the 4th and 5th
bands [43]. Furthermore, the lower
scale bands capture less image details and mostly represent
global features of an image, such as overall luminance and
energy of image pixels. On the other hand, the higher scale
bands preserve details and local descriptors in an image, such
as key points, distinct structures and localized textures etc. The
3rd scale band, being the middle band, exhibits a special
characteristic of keeping a balance between the properties of
lower and higher bands [44]. As compared to the lower bands, it
represents a decent amount of image detail to capture enough
vein pattern information. Moreover, it possesses higher
invariance strength than the upper bands and offers better
desensitization against the inevitable image variations. For
better palm-vein recognition, the scheme needs to be insensitive
to intra-subject variations as well as capture distinct vein
patterns in different images. Scale band 3 maintains the
required tradeoff between minimizing the sensitivity to intra-
subject variations and distinguishing inter-subject distinct vein
patterns. Therefore, the proposed method utilizes wave atom
coefficients in the 3rd scale band to generate the feature vector.
Fig. 7 illustrates the proposed WAT based palm-vein feature
extraction strategy. As mentioned in the previous subsection,
applying WAT to an image yields two frames, φ^{(1)} and φ^{(2)}, of
wave atom coefficients C(j, m1, m2, n1, n2). Here, j represents
the scale, m1, m2 indicate the phase of each sub-block, and n1, n2
indicate the phase of coefficients. For a 256 × 256 size image,
each frame consists of five scale bands. Instead of computing all
five scale bands, we compute only the necessary scale band 3.
This avoids the computational overhead of calculating the extra
bands. The 3rd scale band consists of 35 non-empty sub-blocks
in each frame. The first blocks in both frames are left
empty for compatibility purposes. Each sub-block consists of 8
rows and 8 columns, i.e. 8 × 8 = 64 wave atom coefficients.
Fig. 7. Illustration of the proposed palm-vein feature extraction and feature
vector generation strategy
Individual wave atom coefficients are sensitive to image
changes; therefore, we calculate a relatively invariant statistic,
i.e. column-mean, by computing the mean intensities of wave
atom coefficients in each column of every sub-block. This also
results in dimension reduction from 8 × 8 elements to 1 × 8 (1
row by 8 columns) column-means in each sub-block. Column-
mean is calculated as follows.

μ(i) = (1/l) Σ_{k=1}^{l} C_i(k)    (11)

where μ(i) represents the mean energy of the i-th column, l
indicates the length of each sub-block column, and the indices i
and k locate each column and the wave atom coefficients in the
rows of each column, respectively. Here, k ∈ {1, 2, 3, …, l} and
i ∈ {1, 2, 3, …, t}, where t is the total number of column-means.
Column-means of the total 70 non-empty sub-blocks are
concatenated to form a feature vector VF of size 70 × 8 = 560,
represented as:

V_F = [μ(1), μ(2), …, μ(560)]    (12)
To generate a privacy-preserving palm-vein template,
randomization based on a user-specific secret key is applied to
VF. We use the Rényi chaotic map [19] as the pseudo-random
number generator (PRNG), which is employed to randomize the
extracted features. Rényi chaotic map is described by the
following equation:

x_{k+1} = β x_k mod 1,   β = (u/v)^{α},   k = 1, 2, …, l    (13)

where l is the length of the mapping, α represents an odd
positive integer, u and v represent selected positive integers, and
x_1 is the initial value, known as the seed, of the Rényi map. One of the
most important properties of synthetic chaotic system is its
sensitivity to initial parameters. Rényi map has demonstrated to
produce chaotic mappings based on variable initial parameters.
The seed is employed as a secret key, which is user-specific and
is different for each user, in the proposed scheme. The original
arrangement of extracted features is shuffled according to the
random mapping produced by the Rényi chaotic map. The
randomized feature vector is represented as V′F and the newly
mapped column-mean values as μ′(i). V′F not only
represents user-specific distinct vein features but is also
dependent on the user-specific secret key. This protects the
original captured features by allowing regeneration of a new
biometric template, with a different user-specific key, if the
previous one is compromised.
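A sketch of the key-dependent shuffling; as a stated substitution, Python's seeded PRNG stands in here for the Rényi chaotic map the paper uses to derive the permutation, and the feature values and key are illustrative.

```python
import random

def randomize_features(vf, secret_key):
    """Shuffle the feature vector by a key-dependent permutation. The paper
    derives this permutation from a Rényi chaotic map seeded with the user's
    secret key; a seeded stdlib PRNG stands in for that chaotic sequence."""
    idx = list(range(len(vf)))
    random.Random(secret_key).shuffle(idx)   # key-dependent permutation
    return [vf[i] for i in idx], idx

shuffled, perm = randomize_features([10.0, 20.0, 30.0, 40.0], secret_key=1234)
```

The same key always reproduces the same permutation (so matching still works), while issuing a new key yields a brand-new template, which is what makes the template cancelable.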
To make the feature vector non-invertible and further reduce
its dimension, we quantize it into a binary representation,
namely palm-vein code (PVC). The quantized palm-vein code
is generated by comparing column-mean values at adjacent
positions, as shown in the following equation.

PVC(i) = { 1, if μ′(i) > μ′(i+1);   0, otherwise }    (14)
Comparison of one pair of adjacent column-mean values
generates one bit of PVC. Therefore, a total of 559 bits are
generated from comparisons of 560 column-means. PVC
ensures non-invertibility because the real-value features are
lost, while only the relationships between column-mean values
are preserved as features.
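The quantization of Eq. (14) in code; the five input values are illustrative stand-ins for the 560 shuffled column-means.

```python
def palm_vein_code(mu):
    """Quantize adjacent column-mean comparisons into bits (Eq. 14):
    bit i is 1 if mu[i] > mu[i+1], else 0. n values yield n - 1 bits."""
    return [1 if mu[i] > mu[i + 1] else 0 for i in range(len(mu) - 1)]

pvc = palm_vein_code([0.9, 0.4, 0.7, 0.7, 0.2])   # 5 means -> 4 bits
```

Only the ordering relations between adjacent means survive the quantization; the real-valued magnitudes are discarded, which is the source of the non-invertibility claim.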
D. Template Formation and Matching
Palm-vein codes, generated from a set of registration images,
constitute the template for authentication and identification
purposes. In the proposed method, palm-vein template is
generated by concatenating two or more PVCs as follows.

T = [PVC_1 ‖ PVC_2 ‖ … ‖ PVC_R]    (15)
Registering a varying number of PVCs in the template yields
different recognition performance, which is explained later in
the experimental analysis section. In this work, we consider
palm-vein templates with the PVCs of one to four registration
images.
After the feature extraction and palm-vein template
formation, the matching process involves only distance
computation and thresholding. We calculate normalized
Hamming distance as a similarity index between two PVCs.

NHD = (1/L) Σ_{i=1}^{L} PVC_sam(i) ⊕ PVC_reg(i)    (16)

where PVCsam represents the palm-vein code of the sample
image, PVCreg represents the palm-vein code of a registration
image, L is the length of a PVC, i represents the bit index, and

a ⊕ b = { 0, if a = b;   1, if a ≠ b }    (17)
The PVC generated from the sample image is compared with
every PVC registered in the template. This results in the same
number of NHD values as the number of PVCs registered in the
template. If the template consists of R registered PVCs, the
decision rule considers a sample image Sm to be genuine if

min{NHD_1(Sm), NHD_2(Sm), …, NHD_R(Sm)} ≤ θ    (18)

where θ is the set threshold and NHD_r(Sm) represents the
normalized Hamming distance calculated between Sm and the
r-th PVC in the palm-vein template. The employed methodology is
computationally simple to allow speedy template matching.
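Matching via Eqs. (16)-(18) can be sketched as follows; the 4-bit codes and the threshold are toy values (real PVCs are 559 bits and θ is tuned experimentally).

```python
def nhd(pvc_a, pvc_b):
    """Normalized Hamming distance between two palm-vein codes (Eqs. 16-17)."""
    assert len(pvc_a) == len(pvc_b)
    return sum(a != b for a, b in zip(pvc_a, pvc_b)) / len(pvc_a)

def is_genuine(sample_pvc, template, threshold):
    """Accept if the best (minimum) NHD against any registered PVC is within
    the threshold (Eq. 18)."""
    return min(nhd(sample_pvc, reg) for reg in template) <= threshold

template = [[1, 0, 1, 1], [1, 0, 1, 0]]    # two registered PVCs (Eq. 15)
score = nhd([1, 0, 0, 1], template[0])     # one differing bit out of four
accept = is_genuine([1, 0, 0, 1], template, threshold=0.3)
```

Because matching is only bit comparison and a threshold test, it runs in time linear in the code length, which is where the scheme's matching speed comes from.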
IV. EXPERIMENTAL ANALYSIS
In this section, we evaluate proposed scheme’s recognition
performance under both touch-based and contactless setups.
We conduct rigorous experiments using three public palm-vein
datasets and one dataset of our own creation. We also evaluate
the computational complexity of our scheme and present an
overall performance comparison with some state-of-the-art
schemes.
A. Palm-vein Datasets
The publicly available palm-vein datasets utilized in this
paper include the PolyU multispectral palmprint database [45],
the PUT vein dataset [46], and the VERA palm-vein dataset [47].
The PolyU multispectral palmprint database was acquired with
constrained, touch-based acquisition equipment under red, green,
blue, and NIR illumination. In our experiments, we only use the
images acquired under NIR illumination. The PolyU database
contains a total of 6,000 images acquired from both hands of 250
subjects; 12 images were acquired from each hand in two sessions.
The PUT vein dataset was also acquired under a touch-based setup.
It consists of 1,200 palm-vein images from both hands of 50
persons, acquired in three sessions. The VERA palm-vein dataset
is contactless; its images were acquired with a custom contactless
palm-vein prototype sensor. The dataset contains 2,200 images
from both hands of 110 volunteers aged between 18 and 60, with
10 images per palm acquired in two sessions.
As a second contactless palm-vein database, we created our own
dataset [48] of palm-vein images under NIR illumination. The
equipment comprises a Fujitsu hardware terminal, which includes a
Fujitsu OEM sensor and an ARM processor to capture palm-vein
images under NIR illumination. The terminal allows the palm to be
positioned above the NIR sensor for contactless capture of the
vein structure, as shown in Fig. 8. The palm is positioned around
7 cm above the sensor to allow a uniform spread of the NIR light
over the entire palm region. The NIR sensor and NIR light source
are fixed at the bottom of the terminal. Palm-vein recognition
under a contactless setup is more user-acceptable and hygienic;
however, it is also more challenging due to high intra-subject
variations. In our database, 1,200 palm images were acquired from
a total of 100 left and right hands of 50 subjects. Twelve images
were acquired from each hand in two sessions, with six images in
each session.
Fig. 8. Acquisition of contactless palm-vein images
B. Authentication Analysis
Authentication refers to a one-to-one matching scenario
where the query template, along with an individual’s specifier,
is compared with the registered template. We perform experiments
on images from both hands and regard them as belonging to
different individuals. We generate genuine and imposter scores
for all four datasets by comparing images from the same palm and
from different palms, respectively. Fig. 9 shows the distribution
and frequency curves of the genuine and imposter scores for the
four datasets. For each dataset, the genuine and imposter score
distributions show a clear separation, which indicates the
capability of the proposed scheme to distinguish genuine samples
from imposter samples. The separation also hints at the
decidability point, in terms of Hamming distance, for marking a
sample image as authentic or an imposter.
To evaluate the authentication accuracy of the proposed scheme,
we adopt the performance parameters false acceptance rate (FAR)
and false rejection rate (FRR). We calculate several pairs of
FAR and FRR for different values of NHD used as the system
threshold. To observe how authentication performance changes
with the number of registration samples in the palm-vein
template, we conduct experiments with one, two, three, and four
registered samples per subject. The FAR and FRR curves for each
dataset are shown in Fig. 10. FRR measures the denial of genuine
images, whereas FAR measures the acceptance of imposter images
by the system. FRR and FAR are defined as:

\mathrm{FRR} = \frac{\text{number of rejected genuine attempts}}{\text{total number of genuine attempts}} \times 100\% \qquad (19)

\mathrm{FAR} = \frac{\text{number of accepted imposter attempts}}{\text{total number of imposter attempts}} \times 100\% \qquad (20)
Fig. 9. Comparison of NHD distribution of genuine and imposter scores: (a)
PolyU dataset, (b) PUT dataset, (c) VERA dataset and (d) our dataset
Fig. 10. FRR and FAR under different values of threshold θ with varying
number of registered samples: (a) PolyU dataset, (b) PUT dataset, (c) VERA
dataset and (d) our dataset
The curves presented in Fig. 10 help select threshold values
for the system under different numbers of registration samples.
The point of intersection of the FAR and FRR curves corresponds
to the optimal threshold value and the equal error rate (EER).
EER is an important performance assessment parameter, which
corresponds to the tradeoff between FAR and FRR. Fig. 10 shows
that the intersection of the FAR and FRR curves occurs at
slightly different threshold values and error rates as the number
of registered samples per subject varies. Furthermore, the FAR
and FRR curves are used to plot receiver operating characteristic
(ROC) curves for each dataset, as shown in Fig. 11. As a single-
factor system performance measure, the EER for the varying number
of registered samples is represented as the dotted line in
Fig. 11. Due to the larger intra-subject variations under the
contactless setup, the ROC curves for the VERA and our datasets
spread wider than those for the PolyU and PUT datasets. EER
values for up to four registered samples in a template are
presented for each dataset in Table I. Overall, a template with
four registered samples provides a recognition accuracy of around
97%. Following this trend, recognition performance could be
increased further by using more than four registration samples
per template, at the expense of a slight increase in template
size.
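For illustration only (our sketch, not the paper's MATLAB code), Eqs. (19)-(20) and the EER operating point can be estimated from genuine and imposter NHD score sets as follows; the synthetic Gaussian scores are hypothetical:

```python
import numpy as np

def frr_far(genuine, imposter, theta):
    """FRR (Eq. 19): percentage of genuine NHD scores rejected (> theta).
    FAR (Eq. 20): percentage of imposter NHD scores accepted (<= theta)."""
    frr = 100.0 * np.count_nonzero(genuine > theta) / genuine.size
    far = 100.0 * np.count_nonzero(imposter <= theta) / imposter.size
    return frr, far

def eer_point(genuine, imposter, thresholds):
    """Sweep thresholds and return the point where FRR and FAR are closest,
    which approximates the intersection (EER) seen in Fig. 10."""
    pairs = [frr_far(genuine, imposter, t) for t in thresholds]
    i = int(np.argmin([abs(fr - fa) for fr, fa in pairs]))
    return thresholds[i], pairs[i]

rng = np.random.default_rng(0)
gen = rng.normal(0.20, 0.05, 1000)   # synthetic genuine NHD scores
imp = rng.normal(0.45, 0.05, 1000)   # synthetic imposter NHD scores
theta, (frr, far) = eer_point(gen, imp, np.linspace(0.0, 0.6, 601))
```

With well-separated score distributions, as in Fig. 9, the sweep lands on a threshold between the two means and a small, nearly equal FRR/FAR pair.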
Fig. 11. ROC curves: (a) PolyU, (b) PUT, (c) VERA and (d) our dataset
TABLE I
EQUAL ERROR RATE WITH VARYING THE NUMBER OF REGISTERED
SAMPLES IN THE TEMPLATE FROM ONE TO FOUR

Dataset       | 1     | 2     | 3     | 4
------------- | ----- | ----- | ----- | -----
PolyU dataset | 2.98% | 2.04% | 2.02% | 1.98%
PUT dataset   | 0.9%  | 0.45% | 0%    | 0%
VERA dataset  | 6.05% | 5.12% | 3.61% | 3.05%
Our dataset   | 5.23% | 4.62% | 3.03% | 1.49%
C. Identification Analysis
In this section, we conduct experiments to evaluate the
identification performance of the proposed scheme.
Identification is a one-to-many matching scenario in which the
query template is searched in a database or collection of
registered subjects. Identification is more time-consuming than
authentication because the query template is compared with every
registered template to calculate the similarity or distance
between them. The smallest of the distances between the query
template and the registered templates is then checked against the
threshold. If it is smaller than the threshold, the corresponding
registered template is identified as a match.
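The one-to-many search described above can be sketched as follows (our illustration; the function name, the toy 4-bit gallery, and the threshold value are hypothetical):

```python
import numpy as np

def nhd(a, b):
    """Normalized Hamming distance between two binary palm-vein codes."""
    return np.count_nonzero(a != b) / a.size

def identify(query, gallery, theta):
    """One-to-many search: take the smallest NHD of the query against each
    subject's registered PVCs, and report the closest subject only if that
    distance clears the threshold theta."""
    best_id, best_d = None, 1.0
    for subject, templates in gallery.items():
        d = min(nhd(query, t) for t in templates)
        if d < best_d:
            best_id, best_d = subject, d
    return (best_id if best_d <= theta else None), best_d

# toy 4-bit gallery with one registered PVC per subject
gallery = {"subj_A": [np.array([1, 1, 0, 0])],
           "subj_B": [np.array([0, 0, 1, 1])]}
query = np.array([1, 1, 0, 1])               # one bit away from subj_A
print(identify(query, gallery, theta=0.3))   # ('subj_A', 0.25)
```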
We perform identification experiments on the touch-based PolyU
database and our contactless database with a varying number of
registered samples per subject. We calculate and compare rank-one
identification rates with the number of registered samples
varying from one to four. Table II presents the average rank-one
identification rates for the PolyU database and our database. The
PolyU database produces better identification rates due to
smaller intra-subject variations. Identification rates improve in
both databases as the number of registered samples per subject
increases. In the PolyU database, however, the improvement is
smaller once the number of registered samples exceeds two or
three. This is because the third and fourth samples contribute
little, as images from the same person vary less in the
touch-based scenario. Also, the identification rate for the PolyU
database is already high, so the potential for further
improvement is low.
TABLE II
RANK ONE IDENTIFICATION RATES WITH VARYING NUMBERS OF
REGISTERED SAMPLES PER SUBJECT

Dataset        | 1      | 2      | 3      | 4
-------------- | ------ | ------ | ------ | ------
PolyU database | 95.26% | 97.71% | 98.35% | 98.78%
Our database   | 94.40% | 94.95% | 96.80% | 98.49%
D. Comparison of Performance Effectiveness
To demonstrate the efficiency of the proposed scheme
comprehensively, we present a comparison with some classical
palm-vein recognition schemes. Recognition performance depends
not only on the quality of feature extraction but also on the
template size; a larger template generally yields better
recognition performance. An effective recognition system,
however, combines good recognition performance with a small and
fast-matching template. Table III compares template size,
estimated matching speed, and recognition performance, in terms
of EER, among different schemes. Our proposed template's size,
with four registered samples, is 280 bytes (559 × 4 = 2,236
bits), and the matching speed is 0.00084 seconds. The proposed
scheme clearly generates the most compact and fastest-matching
template. The small template size ensures a light storage
requirement and fast matching. Our scheme's recognition
performance is comparable with most state-of-the-art schemes.
Comparing recognition accuracy on the PolyU dataset, the proposed
scheme outperforms the schemes in [17] and [28]. The NMRT and
Hessian phase based schemes in [3] achieve higher recognition
performance than the proposed scheme, but at the expense of
larger template sizes. In particular, [29] and [30] incur huge
template sizes (varying with the number of feature points
detected), which restricts their use in systems with limited
resources.
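The quoted template size follows from straightforward bit-packing; a quick sanity check (ours, not the paper's code):

```python
import numpy as np

# One PVC is 559 bits, and a template stores four registered PVCs.
PVC_BITS = 559
pvc = np.random.default_rng(0).integers(0, 2, PVC_BITS, dtype=np.uint8)
packed = np.packbits(pvc)       # 559 bits padded into whole bytes
print(packed.nbytes)            # 70 bytes per PVC, i.e. ceil(559 / 8)
print(packed.nbytes * 4)        # 280 bytes for four registered samples
```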
Furthermore, we measure the computational complexity of the
proposed scheme by recording the computational time consumed in
each stage. We implemented our scheme on a desktop computer (CPU)
with a 3.2 GHz processor and 8.0 GB RAM using the MATLAB
platform. Table IV presents the average execution time of the
different stages of the proposed scheme. The proposed scheme
exhibits very low computational complexity, with a total
execution time of just around 0.4318 seconds. This computational
simplicity makes the proposed scheme an ideal candidate for
real-time recognition. To summarize, the proposed scheme offers
comparable recognition performance with a significant decrease in
template size and computational complexity.
TABLE III
COMPARISON OF TEMPLATE SIZE, MATCHING SPEED, AND EER (DERIVED
FROM EXISTING METHODS) AMONG DIFFERENT SCHEMES

Method              | Template size (bytes)      | Matching speed (s) | EER % (PolyU) | Databases used
------------------- | -------------------------- | ------------------ | ------------- | -----------------------------
NMRT [3]            | 486                        | 0.01097            | 0.21          | PolyU, CASIA
Hessian phase [3]   | 2,592                      | 0.03873            | 1.17          | PolyU, CASIA
Multispectral [17]  | 7,766                      | 0.08062            | 4.98          | PolyU
SIFT [28]           | 77,184* (approx.)          | 0.21648            | 5.58          | Private, PolyU
Multi-sampling [29] | 141,488† (approx.)         | 0.42887            | -             | Private, CASIA
RootSIFT [30]       | 70,744* (approx.)          | 0.19532            | -             | Private, CASIA
Proposed scheme     | 280 (4 registered samples) | 0.00084            | 1.98          | PolyU, PUT, VERA, our dataset

* Template size is calculated using an average number of feature descriptors,
which vary with image size.
† Template size is estimated using an average number of feature points detected
as a result of feature-level fusion of multiple samples.
TABLE IV
COMPUTATIONAL TIME OF THE PROPOSED SCHEME (IN SECONDS)

ROI segmentation | ROI enhancement | Template generation | Template matching | Total
---------------- | --------------- | ------------------- | ----------------- | ------
0.35             | 0.017           | 0.061               | 0.00084           | 0.4318
V. PRIVACY AND SECURITY ANALYSIS
A. Privacy Analysis
Privacy-preservation, or template security, refers to the
strength of a biometric template in guaranteeing the privacy of
the original features. This paper ensures the privacy of a user's
palm-vein features by designing a cancelable and non-invertible
template. The randomizations applied to the extracted features,
using a user-specific secret key, make it possible to cancel and
renew templates with new user-specific keys whenever a template
is compromised.
The proposed scheme provides non-invertibility at two
levels. First and foremost, the quantization stage, which
transforms the real-valued features into the binary PVC, is
designed to ensure non-invertibility. The PVC of the palm-vein
ROI image shown in Fig. 12(b) is visualized in Fig. 12(a),
where bit value 0 is represented as black and 1 as white. The
first goal of an adversary is to recover the 560 column-mean
values, which serve as the real-valued feature vector (VF).
Taking the PolyU database as an example, the column-mean values
roughly lie between -20 and 15. Therefore, a total of 35 (≈ 2^5)
possible values exist for predicting one feature location. This
means that up to (2^5)^560 = 2^2800 attempts are required to
guess the correct VF. However, an individual bit (0 or 1) reveals
whether the mean value at the current index is smaller or greater
than the next value. This implies that the number of possible
values for one feature location can be reduced roughly by half
(≈ 2^4), i.e., a positive real value is predicted if the current
bit is 1. This would reduce the number of attempts to
(2^4)^560 = 2^2240 for an approximate prediction. This number of
attempts is still impractical and computationally prohibitive for
reconstructing VF.
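These magnitudes are easy to verify with arbitrary-precision integers (our back-of-envelope check, not part of the scheme):

```python
# 35 candidate values per feature location, 560 feature locations.
approx_full = (2 ** 5) ** 560   # the round approximation used above: 2^2800
exact_full = 35 ** 560          # the exact candidate count is even larger
approx_half = (2 ** 4) ** 560   # after the per-bit ordering hint: 2^2240
print(approx_full.bit_length() - 1)   # 2800
print(exact_full.bit_length() - 1)    # 2872, i.e. about 2^2872 attempts
print(approx_half.bit_length() - 1)   # 2240
```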
As an inner level of non-invertibility, it is also very hard to
invert the column-means in VF to recover the correct wave atom
coefficients. Even given the 560 column-means, each column-mean
would have to be used to predict the 8 wave atom coefficients of
the corresponding column of the sub-blocks in scale band 3.
Furthermore, since we formulated VF using only band 3, the wave
atom coefficients of the other three bands would also have to be
predicted from the same column-means of band 3. This further
complicates and weakens an inversion attack. For the image shown
in Fig. 12(b), the column-means range between -20 and 15, while
the maximum and minimum values of the wave atom coefficients are
44 and -53, respectively. If the wave atom coefficients of all
the bands are interpolated from the column-means and passed
through the inverse wave atom transform, images similar to the
one shown in Fig. 12(c) are obtained. Clearly, even if the
deterrence of the quantization stage is ignored, the image
inverted from VF still does not represent the real palm-vein
features.
Fig. 12. Non-invertibility analysis of the proposed template: (a) binary PVC of
the image (b), (b) palm-vein ROI image and (c) reconstructed image
B. Security Analysis
As discussed in the previous section, the privacy-preserving
template offers deterrence against reconstruction of the original
biometric features for mounting imitation attacks. However,
generative adversarial networks (GAN) and other generative models
have recently been shown to generate real-looking fake samples
that can attack biometric systems. Here, we investigate the
possibility of such an attack, in which an adversary tries to
obtain a match between the quantized codes of a fake palm-vein
sample and the registered quantized codes. To generate
counterfeit palm-vein images, we trained a Wasserstein GAN
(WGAN) model [49] and a Variational Auto-Encoder (VAE) [50] on
palm-vein images from the datasets used in this paper.
The images generated by the WGAN in no way resemble the training
images. GAN models are generally trained on detailed,
high-quality images, whereas palm-vein images mostly have low
quality and little detail. The limited palm-vein data available
for training can also limit the learning process. Furthermore,
the arbitrary noise input and the lack of constraints at the
input of a GAN model are among its shortcomings here. On the
other hand, the VAE produced better images than the WGAN model,
as shown in Fig. 13. Since the VAE involves a constraint-based
encoding and decoding scheme, its output images can be compared
with the originals. However, the output images are mostly too
blurred to preserve vein-pattern details; therefore, they fail to
produce a successful match. This suggests that neither the WGAN
nor the VAE is capable of reproducing the delicate vein patterns
inside a palm-vein image. To conclude, current generative models,
together with the limited size and quality of palm-vein data,
render such attacks infeasible at present.
Fig. 13. Counterfeit palm-vein images generated by VAE: (a) can be compared
with PolyU dataset and (b) can be compared with VERA dataset
VI. CONCLUSION
This paper investigated WAT-based palm-vein recognition
under both touch-based and contactless setups. We proposed a
privacy-preserving palm-vein template, with low storage and
computational requirements, which captures vein texture from
a segmented and normalized palm-vein image. In spite of the
small and computationally simple template, the proposed
scheme demonstrated competent recognition performance due
to the sparse feature-extraction capability of WAT. The privacy
of the original biometric features is ensured by designing a
cancelable and non-invertible template.
The low-power processing and small template size make the
proposed scheme suitable for instant recognition on portable
devices. The small template size, in particular, makes it
possible to store the template on a smart card, which is
generally limited to a storage capacity of around 1 kilobyte.
Since the template size in our proposed scheme with four
registered samples is 280 bytes, a smart card can easily store
the templates of both hands of a user.
REFERENCES
[1] M. Barni, G. Droandi, and R. Lazzeretti, “Privacy protection in biometric-
based recognition systems: A marriage between cryptography and signal
processing,” IEEE Signal Process. Mag., vol. 32, no. 5, pp. 66-76, Sep.
2015.
[2] S. H. Khan, M. A. Akbar, F. Shahzad, M. Farooq, and Z. Khan, "Secure
biometric template generation for multi-factor authentication," Pattern
Recognit., vol. 48, no. 2, pp. 458-472, Feb. 2015.
[3] Y. Zhou, and A. Kumar, "Human identification using palm-vein
images," IEEE Trans. Inf. Forensics Security, vol. 6, no. 4, pp 1259-1274,
Dec. 2011.
[4] H. Qin, and M. A. El-Yacoubi, "Deep representation-based feature
extraction and recovering for finger-vein verification," IEEE Trans. Inf.
Forensics Security, vol. 12, no. 8, pp. 1816-1829, Aug. 2017.
[5] J. Yang, and Y. Shi, "Towards finger-vein image restoration and
enhancement for finger-vein recognition," Inf. Sci., vol. 268, pp. 33-52,
Jun. 2014.
[6] W. Kang, and Q. Wu, "Contactless Palm Vein Recognition Using a
Mutual Foreground-Based Local Binary Pattern," IEEE Trans. Inf.
Forensics Security, vol. 9, no. 11, pp. 1974-1985, Nov. 2014.
[7] A. M. Al-Juboori, X. Wu and Q. Zhao, "Biometric Authentication System
Based on Palm Vein," Int. Conf. Comput. Sci. Appl., 2013, pp. 52-58.
[8] Y. Lin, E. Y. Du, Z. Zhou, and N. L. Thomas, "An efficient parallel
approach for sclera vein recognition," IEEE Trans. Inf. Forensics
Security, vol. 9, no. 2, pp. 147-157, Feb. 2014.
[9] Y. Wang, K. Zhang, and L.-K. Shark. "Personal identification based on
multiple keypoint sets of dorsal hand vein images." IET Biometrics, vol.
3, no. 4, pp. 234-245, Feb. 2014.
[10] S. Joardar, A. Chatterjee and A. Rakshit, "A Real-Time Palm Dorsa
Subcutaneous Vein Pattern Recognition System Using Collaborative
Representation-Based Classification," IEEE Trans. Instrum. Meas, vol.
64, no. 4, pp. 959-966, April 2015.
[11] J. A. Unar, W. C. Seng, and A. Abbasi. "A review of biometric technology
along with trends and prospects." Pattern Recognit., vol. 47, no. 8, pp.
2673-2688, Aug. 2014.
[12] T. Chugh, C. Kai, and A. K. Jain. "Fingerprint spoof buster: Use of
minutiae-centered patches." IEEE Trans. Inf. Forensics Security, vol. 13,
no. 9, pp. 2190-2202, Sep. 2018.
[13] Y. Liu, A. Jourabloo, & X. Liu, “Learning deep models for face anti-
spoofing: Binary or auxiliary supervision,” In Proceedings of IEEE Conf.
Comput. Vis. Pattern Recognit., pp. 389-398, 2018.
[14] Y. P. Lee. "Palm vein recognition based on a modified (2D)2 LDA,"
Signal, Image and Video Process., vol. 9, no. 1, pp. 229-242, Jan. 2015.
[15] R. Fuksis, A. Kadikis and M. Greitans, "Biohashing and Fusion of
Palmprint and Palm Vein Biometric Data," Int. Conf. Hand-Based
Biometrics, pp. 1-6, 2011.
[16] R. Das, E. Piciucco, E. Maiorana, and P. Campisi, “Convolutional neural
network for finger-vein-based biometric identification,” IEEE Trans. Inf.
Forensics Security, vol. 14, no. 2, pp. 360-373, Feb. 2019.
[17] D. Zhang, Z. Guo, G. Lu, L. Zhang, and W. Zuo, "An online system of
multispectral palmprint verification," IEEE Trans. Instrum. Meas., vol.
59, no. 2, pp. 480-490, Feb. 2010.
[18] I. Natgunanathan, A. Mehmood, Y. Xiang, G. Beliakov, and J. Yearwood,
"Protection of privacy in biometric data," IEEE Access, vol. 4, pp. 880-
892, Mar. 2016.
[19] M. Lim, A. B. J. Teoh and J. Kim, "Biometric Feature-Type
Transformation: Making templates compatible for secret protection,"
IEEE Signal Process. Mag., vol. 32, no. 5, pp. 77-87, Sept. 2015.
[20] T. Addabbo, M. Alioto, A. Fort, A. Pasini, S. Rocchi and V. Vignoli, "A
Class of Maximum-Period Nonlinear Congruential Generators Derived
From the Rényi Chaotic Map," IEEE Trans. Circuits Syst. I, Reg. Papers,
vol. 54, no. 4, pp. 816-828, April. 2007.
[21] W.-Y. Han, and J.-C. Lee. "Palm vein recognition using adaptive Gabor
filter." Expert Syst. Appl., vol. 39, no. 18, pp. 13225-13234, Dec. 2012.
[22] J.-C. Lee, "A novel biometric system based on palm vein image." Pattern
Recognit. Letters, vol. 33, no. 12, pp. 1520-1528, April. 2012.
[23] H. Kaur, and P. Khanna, "Cancelable features using log-gabor filters for
biometric authentication." Multimed Tools Appl., vol. 76, no. 4, pp. 4673-
4694, Feb. 2017.
[24] L. Leng, M. Li, L. Leng and A. B. J. Teoh, "Conjugate 2DPalmHash code
for secure palm-print-vein verification," in Proceedings 6th Int. Congress
Image Signal Process., 2013, pp. 1705-1710.
[25] T. Connie, A. Teoh, M. Goh, and D. Ngo, "Palmhashing: a novel approach
for cancelable biometrics." Inf. Process. Letters, vol. 93, no. 1, pp. 1-5,
Jan. 2005.
[26] Y. Zhou, Y. Liu, Q. Feng, F. Yang, J. Huang, and Y. Nie. "Palm-vein
classification based on principal orientation features." PLOS One, vol. 9,
no. 11, pp. e112429, Nov. 2014.
[27] M. Greitans, M. Pudzs, and R. Fuksis. "Palm vein biometrics based on
infrared imaging and complex matched filtering," in Proceedings 12th
ACM Workshop Multimed Security, pp. 101-106, ACM, Sep. 2010.
[28] P. O. Ladoux, C. Rosenberger, and B. Dorizzi, “Palm vein verification
system based on SIFT matching,” Lecture Notes in Computer Science,
vol. 5558, 2009.
[29] X. Yan, W. Kang, F. Deng, and Q. Wu, "Palm vein recognition based on
multi-sampling and feature-level fusion," Neurocomput., vol. 151, pp.
798-807, Mar. 2015.
[30] W. Kang, Y. Liu, Q. Wu, and X. Yue, "Contact-free palm-vein
recognition based on local invariant features," PLOS one, vol. 9, no. 5,
e97548, May. 2014.
[31] J.-G. Wang, W.-Y. Yau, A. Suwandy, and E. Sung, “Person recognition
by fusing palmprint and palm vein images based on “Laplacianpalm”
representation,” Pattern Recognit., vol. 41, no. 5, pp. 1514-1527, 2008.
[32] L. Leng, J. S. Zhang, M. K. Khan, X. Bi, and M. Ji, "Cancelable
PalmCode generated from randomized Gabor Filters for palmprint
protection," in Proceedings Int. Conf. Image Vis Comput., 2010, pp. 1-6.
[33] H. Kaur, and P. Khanna, "Biometric template protection using cancelable
biometrics and visual cryptography techniques," Multimed Tools Appl.,
vol. 75, no. 23, pp. 16333-16361, Dec. 2016.
[34] V. M. Patel, N. K. Ratha, and R. Chellappa, "Cancelable Biometrics: A
review," IEEE Signal Process. Mag., vol. 32, no. 5, pp. 54-65, Sept. 2015.
[35] Y. Wang, J. Wan, J. Guo, Y. Cheung, and P. C. Yuen, "Inference-Based
Similarity Search in Randomized Montgomery Domains for Privacy-
Preserving Biometric Identification," IEEE Transactions on Pattern Anal.
Mach. Intell., vol. 40, no. 7, pp. 1611-1624, 1 July 2018.
[36] K. Nandakumar, and A. K. Jain. "Biometric template protection: Bridging
the performance gap between theory and practice," IEEE Signal Process.
Mag., vol. 32, no. 5, pp. 88-100, Sep. 2015.
[37] J. Bringer, H. Chabanne, and A. Patey. "Privacy-preserving biometric
identification using secure multiparty computation: An overview and
recent trends." IEEE Signal Process. Mag., vol. 30, no. 2, pp. 42-52, Mar.
2013.
[38] R. L. Lagendijk, Z. Erkin, and M. Barni, "Encrypted signal processing for
privacy protection: Conveying the utility of homomorphic encryption and
multiparty computation," IEEE Signal Process. Mag., vol. 30, no. 1, pp.
82-105, Jan. 2013.
[39] Y. Rahulamathavan, and M. Rajarajan, "Efficient Privacy-Preserving
Facial Expression Classification," IEEE Trans. Dependable Secure
Comput., vol. 14, no. 3, pp. 326-338, May-June 2017.
[40] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing
Using MATLAB. Vol. 624. Upper Saddle River, New Jersey: Pearson-
Prentice-Hall, 2004.
[41] F. Ahmad, A. Khan, I. U. Islam, M. Uzair, and H. Ullah, “Illumination
normalization using independent component analysis and filtering.”
Imaging Sci. J., vol. 65, no. 5, pp. 308-313, Jun. 2017.
[42] L. Demanet, and L. Ying, “Wave atoms and sparsity of oscillatory
patterns”, Appl. Comput. Harmon. Anal, vol. 23, pp. 368-387, Feb. 2007.
[43] F. Liu, L.-M. Cheng, H.-Y. Leung, and Q.-K. Fu, "Wave atom transform
generated strong image hashing scheme," Optics Comm., vol. 285, pp.
5008-5018, Aug. 2012.
[44] F. Ahmad, and L.-M. Cheng, "Authenticity and copyright verification of
printed images," Signal Process., vol. 148, pp. 322-335, Jul. 2018.
[45] PolyU Multispectral Palmprint Database [Online]. Available:
http://www.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MS
P.htm
[46] PUT vein database [Online]. Available: http://biometrics.put.poznan.pl/vein-dataset
[47] The Idiap Research Institute VERA Palmvein Database [Online].
Available: https://www.idiap.ch/dataset/vera-palmvein
[48] CityU Contactless Palm-Vein Dataset. Will shortly be available at:
www.ee.cityu.edu.hk/~lcheng/palm-dataset
[49] G. Ishaan, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville,
“Improved training of wasserstein gans.” Advances in Neural Inf.
Process. Systems, pp. 5767-5777, 2017.
[50] D. P. Kingma and M. Welling. "Auto-encoding variational bayes." arXiv
preprint arXiv:1312.6114, 2013.
Fawad Ahmad is currently pursuing the Ph.D. degree at the
Department of Electronic Engineering, City University of Hong
Kong. His research interests include privacy-preserving
biometric recognition and multimedia security.
Lee-Ming Cheng received the Ph.D. degree from King’s
College London, University of London, U.K, in 1982. He is
currently an Associate Professor with the City University of
Hong Kong. His research interests include image processing,
information security, and neural networks.
Asif Khan received the Ph.D. degree jointly from Alpen-Adria-
Universität, Austria, and the Queen Mary University of London,
U.K., in 2015. He is an Assistant Professor with the Ghulam Ishaq
Khan Institute of Engineering Sciences and Technology, Pakistan.
... The transformation-based approach has recently been adopted for palm-vein template protection. For example, Ahmad et al. [17] adopted a mathematical transformation based on the directions of the detected patterns. More recently, Nayar et al. [18] developed a LEE ET AL. palm-vein graph template transformation based on the partitioning of palm-vein lines into blocks according to random numbers. ...
... Qin et al. [13] Toh et al. [14] Ahmad et al. [17] Babalola et al. [19] Rizki et al. [15] Nayar et al. [18] El-Ghandour et al. [20] Mirmohamadsadeghi et al. [16] Qin et al. [21] Palmprint Svoboda el al. [22] Zhang et al. [23] Li et al. [26] Fei et al. [30] Connie et al. [24] Wu et al. [27] Zhao et al. [31] Han et al. [25] Deshmukh et al. [28] Yulin et al. [32] Naeem et al. [29] Jia et al. [33] Palm-vein þ palmprint Wu et al. [34] Yang et al. [37] Proposed system Zhong et al. [35] Gupta el al. [38] Dong et al. [36] Wang et al. [39] ACKNOWLEDGEMENTS The authors would like to thank the anonymous reviewers for their very constructive comments to improve the manuscript. ...
Article
Full-text available
A novel method based on the cross‐modality intersecting features of the palm‐vein and the palmprint is proposed for identity verification. Capitalising on the unique geometrical relationship between the two biometric modalities, the cross‐modality intersecting points provides a stable set of features for identity verification. To facilitate flexibility in template changes, a template transformation is proposed. While maintaining non‐invertibility, the template transformation allows transformation sizes beyond that offered by the conventional means. Extensive experiments using three public palm databases are conducted to verify the effectiveness the proposed system for identity recognition.
... Despite the tremendous potential of palm-vein recognition technology, existing systems often face challenges in terms of implementation costs, accuracy, and efficiency (Zhong et al., 2019;Dargan and Kumar, 2020). Enhancing the performance and feasibility of palm-vein recognition systems is crucial for their widespread adoption in biometric security and access control applications (Ahmad et al., 2019;Zhou et al., 2020;Adedeji et al., 2021). This research aims to address these challenges by developing a robust palm-vein recognition system using an enhanced Convolutional Neural Network architecture optimized through the Gravitational Search Algorithm. ...
Article
Full-text available
Biometric authentication systems have gained significant attention in access control applications due to their ability to provide enhanced security and convenience. Among various biometric modalities, palm-vein recognition has emerged as a promising approach, offering high accuracy, reliability, and resistance to forgery. However, existing palm-vein recognition systems often face challenges in implementation costs, computational efficiency, and performance limitations. This research aimed to develop an enhanced palm-vein recognition system for access control applications by optimizing a Convolutional Neural Network (CNN) architecture. A palm-vein dataset comprising 1000 images from 200 LAUTECH students was acquired, with 5 images per individual. The dataset was split into 700 training images and 300 testing images. The acquired images were pre-processed for quality enhancement and region of interest extraction. A Gravitational Search Algorithm (GSA) optimized CNN (GSA-CNN) was then employed to extract deep features from the pre-processed images, which were classified using a SoftMax layer. Experimental results revealed that the CNN technique achieved a specificity, sensitivity, False Positive Rate (FPR), accuracy of 74.60%, 79.89%, 25.40%, 77.67% at 117.52 seconds, respectively. In contrast, the proposed GSA-CNN technique demonstrated superior performance, achieving a specificity, sensitivity, FPR, accuracy of 92.06%, 92.53%, 7.94%, 92.33% at 97.14 seconds, respectively. The GSA-CNN system outperformed the conventional CNN approach in terms of accuracy, specificity, sensitivity, FPR, and processing time, demonstrating its potential for reliable and efficient palm-vein recognition in access control applications. The findings have significant implications for developing robust and secure access control systems, contributing to enhanced privacy and security across various domains.
... As such, possible issues like forgotten passwords can be resolved with the application of digital biometric technologies. Numerous biometric techniques, including identification of faces [1], identification of fingerprints [2], palm print identification [3], recognition of the iris [4], and palm vein recognition [5], have been proposed. Typically, a visible light camera is used to take pictures for facial recognition systems. ...
Article
Lately, automated biometric recognizing evidence system has wide applications including modified ID and data get, which integrates customized security checking, affirming individual character to forestall information divulgence or character coercion, and so forth. With the movement of biotechnology, recognizing verification structures considering biometrics have emerged keeping watch. These systems require high precision and ease of use. Palm vein conspicuous confirmation is a sort of biometric that perceives palm vein features. Differentiated and various features, palm vein affirmation gives definite results and has gotten noteworthy thought. It encouraged a cunning unrivalled execution and noncontact palm vein affirmation system by using better execution flexible establishment filtering than obtain palm vein photos of the locale of interest. After that, at that point, used a changed convolution mind association to conclude the best affirmation model through getting ready and testing
... Fawad Ahmad et al. proposed "Lightweight and Privacy-Preserving Template Generation for Palm-Vein-Based Human Recognition" [21]. This research aimed to achieve a higher accuracy rate in palm-vein recognition schemes. ...
Article
Full-text available
Palm vein identification relies on the unique patterns of the veins within the palm, illuminated by near-infrared (NIR) light with wavelengths from 760 nm to 820 nm, which penetrates the skin up to 5 mm. Absorption of NIR light by deoxygenated blood in the veins creates distinct dark patterns. However, this high-wavelength light may cause skin and tissue infection. Vein networks are captured via infrared-sensitive cameras; the captured images are pre-processed to remove noise, and features are extracted for recognition. Feature extraction primarily involves network segmentation, creating reference maps for subsequent recognition. These feature maps serve as blueprints for neural networks, facilitating streamlined identification processes.
... Ahmad et al. [21] proposed the Lightweight and Privacy-Preserving Template Generation for Palm-Vein-Based Human Recognition. That research developed a wave atom transform (WAT) approach to achieve a higher accuracy rate for palm-vein recognition. ...
Article
Full-text available
Contactless palm vein recognition plays a significant role in biometric applications because of its high stability, non-intrusiveness, flexibility, and uniqueness. Various neural approaches have therefore been proposed to identify and segment the veins from contactless palm images, but traditional techniques face challenging issues in vein tracking and segmentation. Thus, a novel hybrid optimized deep network named the Monkey-based Elman Neural Vein Recognition Framework was developed in this article. First, the dataset is pre-processed and the palm region is extracted. Then, the extracted features are matched against the saved ground-truth features. Further, the veins are tracked and segmented in the classification phase. The spider monkey fitness function integrated into the developed model tracks and segments the veins from the palm image. The presented work was implemented, results were estimated on the palm image dataset, and the results were verified through a comparative analysis. The proposed model's highest accuracy score for contactless palm-vein recognition is 99.76%, and its lowest error rate is 0.0089%. Hence, the comparative analysis shows that the developed model achieves better outcomes than the existing approaches.
Article
Vein-based biometric technology offers secure identity authentication due to the concealed nature of blood vessels. Despite the promising performance of deep learning-based biometric vein recognition, the scarcity of vein data hinders the discriminative power of deep features, thus affecting overall performance. To tackle this problem, this paper presents a generative self-supervised contrastive learning (GSCL) scheme, designed from a data-centric viewpoint to fully mine the potential prior knowledge from limited vein data for improving feature representations. GSCL first utilizes a style-based generator to model vein image distribution and then generate numerous vein image samples. These generated vein images are then leveraged to pretrain the feature extraction network via self-supervised contrastive learning. Subsequently, the network undergoes further fine-tuning using the original training data in a supervised manner. This systematic combination of generative and discriminative modeling allows the network to comprehensively excavate the semantic prior knowledge inherent in vein data, ultimately improving the quality of feature representations. In addition, we investigate a multi-template enrollment method for improving practical verification accuracy. Extensive experiments conducted on public finger vein and palm vein databases, as well as a newly collected finger vein video database, demonstrate the effectiveness of GSCL in improving representation quality.
Chapter
Recently, biometrics have become more important in military and defence applications. This research implements a multimodal biometric system based on ear and palm-vein biometric features. Because the acquired palm-vein and ear images are of poor quality, the proposed image-boosting approach is used to pre-process them. Feature patterns are then extracted from the palm-vein and ear images: an adaptive 2D Gabor filter and a modified curvature line-detection technique recover the vein patterns from the palm area, while an adaptive 2D Gabor filter and a morphological-operation-based approach extract the structural characteristics from the ear image. We applied sensor-level and feature-level fusion techniques: the ear and palm-vein biometrics are combined at the sensor level using a mosaicing approach, and the sum rule is used to combine the palm and ear features. The database contained the fused feature vector for comparison. The performance of the suggested system is evaluated using a support vector machine classifier and several distance metrics, with tests on the PUT Vein database and our own ear database. With an accuracy of 97.65% and an EER of 2.15%, the multimodal biometric system delivers superior accuracy over the unimodal biometric system.
Article
With the advancement of deep learning technology, palm vein authentication based on convolutional neural networks (CNNs) has developed greatly. Among the many CNN-based methods, contrastive learning stands out for its excellent performance in various visual tasks. It enables machines to better understand how objects differ from each other within a given category, which is well suited to fine-grained identification tasks. This inspires us to apply contrastive learning to palm vein authentication. In response, we propose the Focal Contrastive Palm Vein Network (FCPVN) for palm vein authentication. First, label information is introduced into self-supervised contrastive learning to create a palm vein authentication paradigm based on supervised contrastive learning. On this basis, we design a novel loss named Focal Contrastive Loss, which employs a hard example mining strategy by introducing two factors that make the model pay more attention to hard examples. Extensive experiments on five public palm vein databases show that FCPVN has competitive performance compared to existing palm vein authentication methods.
Article
Full-text available
The primary purpose of a fingerprint recognition system is to ensure reliable and accurate user authentication, but the security of the recognition system itself can be jeopardized by spoof attacks. This study addresses the problem of developing accurate, generalizable, and efficient algorithms for detecting fingerprint spoof attacks. Specifically, we propose a deep convolutional neural network based approach utilizing local patches centered and aligned using fingerprint minutiae. Experimental results on three public-domain LivDet datasets (2011, 2013, and 2015) show that the proposed approach provides state-of-the-art accuracies in fingerprint spoof detection for intra-sensor, cross-material, cross-sensor, as well as cross-dataset testing scenarios. For example, in LivDet 2015, the proposed approach achieves 99.03% average accuracy over all sensors compared to 95.51% achieved by the LivDet 2015 competition winners. Additionally, two new fingerprint presentation attack datasets containing more than 20,000 images, using two different fingerprint readers, and over 12 different spoof fabrication materials are collected. We also present a graphical user interface, called Fingerprint Spoof Buster, that allows the operator to visually examine the local regions of the fingerprint highlighted as live or spoof, instead of relying on only a single score as output by the traditional approaches.
Article
Full-text available
Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives.
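The core idea above, concealing the exact Hamming distance inside a dynamic interval, can be illustrated with a toy sketch. This is an illustration of the concept only, not the paper's Montgomery-multiplication construction; all names and the uniform-offset mechanism are ours:

```python
import random

def hamming(a: int, b: int) -> int:
    """Exact Hamming distance between two equal-length binary templates
    packed into Python integers."""
    return bin(a ^ b).count("1")

def obfuscated_hamming(a: int, b: int, window: int, rng: random.Random) -> int:
    """Toy stand-in for an obfuscated distance measure: report the true
    distance plus a random offset drawn from [0, window), so the server
    only learns an interval that contains the true distance."""
    return hamming(a, b) + rng.randrange(window)

rng = random.Random(7)
a, b = 0b1011_0010, 0b1001_0110
d = hamming(a, b)                                   # exact distance: 2
d_obf = obfuscated_hamming(a, b, window=4, rng=rng)
assert d <= d_obf < d + 4    # true distance lies inside the reported interval
```

Widening `window` hides more distance information but makes candidate retrieval statistically noisier, which is the trade-off the inference-based framework is designed to manage.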
Article
Full-text available
In this work, we separate the illumination and reflectance components of a single input image which is non-uniformly illuminated. Considering the input image and its blurred version as two different combinations of illumination and reflectance components, we use the conventional independent component analysis (ICA) to separate these two components. The separated reflectance component, which is an illumination normalized version of the input image, can then be used as an effective pre-processed (illumination normalized) image for different computer vision tasks e.g. face recognition. To this end, we present simulation results to show that our proposed pre-processing method called illumination normalization using ICA increases the accuracy rate of several baseline face recognition systems (FRSs). The proposed method showed improved performance of baseline FRSs when using the Extended Yale-B databases.
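The separation described above rests on a linear mixing model: the input image and its blurred version are treated as two different mixtures of the illumination and reflectance components. In the standard ICA formulation (our notation, not necessarily the paper's), for each pixel $p$:

```latex
% x_1: input image, x_2: blurred input; \ell: illumination, r: reflectance
\begin{bmatrix} x_1(p) \\ x_2(p) \end{bmatrix}
  = A \begin{bmatrix} \ell(p) \\ r(p) \end{bmatrix},
\qquad
\begin{bmatrix} \hat{\ell}(p) \\ \hat{r}(p) \end{bmatrix}
  = W \begin{bmatrix} x_1(p) \\ x_2(p) \end{bmatrix},
\quad W \approx A^{-1}
```

ICA estimates the unmixing matrix $W$ by maximizing the statistical independence of the recovered components; the recovered reflectance $\hat{r}$ is the illumination-normalized image used as the pre-processed input for face recognition.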
Article
Full-text available
Finger-vein biometrics has been extensively investigated for personal verification. Despite recent advances in finger-vein verification, current solutions completely depend on domain knowledge and still lack the robustness to extract finger-vein features from raw images. This paper proposes a deep learning model to extract and recover vein features using limited a priori knowledge. Firstly, based on a combination of known state of the art handcrafted finger-vein image segmentation techniques, we automatically identify two regions: a clear region with high separability between finger-vein patterns and background, and an ambiguous region with low separability between them. The first is associated with pixels on which all the segmentation techniques above assign the same segmentation label (either foreground or background), while the second corresponds to all the remaining pixels. This scheme is used to automatically discard the ambiguous region and to label the pixels of the clear region as foreground or background. A training dataset is constructed based on the patches centered on the labeled pixels. Secondly, a Convolutional Neural Network (CNN) is trained on the resulting dataset to predict the probability of each pixel of being foreground (i.e. vein pixel) given a patch centered on it. The CNN learns what a finger-vein pattern is by learning the difference between vein patterns and background ones. The pixels in any region of a test image can then be classified effectively. Thirdly, we propose another new and original contribution by developing and investigating a Fully Convolutional Network (FCN) to recover missing finger-vein patterns in the segmented image. The experimental results on two public finger-vein databases show a significant improvement in terms of finger-vein verification accuracy.
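The first step, labelling pixels by the agreement of several hand-crafted segmenters, can be sketched as follows. This is a minimal illustration with hypothetical 0/1 masks; the paper combines actual state-of-the-art segmentation techniques:

```python
def label_regions(masks):
    """Given binary masks from several segmenters (one 2D list of 0/1 per
    method), mark each pixel 'fg'/'bg' where all methods agree and
    'ambiguous' otherwise -- ambiguous pixels are discarded from training."""
    h, w = len(masks[0]), len(masks[0][0])
    labels = [[None] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            votes = {m[i][j] for m in masks}   # set of labels at this pixel
            if votes == {1}:
                labels[i][j] = "fg"            # all segmenters say vein
            elif votes == {0}:
                labels[i][j] = "bg"            # all segmenters say background
            else:
                labels[i][j] = "ambiguous"     # disagreement -> discard
    return labels

m1 = [[1, 0], [0, 1]]                          # hypothetical 2x2 masks
m2 = [[1, 1], [0, 1]]
labels = label_regions([m1, m2])
```

Patches centered on the `fg` and `bg` pixels then form the training set for the pixel-wise CNN classifier.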
Article
Full-text available
Widespread use of biometric based authentication requires security of biometric data against identity thefts. Cancelable biometrics is a recent approach to address the concerns regarding privacy of biometric data, public confidence, and acceptance of biometric systems. This work proposes a template protection approach which generates revocable binary features from phase and magnitude patterns of log-Gabor filters. Multi-level transformations are applied at signal and feature level to distort the biometric data using user specific tokenized variables, which are observed to provide better performance and security against information leakage under correlation attacks. A thorough analysis is performed to study the performance, non-invertibility, and changeability of the proposed approach under the stolen token scenario on multiple biometric modalities. It is revealed that the generated templates are non-invertible, easy to revoke, and also deliver good performance.
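The simplest ingredient of such token-driven cancelable templates is a user-specific random projection followed by binarization. The sketch below is illustrative only; the paper's scheme applies multi-level transformations to log-Gabor phase and magnitude features, not this bare projection, and all names here are ours:

```python
import random

def cancelable_template(features, user_token, n_bits=16):
    """Project a real-valued feature vector through a projection matrix
    seeded by a user-specific token, then binarize by sign. Reissuing the
    token revokes the old template, and the many-to-one projection keeps
    the original features from being uniquely recovered. Illustrative
    sketch of the general idea only."""
    rng = random.Random(user_token)   # token-specific pseudo-random projection
    template = []
    for _ in range(n_bits):
        row = [rng.gauss(0.0, 1.0) for _ in features]
        dot = sum(w * f for w, f in zip(row, features))
        template.append(1 if dot >= 0.0 else 0)
    return template

feats = [0.8, -1.2, 0.3, 2.1, -0.5]   # stand-in for log-Gabor features
old = cancelable_template(feats, user_token="token-A")
new = cancelable_template(feats, user_token="token-B")  # revoked and reissued
print(old, new)   # same biometric, two independent 16-bit templates
```

Under the stolen-token scenario analyzed in the paper, the attacker knows the token (and hence the projection), which is why the multi-level design and the correlation-attack analysis matter beyond this minimal construction.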
Article
The use of human finger-vein traits for the purpose of automatic user recognition has gained a lot of attention in recent years. Current state-of-the-art techniques can provide relatively good performance, yet they are strongly dependent upon the quality of the analyzed finger-vein images. In this paper, we propose a convolutional-neural-network-based finger-vein identification system and investigate the capabilities of the designed network over four publicly available databases. The main purpose of this work is to propose a deep-learning method for finger-vein identification, able to achieve stable and highly accurate performance when dealing with finger-vein images of different quality. The reported extensive set of experiments shows that the accuracy achievable with the proposed approach can go beyond a 95% correct identification rate for all four considered publicly available databases.
Article
Perceptual image hashing and digital watermarking are two of the extensively investigated techniques for content authentication and copyright verification of digital images, respectively. One of the significant challenges of perceptual image hashing is a better tradeoff between robustness and discrimination. On the other hand, a major objective of digital watermarking techniques is to provide enhanced robustness and imperceptibility. In this paper, we present extensive experimental analysis, detailed laboratory forensic investigations, and application of the wave atom transform (WAT) based perceptual image hashing and digital watermarking techniques for printed images. Authenticity and copyright of printed images are verified via image hashing and digital watermarking techniques which are resilient against print-scan process distortions. To fulfill the requirement of a good balance between print-scan robustness and discrimination capabilities, we propose a compact, hybrid image hash using lower and middle frequency scale bands in the wave atom coefficients tiling. The hybrid image hash generation strategy is also advantageous due to its computational simplicity and reduced size of the hash value. Furthermore, we propose an enhanced multipurpose scheme which can support multiple applications, such as image authentication and copyright protection, simultaneously, for both digital-only and print-scanned images. Experimental analysis demonstrates the ability of wave atom based image hashing and watermarking schemes to provide authenticity and copyright protection.
Conference Paper
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
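For the common Gaussian case, the reparameterization referred to above rewrites a sample from the approximate posterior as a deterministic, differentiable function of independent noise:

```latex
z \sim q_\phi(z \mid x) = \mathcal{N}\!\bigl(z;\, \mu_\phi(x),\, \operatorname{diag}(\sigma_\phi^2(x))\bigr)
\quad\Longleftrightarrow\quad
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I)
```

Because the randomness now enters only through $\epsilon$, gradients of the variational lower bound flow through $\mu_\phi$ and $\sigma_\phi$, enabling the standard stochastic gradient optimization the abstract describes.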
Article
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes significant progress toward stable training of GANs, but can still generate low-quality samples or fail to converge in some settings. We find that these training failures are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to pathological behavior. We propose an alternative method for enforcing the Lipschitz constraint: instead of clipping weights, penalize the norm of the gradient of the critic with respect to its input. Our proposed method converges faster and generates higher-quality samples than WGAN with weight clipping. Finally, our method enables very stable GAN training: for the first time, we can train a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data.
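The proposed alternative to weight clipping adds a penalty on the norm of the critic's gradient; the resulting critic objective is

```latex
L \;=\; \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\bigl[D(\tilde{x})\bigr]
   \;-\; \mathbb{E}_{x \sim \mathbb{P}_r}\bigl[D(x)\bigr]
   \;+\; \lambda\, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}
        \Bigl[\bigl(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\bigr)^{2}\Bigr]
```

where $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generator distributions, $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples, and $\lambda$ is the penalty coefficient (the paper uses $\lambda = 10$). The penalty softly enforces the 1-Lipschitz constraint that weight clipping enforced pathologically.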