RESEARCH PAPER
Human recognition system through major and minor finger knuckle pattern fusion by symmetric-sum method
Felix Olanrewaju Babalola,a Shehu Hamidu,b and Önsen Toygarb,*
aFinal International University, Department of Computer Engineering, Faculty of Engineering,
North Cyprus, Mersin, Turkey
bEastern Mediterranean University, Department of Computer Engineering, Faculty of Engineering,
North Cyprus, Mersin, Turkey
ABSTRACT. Biometric feature-based personal identification is gaining popularity these days
since it is more trustworthy than previous approaches and has a wide range of
applications. Furthermore, hand-based biometrics have better user acceptability,
making finger knuckles a relatively acceptable trait due to their location and the
added benefit of being less susceptible to harm. Finger knuckle prints are the
inherent skin patterns that form at the knuckles of the back of the hand and have
been proven to be extremely rich in textures, hence they may be utilized to uniquely
identify a person. Our study seeks to explore the innate advantage of finger knuckles
and the benefit of combining multiple features from knuckle areas, minor and major,
using symmetric sum algorithms to fuse scores from each of the traits. The proposed
methodology is tested on a modified AlexNet model as well as the original AlexNet,
modified ResNet50, binarized statistical image features, and principal component
analysis for comparison. Experimental results are obtained using PolyUKnuckleV1
finger knuckle datasets provided by Hong Kong Polytechnic University. The results
reinforced the idea that multimodal biometric systems are stronger and that finger
knuckles are unique to each person.
© 2023 SPIE and IS&T [DOI: 10.1117/1.JEI.32.5.053020]
Keywords: biometrics; finger knuckles; minor knuckles; major knuckles; symmetric-
sum; score-level fusion
Paper 230377G received Mar. 28, 2023; revised Aug. 23, 2023; accepted Sep. 7, 2023; published Sep.
21, 2023.
1 Introduction
Biometric features are increasingly being used in personal authentication systems due to their
reliability in comparison to more conventional approaches, such as token passwords and PIN
numbers. Fingerprints, palm prints, face recognition, DNA, palm veins, signatures, hand geom-
etry, voice, iris recognition, retina, gait, and other behavioral or physical features have long been
employed in biometric systems. Recent studies have demonstrated that the image pattern of wrinkles or lines on the skin, together with the texture pattern generated by the knuckles of each individual's fingers, makes the surface a useful instrument for biometric identification. The knuckle image has a random roughness that makes each person stand out. The restored knuckle image is noted to have exceptionally steady local and global properties. The data can then be utilized to verify the identity of specific users.1 In the literature, researchers' acquisition procedures and strategies for finger-knuckle-based recognition systems are detailed.2
*Address all correspondence to Önsen Toygar, onsen.toygar@emu.edu.tr
The fingerprint at the knuckles is a relatively new biometric and it has been subjected to
several image processing techniques previously utilized in personal identity biometric systems,
with promising results. Furthermore, low-cost gadgets are being employed to capture the image
since they are easily accessible, thereby offering a plethora of merits and limitations.3 Acquisition of the finger simultaneously captures two knuckles, the minor knuckle at the distal interphalangeal joint and the major knuckle at the metacarpophalangeal joint, as shown in Fig. 1. These can be combined
to create a stronger biometric system in a multimodal mode.
Biometric systems can be broadly categorized as unimodal (i.e., establishing identity utilizing a single biometric data source) and multibiometric systems (i.e., utilizing several biometric data sources).5 There are a number of flaws in unimodal biometric systems, including a high error rate, poor usability, the potential for these systems to be hacked, and other issues (e.g., aging).6 Multibiometrics provides alternatives to unimodal systems by combining data from various biometric sources. The data sources may include many implementations of the identical modality, a number of biometric techniques, several different sensor prototypes for the identical modality, or multiple feature extraction techniques for a single modality. Numerous studies, both theoretical and practical, have demonstrated that multimodal biometric systems outperform unimodal systems when assessing performance.7 Enhancing recognition rates is the primary practical motivation for researching and combining various biometric modalities. Similarly, the use of finger knuckles in multibiometric systems has drawn a lot of interest in the literature because of the ease of feature acquisition and the capability to combine more than one knuckle area.
The contributions of this paper are as follows. This study presents a multimodal system where the minor and major knuckles of the finger are taken as individual biometric traits, and these traits are combined to obtain a more robust system. This study uses symmetric sum algorithms at the score level for trait combination as an improvement on traditional score-level fusion of modalities. This study also examines the proposed system with a variant of a convolutional neural network (CNN) model, namely modified AlexNet, while also comparing the performance with other models, such as the original AlexNet and ResNet50, and hand-crafted methods, such as principal component analysis (PCA) and binarized statistical image features (BSIF).
The remainder of this paper is organized as follows. Section 2 presents a review of related studies; Sec. 3 gives the details of the proposed methodology along with the feature descriptors used; and Sec. 4 presents the results of the experiments carried out on the proposed methods as well as similar methods. It also showcases the dataset used in the experiments. Section 5 gives the conclusions inferred from this study.
2 Literature Review
Biometric systems have been in use for quite some time, and they typically make use of finger-
prints, palm veins, hand geometry, DNA, palm print, iris identification, facial recognition, retina,
voice, gait, and signature, among other physical or behavioral aspects. It has only recently been found that the texture pattern formed by the finger knuckle, consisting of wrinkles and lines, is particularly distinct in each user, making this surface a unique trait for biometric identification.
Fig. 1 Pattern regions of the major and minor dorsal finger knuckles.4
The knuckle image has a random roughness that makes each person stand out. The restored
knuckle image is noted to have exceptionally steady local and global properties. The identity
of some users may then be verified using finger knuckle data.1
The finger knuckle print (FKP) has been subjected to several image processing techniques
previously utilized in person identity biometric systems with promising results and it has been
proven to be rich in textures and can be utilized to uniquely identify a person. Furthermore, hand-
based biometrics has a relatively better user acceptability rate and it is also less susceptible to
harm or changes over time. The acquisition methods are also quite simple as illustrated in Ref. 2.
Some of the techniques used with finger knuckles in the literature include the angular geometric analysis method used to produce geometric features, which uses extracted angular-based features for unique identification. This is done using a combination of 2D log-Gabor filter and Fourier scale-invariant feature transform methods.8 Rathod et al.9 used the local binary pattern (LBP) for feature extraction and employed the Bernoulli classifier as a coordinating classifier. A two-layer serial fusion method has also been recommended using a technique that combines both global and local properties. The technique uses PCA for global feature extraction and LBP for local features.10
A deep rule-based (DRB) classifier with multiple layers of neural networks has also been proposed for person identification based on the finger knuckle pattern. The approach is fully automated and data-driven. The DRB classifier is generic and may be used to solve a wide range of classification and prediction issues. The results of the experiment prove that the DRB classifier can be useful in FKP-based biometric identification systems.11
A modified magnetotactic bacterial optimization approach was introduced as a feature-selection algorithm for finger knuckle detection in Ref. 12. The algorithm picks out relevant and useful characteristics to improve classification precision. The unique behavior of the bacteria in this technique shapes the development of the new optimization technique.
Heidari and Chalechale13 introduced a deep learning (AlexNet) strategy for human authen-
tication using dorsal aspects of the hand. This included fingernail (FN) and FKP impressions
from the ring, middle, and index fingers. They evaluated the method on a variety of hand skin
identification, denoising, and knuckle and FN extraction tasks. Using a multimodal biometric method, the suggested system's authentication performance is enhanced and it becomes more resistant to spoofing attempts.
Al-Nima et al.14 presented a detailed review of the relevant finger textures (FTs) investiga-
tions. They also discussed the major limitations and challenges of using FTs as a biometric feature,
as well as practical suggestions for improving FTs research.
For the purpose of increasing the efficiency of the score-level fusion technique, some studies have been reviewed in the literature, such as symmetric addition of scores using s-norms and t-norms.15 Another method is the weighted quasi-arithmetic mean (WQAM) fusion at the score stage proposed for multi-biometric systems. WQAM involves mathematical functions, such as the trigonometric functions cosine, sine, and tangent, along with defined weights for score-level fusion (SLF).16
3 Methodology
This study proposes the combination of scores obtained from knuckles from different areas of the same finger using a symmetric sum algorithm. Figure 2 shows the proposed architecture of the biometric system, where the region of interest (ROI) is fed into the system and its features are extracted using one of the feature extraction methods used in this study. The structure is explained below.

Fig. 2 Proposed finger knuckle biometric identification system.
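To make the data flow in Fig. 2 concrete, the following minimal Python sketch shows how scores from the two knuckle regions could be produced and fused into a single decision; all function and variable names here (extract, match, gallery, s_sum) are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def identify(major_roi, minor_roi, extract, match, gallery, s_sum):
    """Hypothetical sketch of the Fig. 2 pipeline: extract features from the
    major and minor knuckle ROIs, match each against the enrolled gallery to
    get one normalized score per subject, fuse the two score vectors with a
    symmetric sum, and return the identity with the highest fused score."""
    scores_major = match(extract(major_roi), gallery["major"])
    scores_minor = match(extract(minor_roi), gallery["minor"])
    fused = s_sum(scores_major, scores_minor)  # score-level fusion (Sec. 3.3)
    return int(np.argmax(fused))               # final decision
```

The feature extraction, matching, and fusion steps referenced in this sketch are detailed in Secs. 3.1 to 3.3.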
3.1 Feature Extraction
The main feature extraction method in this study is the modified AlexNet CNN model proposed
in Ref. 17. For the sake of comparison, similar deep learning models, i.e., the original AlexNet
and ResNet50 models, were also examined in this study. Additionally, hand-crafted feature
descriptors, namely, PCA and BSIF, were also used to verify the efficiency of the proposed
system. These methods are explained in the following sections.
3.1.1 CNN models
AlexNet is a CNN model composed of five convolution layers followed by ReLU activation functions, batch normalization, and max pooling. The framework is a series of convolution layers whose kernel sizes range from 3 × 3 to 11 × 11, and the number of kernels in each convolution layer ranges from 96 to 384. The structure is finished off with a dropout layer, a fully connected layer, and a Softmax function.18 However, the model has been modified to have a consistent filter size of 3 × 3 throughout all convolution layers, as opposed to the original AlexNet model's usage of variable kernel sizes,17,19 as shown in Fig. 3. The number of kernels has also been reduced throughout the five convolution layers present in AlexNet, as shown in Table 1, to reduce training time. Experimental results in Ref. 17 show that the modified version performs comparably to the original model.
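As a rough illustration only, a Keras-style sketch of such a modified configuration with the reduced filter counts listed in Table 1 below (32, 64, 128, 128, 64) might look as follows; the input shape, pooling placement, dropout rate, and dense-layer width are our assumptions and not details specified in Ref. 17.

```python
from tensorflow.keras import layers, models

def modified_alexnet(input_shape=(160, 180, 1), num_classes=503):
    """Sketch of the modified AlexNet described above: five 3x3 convolution
    blocks with the reduced filter counts of Table 1, each with ReLU, batch
    normalization, and max pooling, followed by dropout, a fully connected
    layer, and softmax. Block layout details are assumptions."""
    m = models.Sequential()
    m.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 128, 64):
        m.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        m.add(layers.BatchNormalization())
        m.add(layers.MaxPooling2D(pool_size=(2, 2)))
    m.add(layers.Flatten())
    m.add(layers.Dropout(0.5))
    m.add(layers.Dense(512, activation="relu"))
    m.add(layers.Dense(num_classes, activation="softmax"))
    return m
```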
Fig. 3 Modified AlexNet model.19

Table 1 Convolution layers' filter sizes and depths in AlexNet versus modified AlexNet.17

Convolution layer | AlexNet | Modified AlexNet | Reduction (%)
First | 11 × 11 × 96 | 3 × 3 × 32 | 66.67
Second | 5 × 5 × 256 | 3 × 3 × 64 | 75.00
Third | 3 × 3 × 384 | 3 × 3 × 128 | 66.67
Fourth | 3 × 3 × 384 | 3 × 3 × 128 | 66.67
Fifth | 3 × 3 × 256 | 3 × 3 × 64 | 75.00

The ResNet50 CNN model is also used in this study in order to show how the proposed system compares with other systems. ResNet50 is a variant of ResNet with a different structure compared to AlexNet. ResNet50 contains shortcut connections that skip some layers, thereby allowing very deep networks to be trained free of the vanishing gradient problem. The layers are made up of convolution and identity blocks, as shown in Fig. 4. Each convolution block has two parallel paths that are added together at the end; the first path is composed of three sequential convolution layers with ReLU activation and batch normalization in each layer. Each identity block also has a path of three sequential convolution layers with ReLU activation and batch normalization in each layer and a second path that takes the initial input and adds it to the output of the first path.20

Fig. 4 ResNet50 structure.17
A modified version of ResNet50 with a reduced number of filters as proposed in Ref. 17 and
similar to the modified AlexNet used in this study, in terms of filter number reduction, is used in
the experiments for appropriate comparison.
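For illustration, a Keras-style sketch of an identity block of the kind described above is given below; the 1 × 1, 3 × 3, 1 × 1 bottleneck kernel sizes follow the common ResNet50 design and are an assumption here rather than a detail taken from the paper.

```python
from tensorflow.keras import layers

def identity_block(x, filters):
    """Sketch of a ResNet50-style identity block: three sequential
    convolutions with batch normalization, plus a shortcut that adds the
    block input back to the output of the convolution path."""
    f1, f2, f3 = filters
    shortcut = x
    x = layers.Conv2D(f1, (1, 1), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(f2, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(f3, (1, 1), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, shortcut])       # shortcut connection
    return layers.Activation("relu")(x)
```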
3.1.2 Hand-crafted descriptors
Two hand-crafted methods were employed in this study to compare the accuracy of the proposed
method. The first is PCA, which entails three steps: calculating eigenvalues, eigenvectors, and
feature vector covariance matrices. These templates can be made smaller while still retaining the
most critical features using an approach that combines feature extraction with dimension
reduction.21 To calculate PCA, the following formula is used:
\bar{A} = \frac{1}{n} \sum_{i=1}^{n} a_i,  (1)

where a_i is the pixel value at position i and n is the total number of pixels in an image. To obtain (a_i - \bar{A}) and (b_i - \bar{B}), the image mean is subtracted from each image vector so that each vector is represented as a mean-centered image. The covariance matrix is then generated using the following formula:

\mathrm{cov}(a, b) = \frac{\sum_{i=1}^{n} (a_i - \bar{A})(b_i - \bar{B})}{n},  (2)

where \bar{A} and \bar{B} represent the average vector values and a_i and b_i represent the current values of a and b. In total, there are n rows. To obtain the eigenvalues of the covariance matrix, the following equation is applied:

\det(\mathrm{cov}(a, b) - \lambda I) = 0.  (3)

The eigenvector V is then calculated for each eigenvalue \lambda as follows:

(\mathrm{cov}(a, b) - \lambda I) V = 0.  (4)
Finally, the eigenvectors of the covariance matrix form the features extracted by the PCA
algorithm.
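A minimal NumPy sketch of this PCA feature extraction, following Eqs. (1) to (4), is shown below; the number of retained components and the use of eigh on the full covariance matrix are illustrative choices (for image-sized vectors one would normally use an SVD-based shortcut instead).

```python
import numpy as np

def pca_features(images, num_components=50):
    """PCA sketch: mean-center the flattened images (Eq. (1)), build the
    covariance matrix (Eq. (2)), and keep the eigenvectors with the largest
    eigenvalues (Eqs. (3)-(4)) as the feature basis."""
    X = images.reshape(len(images), -1).astype(np.float64)
    X -= X.mean(axis=0)                            # mean-centering
    cov = (X.T @ X) / X.shape[0]                   # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues/eigenvectors
    order = np.argsort(eigvals)[::-1][:num_components]
    basis = eigvecs[:, order]
    return X @ basis                               # projected feature vectors
```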
The second hand-crafted method used is BSIF. It is a local image descriptor that requires binarizing the outputs of linear convolution filters. The BSIF descriptor emphasizes the importance of both the filter's size and its length. It is possible that a single fixed-length filter would not be able to appropriately generalize finger knuckle patterns of varying intensities, sizes, and orientations.7 The BSIF filter response x_i is obtained when an input image I of size m \times n is convolved with a filter F_i of the same size, as given by the following equation:

x_i = \sum_{m,n} I(m, n) F_i(m, n).  (5)

Binarizing the responses of the convolution filters indexed by i \in \{1, 2, 3, \ldots, m\}, where each b_i is a bit obtained from a statistically independent filter whose output may be computed in parallel, yields the binary string as follows:

b_i = \begin{cases} 1, & \text{if } x_i > 0, \\ 0, & \text{otherwise}, \end{cases}  (6)

where x_i is the filter response at point i.
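The following Python sketch illustrates Eqs. (5) and (6): each filter in a pre-learned bank is convolved with the image, the responses are binarized, and the bits are packed into per-pixel codes. The filter bank (e.g., a 9 × 9, 12-bit set) is assumed to be supplied by the caller, and the final histogram step is a common convention rather than something stated in the text.

```python
import numpy as np
from scipy.signal import convolve2d

def bsif_code(image, filters):
    """BSIF-style encoding sketch: convolve the image with a bank of linear
    filters (Eq. (5)), binarize each response (Eq. (6)), and pack the bits
    into an integer code per pixel; return a normalized code histogram."""
    code = np.zeros(image.shape, dtype=np.int64)
    for i, f in enumerate(filters):
        response = convolve2d(image, f, mode="same")   # filter response x_i
        bit = (response > 0).astype(np.int64)          # binarization b_i
        code += bit << i                               # pack bit i
    hist, _ = np.histogram(code, bins=2 ** len(filters),
                           range=(0, 2 ** len(filters)))
    return hist / hist.sum()
```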
Figure 5 shows examples of features extracted using different sizes of BSIF filters on a dorsal finger knuckle image. The primary input, in ROI form, for the major finger knuckle image is shown in Fig. 5(a). Figure 5(b) displays the results of BSIF filters with dimensions of 9 × 9 and 15 × 15 and bit lengths of 12, whereas Fig. 5(c) displays the input ROI of a minor finger knuckle image. Figure 5(d) shows the outcomes of separately convolving the minor dorsal finger knuckle image's ROI with BSIF filters of size 9 × 9 and 15 × 15 with lengths of 12 bits.
3.2 Matching System
For the purpose of matching in this study, the k-nearest-neighbor classifier is employed, which uses the cosine Mahalanobis distance with k = 1. The similarity or dissimilarity between the query trait and the saved template is measured by comparing the two sets using the minimum possible distance (score).
Considering X_i and X_j to be the query and template feature vectors for the images in the database,

d_{Ma}(X_i, X_j) = (X_i - X_j)^T C^{-1} (X_i - X_j),  (7)

where C is the covariance matrix, and the resulting distance is used as the matching score. The matching scores were converted to the range [0, 1] using the min-max normalization approach before a decision was made. The normalized scores V'_K are calculated from the given matching scores V_K, where K = 1, 2, 3, \ldots, n, as follows:

V'_K = \frac{V_K - \min}{\max - \min}.  (8)
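A small NumPy sketch of this matching and normalization step, following Eqs. (7) and (8), is given below; converting distances to similarities by negation before min-max normalization is our assumption about how the scores are oriented.

```python
import numpy as np

def match_scores(query, templates, cov_inv):
    """Mahalanobis-type distances (Eq. (7)) between the query feature vector
    and every template, converted to similarity scores and min-max
    normalized to [0, 1] (Eq. (8))."""
    diffs = templates - query
    d = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)   # one distance per template
    s = -d                                                # smaller distance -> larger score
    return (s - s.min()) / (s.max() - s.min() + 1e-12)    # min-max normalization
```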
3.3 Symmetric Sum for Score-Level Fusion
Cheniti et al. presented a framework for SLF based on symmetric sums (S-sums) that are generated via triangular norms. They tested the framework on two publicly available benchmark databases, the NIST-multimodal database and the NIST-fingerprint database, where they recorded accuracies of 99.80% and 90.75%, respectively, for the best performing methods.15

Fig. 5 ROI images for (a) major and (c) minor finger knuckles and (b), (d) their respective BSIF filter outputs (9 × 9, 12 bit and 15 × 15, 12 bit).
S-sums are a class of binary functions introduced in 1979 by Silvert as a rule for combining fuzzy sets.22 What distinguishes S-sums is an autoduality property, i.e., the invariance of the outcome of the operation when the scale of the values to be combined is inverted. They are formed from S-norms (and T-norms). An S-norm S(a, b) is a function mapping [0, 1] × [0, 1] to [0, 1] that satisfies the following conditions for any a, b, c, d \in [0, 1].
Basic requirement: S: [0, 1] × [0, 1] \to [0, 1].
Boundary: S(1, 1) = 1, S(a, 0) = S(0, a) = a.
Monotonicity: S(a, b) \le S(c, d) if a \le c and b \le d.
Commutativity: S(a, b) = S(b, a).
Associativity: S(a, S(b, c)) = S(S(a, b), c).
Table 2 shows some of the t-norms that have been used to generate the S-sums used in this study.
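As an illustration, the sketch below builds a symmetric sum from a t-norm using the common construction s(x, y) = T(x, y) / (T(x, y) + T(1 - x, 1 - y)); we assume this is the construction intended here (Ref. 15 gives the exact variants), and only the probabilistic and Hamacher t-norms from Table 2 are shown.

```python
import numpy as np

def t_probabilistic(x, y):
    """Probabilistic t-norm from Table 2: T(x, y) = xy."""
    return x * y

def t_hamacher(x, y):
    """Hamacher t-norm from Table 2: T(x, y) = xy / (x + y - xy)."""
    return (x * y) / (x + y - x * y + 1e-12)

def symmetric_sum(x, y, t_norm=t_probabilistic):
    """Assumed symmetric-sum construction from a t-norm, applied element-wise
    to two vectors of normalized match scores in [0, 1]."""
    num = t_norm(x, y)
    den = num + t_norm(1.0 - x, 1.0 - y)
    return num / (den + 1e-12)

# Example: fuse major- and minor-knuckle score vectors for three subjects.
major = np.array([0.91, 0.10, 0.35])
minor = np.array([0.84, 0.22, 0.40])
fused = symmetric_sum(major, minor)   # probabilistic S-sum
```

Under this assumed construction, the probabilistic t-norm reduces to xy / (xy + (1 - x)(1 - y)), which would correspond to the "SLF: probabilistic" rule reported later in Table 4.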
4 Experiments and Results
In this section, Hong Kong Polytechnic University's publicly available finger knuckle image database (v1.0) is used with the proposed methodology, and the results are demonstrated using different experiments. The first set of experiments is conducted on the major finger knuckle recognition system, whereas the second set is based on the minor finger knuckle recognition system. Fusion-based experiments are performed based on feature-level fusion of the major and minor finger knuckle recognition systems, score-level fusion of the major and minor finger knuckle recognition systems, and symmetric sum fusion of the scores of the major and minor finger knuckle recognition systems. Afterward, a comparison of experimental results using PCA, BSIF, and AlexNet is presented.
4.1 Database Descriptions
The evaluation of the proposed methodology was performed using the freely available knuckle
image database (version 1.0) from Hong Kong Polytechnic University. 2515 dorsal images of
fingers from 503 individuals are stored in the database. Each dorsal finger image has its major
and minor knuckles annotated as regions of interest by the owner of the database. Nearly 88% of
the people in this dataset are under the age of 30. Bitmap (*.bmp) is the format used by these
images. There are five images for each finger type. In total, 5030 images were used for the major and minor knuckles, with 503 × 5 = 2515 images each.4
Finger knuckle images, however, need their own coordinate system, whose coordinates can be used to crop a region of interest (ROI) from the source image in preparation for feature extraction.24 To improve personal identification methods based on major and minor finger knuckle shapes, dedicated ROI image extraction is required. The ROI traits of the major and minor areas are provided in the database used for this study. The major/minor knuckle region of each finger dorsal image has been segmented into a fixed-size chunk of 160 × 180 pixels, as shown in Fig. 6, where Fig. 6(b) shows the major knuckle and Fig. 6(c) shows the minor knuckle area of the finger shown in Fig. 6(a).
Table 2 Examples of t-norms used to generate S-sums.23

T-norm | Formulation
Probabilistic | xy
Hamacher | xy / (x + y - xy)
Yager (p > 0) | max[1 - ((1 - x)^p + (1 - y)^p)^{1/p}, 0]
Schweizer and Sklar (p > 0) | [max(x^p + y^p - 1, 0)]^{1/p}

Meanwhile, for the purpose of reducing training error, deep learning often needs training samples in the thousands. However, the Hong Kong Polytechnic University dataset does not contain the required quantity; consequently, we created more images from the dataset using the Keras data generator, which makes tiny but noticeable changes to the images, such as shear (0.15), rotation (100), width shift (0.05), zoom (0.2), height shift (0.02), and brightness. As a result, the total number of images increases to 12,575 (503 × 5 × 5 = 12,575). Table 3 also shows the dataset augmentation details used in the AlexNet and modified AlexNet models.
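A sketch of this augmentation with the Keras data generator is given below; the parameter values mirror the figures quoted above, while the brightness range and the way the generator is invoked are our assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Sketch of the augmentation described above. Values follow the text;
# the brightness range is an assumed interval since only "brightness" is stated.
augmenter = ImageDataGenerator(
    shear_range=0.15,
    rotation_range=100,            # rotation value as quoted in the text
    width_shift_range=0.05,
    height_shift_range=0.02,
    zoom_range=0.2,
    brightness_range=(0.8, 1.2),   # assumed range
)

# Example usage (images: array of shape (N, 160, 180, 1), labels aligned):
# for batch_x, batch_y in augmenter.flow(images, labels, batch_size=32):
#     ...  # feed augmented batches to the CNN during training
```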
4.2 Experimental Results
The experiments carried out can be categorized into two phases: the first phase is the recognition
system using unimodal finger knuckles, namely major and minor, respectively. The second phase is
the combination of the two systems at the score level. The feature extraction is performed using a
modified AlexNet CNN model while also experimenting with other CNN models, AlexNet and
ResNet50, and hand-crafted feature extraction descriptors, namely PCA and BSIF, for comparison.
Cross-validation was performed throughout the experiment to ensure that there was no over-
fitting. In BSIF and PCA, 40% of the dataset was used for testing, whereas 60% was used for
training, as shown in Table 3. The datasets used for training and testing were swapped for a
second session of training and testing, ending up with two results. The average of the two results
is taken as the true accuracy of the system.
Similarly, for CNN models, 80% of the dataset was used for training, whereas 20% was used
for testing, as shown in Table 3. Additionally, 20% of the training set was used for validation
during training for CNN models. Figures 7-9 show the training process for all CNN models, where the validation accuracy per epoch is used to show how the system gets more accurate with increasing epochs. The results are shown in Table 4.
For cross-validation, a second set of training and testing datasets was formed from the origi-
nal dataset for training and testing. The average of the two results was also taken as the true
accuracy of the system, as in the BSIF and PCA experiments.
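The swap-based validation described above can be sketched as follows; the scikit-learn-style fit/score interface of the classifier is an assumption for illustration.

```python
import numpy as np

def two_fold_accuracy(model_factory, X_a, y_a, X_b, y_b):
    """Train on split A and test on split B, then swap the roles, and report
    the mean of the two accuracies as the system accuracy. model_factory is
    any callable returning a fresh classifier with fit/score methods."""
    accs = []
    for X_tr, y_tr, X_te, y_te in [(X_a, y_a, X_b, y_b), (X_b, y_b, X_a, y_a)]:
        model = model_factory()
        model.fit(X_tr, y_tr)
        accs.append(model.score(X_te, y_te))
    return float(np.mean(accs))
```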
Experimental results shown in Table 4 indicate that the major finger knuckle system produces better results in both PCA and BSIF, with 65.86% and 91.90%, respectively, accounting for 3% better performance in both cases. Similarly, major knuckles performed slightly better than minor knuckles in AlexNet, with an accuracy of 97.91%. Employing the proposed fusion scheme reveals that combining the two images (that is, major and minor finger knuckles) offers better results across the majority of S-sum fusion methods tested. Figure 7 shows that the modified AlexNet model favored in this study has a more consistent accuracy per epoch compared to the modified ResNet50 shown in Fig. 8. It is also highly comparable to the original AlexNet shown in Fig. 9.
Table 3 Dataset description used with the feature extraction methods.

Dataset | Model | Subjects | Total images | Train images | Test images
PolyUKnuckleV1 | PCA | 503 | 503 × 5 = 2515 | 1509 | 1006
PolyUKnuckleV1 | BSIF | 503 | 503 × 5 = 2515 | 1509 | 1006
Augmented PolyUKnuckleV1 | Modified AlexNet | 503 | 503 × 5 × 5 = 12,575 | 10,060 | 2515
Fig. 6 Image of a sample of finger knuckles showing (a) the finger versus (b), (c) the ROIs.7
4.3 Comparison with the State-of-the-Art
The proposed system is also compared with the state-of-the-art, including the systems from Refs. 7, 11, and 25-27, as shown in Table 5. Both unimodal and multimodal systems are compared in terms of their recognition results in this table. The proposed method outperforms the systems developed in Refs. 7 and 27, which utilized the same database as this study and obtained 99.60% and 93.44% recognition accuracy, respectively. These systems employed different approaches from the proposed system: BSIF was combined with PCA and LDA in Ref. 7, whereas a deep learning model called PCAnet was used with SVM in Ref. 27. Therefore, Table 5 shows that the proposed methodology is better than the state-of-the-art using the same dataset for finger knuckle recognition.
Fig. 8 Training and validation accuracy for modified ResNet50 per epoch: (a) major and (b) minor
finger knuckles.
Fig. 9 Training and validation accuracy for original AlexNet per epoch: (a) major and (b) minor
finger knuckles.
Fig. 7 Training and validation accuracy for modified AlexNet per epoch: (a) major and (b) minor
finger knuckles.
Other studies that used other datasets are also shown in Table 5 to show that the proposed system compares favorably with other studies across different datasets. Generally, it can be seen that the suggested approach performs better than the research in Refs. 7, 11, and 25-27, with the exception of Ref. 26, which has a 100% recognition rate. The comparison of the data in Table 5 demonstrates that the proposed methodology performs favorably against state-of-the-art methods in finger knuckle biometrics.
Table 4 Major and minor finger knuckle experiments compared to fusion methods (accuracy, %).

Strategy | Modified AlexNet | AlexNet | Modified ResNet50 | BSIF | PCA
Major knuckles | 97.91 | 97.82 | 92.56 | 91.90 | 65.86
Minor knuckles | 97.38 | 96.25 | 94.92 | 88.37 | 62.53
SLF | 99.36 | 98.97 | 98.89 | 93.14 | 75.84
SLF: probabilistic | 99.68 | 99.60 | 99.68 | 93.49 | 76.59
SLF: Hamacher | 99.68 | 99.26 | 99.56 | 93.64 | 74.90
SLF: Yager (p = 10) | 99.20 | 99.52 | 99.60 | 93.29 | 75.40
SLF: Schweizer and Sklar (p = 0.1) | 99.28 | 99.32 | 99.68 | 93.14 | 76.04

Table 5 Comparison with the state-of-the-art.

Ref. no. | Publ. year | Biometric trait | Method | Database | Unimodal accuracy (max{major, minor}) (%) | Multimodal accuracy (%)
25 | 2019 | Knuckle and nail plate of index, middle, and ring fingers | AlexNet | ImageNet | N/A | 97.19
26 | 2020 | Left index, left middle, right index, and right middle fingers | CNN | PolyU-FKP | 99.93 | 100
27 | 2020 | Major and minor finger dorsal knuckles | PCAnet + SVM | PolyUKnuckleV1 | 88.27 | 93.44
11 | 2021 | Finger knuckles from three fingers | BSIF + DRB | PolyU-FKP | 97.95 | 99.65
7 | 2021 | Major, minor, and dorsal finger knuckle | BSIF, PCA + LDA | PolyUKnuckleV1 | 95.43 | 99.60
Proposed system | 2023 | Major and minor dorsal finger knuckle | Modified AlexNet | PolyUKnuckleV1 | 97.91 | 99.68

5 Conclusion
Recent research has shown that the skin pattern at the knuckles is made up of wrinkles and lines and that each user's finger knuckle texture pattern is quite distinctive, making this surface unique for biometric identification. It is common knowledge that using a combination of traits can improve a biometric system's accuracy. Hence, this study explored the advantage of using multiple human traits, which can be captured at the same time, in biometrics to create a strong human recognition system. The minor and major finger knuckles found on the dorsal part of the hand are used to build a multimodal biometric system in this study. Feature extraction methods employed in this experiment include hand-crafted feature extraction descriptors, namely PCA and BSIF, as well as deep learning-based CNN models: AlexNet, modified AlexNet, and ResNet50.
Preliminary experimental results comparing the individual accuracy of the major finger knuckle system and the minor finger knuckle system show that the former performed better in both PCA and BSIF due to the presence of clearer patterns on the major finger knuckles. However, this is not so in all CNN models. This study proposed fusing the two traits together at the score level using symmetric sum methods. Experimental results show significant improvement in the system, especially in the case of PCA, where as much as 15.1% improvement was made. The overall best accuracy achieved is 99.68%, reached by the probabilistic symmetric sum algorithm for score-level fusion on the modified AlexNet model.
In future work, other finger knuckle datasets may be employed to increase the validity of the experimental results. Additionally, various deep learning architectures, such as other variants of ResNet, VGG-19, and MobileNet, should be used for finger knuckle recognition.
Code, Data, and Materials Availability
The archived version of the code described in this manuscript can be freely accessed through
GitHub [https://github.com/babsfeoba/knuckles_biometrics.git].
References
1. A. Kumar and C. Ravikanth, "Personal authentication using finger knuckle surface," IEEE Trans. Inf. Forensics Security 4(1), 98-110 (2009).
2. E. Rani and R. Shanmugalakshmi, "Finger knuckle print recognition techniques - a survey," Int. J. Eng. Sci. 2(11), 62-69 (2013).
3. L. Sathiya and V. Palanisamy, "A survey on finger knuckle print based biometric authentication," Int. J. Comput. Sci. Eng. 6(8), 236-240 (2018).
4. A. Kumar, "Importance of being unique from finger dorsal patterns: exploring minor finger knuckle patterns in verifying human identities," IEEE Trans. Inf. Forensics Security 9, 1288-1298 (2014).
5. O. S. Adeoye, "A survey of emerging biometric technologies," Int. J. Comput. Appl. 9(10), 1-5 (2010).
6. Z. Akhtar et al., "Face recognition under ageing effect: a comparative analysis," Lect. Notes Comput. Sci. 8157, 308-318 (2013).
7. A. Attia, Z. Akhtar, and Y. Chahir, "Feature-level fusion of major and minor dorsal finger knuckle patterns for person authentication," Signal Image Video Process. 15(4), 851-859 (2021).
8. K. Usha and M. Ezhilarasan, "Robust personal authentication using finger knuckle geometric and texture features," Ain Shams Eng. J. 9(4), 549-565 (2018).
9. S. M. Rathod, S. D. Sapkal, and R. R. Deshmukh, "Finger knuckle print based biometric identification of a person using LBP and Bernoulli classifier," Int. J. Sci. Dev. Res. 4(10), 17-22 (2019).
10. W. Li, "Biometric recognition of finger knuckle print based on the fusion of global features and local features," J. Healthc. Eng. 2022, 6041828 (2022).
11. A. Attia et al., "Deep rule-based classifier for finger knuckle pattern recognition system," Evol. Syst. 12(4), 1015-1029 (2021).
12. P. Jayapriya and K. Umamaheswari, "Performance analysis of two-stage optimal feature-selection techniques for finger knuckle recognition," Intell. Autom. Soft Comput. 32(2), 1293-1308 (2022).
13. H. Heidari and A. Chalechale, "Biometric authentication using a deep learning approach based on different level fusion of finger knuckle print and fingernail," Expert Syst. Appl. 191, 116278 (2022).
14. R. R. O. Al-Nima et al., "Finger texture biometric characteristic: a survey," arXiv:2006.04193, pp. 1-17 (2020).
15. M. Cheniti, B. Eddine, and Z. Akhtar, "Symmetric sums-based biometric score fusion," IET Biometr. 7, 391-395 (2017).
16. A. Herbadji et al., "Weighted quasi-arithmetic mean based score level fusion for multi-biometric systems," IET Biometr. 9, 91-99 (2020).
17. F. O. Babalola, Ö. Toygar, and Y. Bitirim, "Boosting hand vein recognition performance with the fusion of different color spaces in deep learning architectures," Signal Image Video Process. 17, 4375-4383 (2023).
18. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM 60(6), 84-90 (2017).
19. F. O. Babalola, Y. Bitirim, and Ö. Toygar, "Palm vein recognition through fusion of texture-based and CNN-based methods," Signal Image Video Process. 15, 459-466 (2021).
20. K. He et al., "Deep residual learning for image recognition," in IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR), pp. 770-778 (2016).
21. R. A. Rasool, "Feature-level vs. score-level fusion in the human identification system," Appl. Comput. Intell. Soft Comput. 2021, 1-10 (2021).
22. I. Bloch and H. Maître, "Fusion in image processing," in Information Fusion in Signal and Image Processing, I. Bloch, Ed., pp. 47-56, Wiley (2008).
23. M. Hanmandlu et al., "Score level fusion of multimodal biometrics using triangular norms," Pattern Recognit. Lett. 32(14), 1843-1850 (2011).
24. W. Li, "Biometric recognition of finger knuckle print based on the fusion of global features and local features," J. Healthc. Eng. 2022, 6041828 (2022).
25. S. Hom Choudhury, A. Kumar, and S. H. Laskar, "Biometric authentication through unification of finger dorsal biometric traits," Inf. Sci. 497, 202-218 (2019).
26. S. Trabelsi et al., "Finger-knuckle-print recognition using deep convolutional neural network," in CCSSP 2020 - 1st Int. Conf. Commun. Control Syst. Signal Process., April 2021, pp. 163-168 (2020).
27. N. E. Chalabi, A. Attia, and A. Bouziane, "Multimodal finger dorsal knuckle major and minor print recognition system based on PCANet deep learning," ICTACT J. Image Video Process. 10(3), 2153-2158 (2020).
Felix Olanrewaju Babalola received his BSc, MSc, and PhD degrees in computer engineering
from Eastern Mediterranean University (EMU), Northern Cyprus, in 2015, 2017, and 2022,
respectively. He worked as a research assistant between 2016 and 2022 at EMU and as a senior
instructor between 2022 and 2023. He is currently an assistant professor in the Faculty of Engineering at Final International University, Northern Cyprus. His research interests include
biometrics, bioinformatics, and artificial intelligence.
Shehu Hamidu received his BEng from Electrical and Computer Engineering Department of
Federal University of Technology, Minna, Nigeria, in 2010 and MS degree from Computer
Engineering Department of Eastern Mediterranean University, Northern Cyprus, in 2023.
Currently, he is working as an academic staff member at Niger State Polytechnic, Zungeru, Nigeria.
He has served in different academic positions. His current research interests are in the area
of biometrics, computer vision, and deep learning.
Önsen Toygar received her BS, MS, and PhD degrees from the Computer Engineering
Department of Eastern Mediterranean University, Northern Cyprus, in 1997, 1999, and 2004,
respectively. Since September 2004, she has worked in the Computer Engineering Department
of Eastern Mediterranean University. She is currently a professor in the department, where she
has served in different capacities. Her current research interests are in the areas of biometrics,
computer vision, image processing and digital forensics.
This paper presents a deep learning method for human authentication based on hand dorsal characteristics. The proposed method uses the fingernail (FN) and the finger knuckle print (FKP) extracted from the ring, middle and index fingers. The proposed method was evaluated using a dataset of 1090 hand dorsal images (10 each from 109 persons) which are processed by the hand skin detection, the denoising method, and the procedure adopted for extraction of both finger knuckle and fingernail. A multimodal biometric scheme is used to improve the authentication performance of the proposed system and make it more resistant to spoofing attacks. A Deep learning-based approach using a convolutional neural network (CNN) with AlexNet as a pre-trained model is employed. Different features, extracted from hand images, were combined at different levels using normalization and fusion methods proposed by the authors. Experimental results demonstrate efficiency, robustness, and reliability of the proposed biometric system compared to existing alternatives. Consequently, it can be developed in many real-world applications.