Figure 2 - uploaded by Dane Lesley Brown
Source publication
The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in...
Citations
Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is complex in nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to fuse their feature vectors effectively for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms.
A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better
adaptation to dynamic lighting conditions, while reducing feature dimensionality to the
benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets, using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram, and linear Support Vector Machine, on various feature extraction methods. The recognition performance of the proposed algorithms is shown to outperform the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap, such as fewer training samples or data with a greater number of classes.
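To make the Local Binary Pattern Histogram classifier named above concrete, the sketch below computes a basic 8-neighbour LBP code map and its normalised histogram in NumPy. This is a minimal illustrative variant (no uniform patterns, no spatial grid of sub-histograms), not the exact implementation evaluated in the study.

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Basic 8-neighbour Local Binary Pattern histogram.

    Each interior pixel is compared with its 8 neighbours; every
    neighbour that is >= the centre sets one bit of an 8-bit code.
    The normalised histogram of codes is the feature vector.
    """
    img = np.asarray(image, dtype=np.int32)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets, ordered clockwise to build the binary code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalise so differently sized images compare
```

Because the histogram is normalised, the resulting vectors can be compared with a simple distance measure or fed to a linear classifier regardless of input image size.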
A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions, which occur when the image quality of the two modalities differs substantially. However, when two poor-quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach.
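One common way to fuse modality data at the feature level, consistent with the combination of feature vectors described above, is to normalise each modality's vector and concatenate the results. The sketch below uses per-modality z-score normalisation before concatenation; the scheme and the face/palm example vectors are illustrative assumptions, not the study's exact method.

```python
import numpy as np

def fuse_features(feature_vectors):
    """Feature-level fusion sketch: z-score-normalise each modality's
    feature vector, then concatenate into one fused vector.

    Normalising per modality before concatenation prevents a modality
    with a larger numeric range from dominating the fused representation.
    """
    fused = []
    for vec in feature_vectors:
        v = np.asarray(vec, dtype=np.float64)
        std = v.std()
        fused.append((v - v.mean()) / std if std > 0 else v - v.mean())
    return np.concatenate(fused)

# Hypothetical vectors standing in for face and palmprint features.
face = [0.2, 0.9, 0.4]
palm = [120.0, 80.0, 200.0, 50.0]
fused = fuse_features([face, palm])
```

The fused vector can then be passed to a single classifier, such as a linear Support Vector Machine, in place of either unimodal vector.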
Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.