IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 7, JULY 2009 1665
Reconstructing Orientation Field From Fingerprint
Minutiae to Improve Minutiae-Matching Accuracy
Fanglin Chen, Jie Zhou, Senior Member, IEEE, and Chunyu Yang
Abstract—Minutiae are very important features for fingerprint representation, and most practical fingerprint recognition systems store only the minutiae template in the database for further usage. Conventional methods utilize this minutiae information by treating it as a point set and finding the matched points between different minutiae sets. In this paper, we propose a novel algorithm for using minutiae in fingerprint recognition, in which the fingerprint's orientation field is reconstructed from the minutiae and further utilized in the matching stage to enhance the system's performance. First, we produce "virtual" minutiae by interpolation in the sparse areas, and then use an orientation model to reconstruct the orientation field from all "real" and "virtual" minutiae. A decision fusion scheme combines the reconstructed orientation field matching with conventional minutiae-based matching. Since the orientation field is an important global feature of fingerprints, the proposed method obtains better results than conventional methods. Experimental results illustrate its effectiveness.
Index Terms—Decision fusion, fingerprint recognition, interpolation, orientation field, polynomial model.
I. INTRODUCTION
Biometric technologies have shown more and more importance in various applications. Among them, fingerprint recognition is considered one of the most reliable and has been extensively used in personal identification. In recent years, this technology has received increasing attention [1].
Minutiae are ridge endings or bifurcations on a fingerprint. Characterized by their coordinates and directions, they are among the most distinctive features for representing the fingerprint. Most fingerprint recognition systems [1] store the minutiae template (sometimes together with singular points) in the database. Such minutiae-based fingerprint recognition systems consist of two steps, i.e., minutiae extraction and minutiae matching. In the minutiae matching process, the minutiae feature of a given fingerprint is compared with the minutiae template, and the matched minutiae are found. If the matching score exceeds a predefined threshold, the two fingerprints can be regarded as belonging to the same finger.
Such algorithms are representative ways to utilize minutiae information for fingerprint recognition. However, is this the best way? In [2], the authors showed that this kind of method cannot provide enough distinguishing ability for large-scale fingerprint identification tasks. Obviously, better usage of minutiae is very important for fingerprint recognition systems. In this paper, we will propose a novel method to
Manuscript received October 30, 2007; revised February 23, 2009. First
published May 02, 2009; current version published June 12, 2009. This work
was supported in part by the National 863 Hi-Tech Development Program of
China under Grant 2008AA01Z123, in part by the Natural Science Foundation
of China under Grants 60205002 and 60875017, and in part by the Natural
Science Foundation of Beijing under Grant 4042020. The associate editor
coordinating the review of this manuscript and approving it for publication was
Dr. Gabriel Marcu.
The authors are with the Department of Automation, Tsinghua National
Laboratory for Information Science and Technology (TNList), and the State
Key Laboratory on Intelligent Technology and Systems, Tsinghua University,
Beijing 100084, China (e-mail: chen-fl06@mails.tsinghua.edu.cn; jzhou@tsinghua.edu.cn; yangchunyu@mails.thu.edu.cn).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2009.2017995
Fig. 1. Flowchart of the proposed algorithm.
use minutiae information for fingerprint recognition. The main idea is to reconstruct the fingerprint's orientation field from the minutiae and further utilize it in the matching stage to enhance the system's performance. Its usage lies in the following aspects. 1) In order to reduce storage, many practical fingerprint recognition systems store only the minutiae feature in the database, and the original images are not saved. By using the proposed method, the performance of these systems can be improved without the original fingerprint images. 2) In some other practical systems, although the original images are saved, it is still impractical to compute the orientation fields and save them into the database: for these systems, very large databases of minutiae templates have already been established, and a complete update of the database (e.g., adding additional orientation features) would be very costly. In this case, a better way is to compute the orientation field from the saved minutiae template and use it to improve the performance of the system.
As a global feature, orientation field describes one of the basic
structures of a fingerprint [3]–[6]. When it is complemented with
the minutiae, a local feature, we can get more information. Thus, a
better performance can be obtained by fusing the results of orientation
field matching with conventional minutiae-based matching. Some studies [5], [7] showed that incorporating local (minutiae) and global (orientation field) features can largely improve the performance.
However, as stated above, in many practical fingerprint recognition
systems, the original images and orientation field images are not saved,
and we cannot compute the orientation field directly. In some other sys-
tems, additional orientation features cannot be saved into the existing
database easily, and we have to compute the orientation field by only
using the information of minutiae template. Ross et al. [8] proposed
an interpolation algorithm to estimate the orientation field from minu-
tiae template (they used it to predict the class of the fingerprint but
not for fingerprint matching), in which the orientation of a given point
was computed from its neighboring minutiae. To consider the global
information, we will use the orientation model [3] to reconstruct the
orientation field from the minutiae. First, we interpolate a few "virtual" minutiae in the sparse areas, and then apply the model-based method to these mixed minutiae (including the "real" and "virtual" minutiae). After that, the reconstructed orientation field is used in the matching stage in combination with conventional minutiae-based matching. Fig. 1 shows the flowchart of the proposed matching algorithm.
Fig. 2. Illustration of effective region estimation.
The rest of the paper is organized as follows: Section II introduces
the algorithm of reconstructing orientation field. In Section III, an al-
gorithm of fingerprint recognition by combining minutiae and orien-
tation field is described. The experimental results and the evaluation
of the algorithm's performance are presented in Section IV. We conclude and discuss applications of our approach in Section V.
II. RECONSTRUCTING ORIENTATION FIELD FROM MINUTIAE
A. Estimating the Effective Region
When only the minutiae feature is available, the effective region must be extracted from the minutiae information alone, without using the ridges and valleys. In this situation, we can extract the effective region by finding the smallest envelope that contains all the minutiae points. See Fig. 2 for an illustration; the original image is shown alongside for visual reference.
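The smallest envelope described above can be taken as the convex hull of the minutiae coordinates. The following is a minimal sketch using Andrew's monotone chain algorithm; the function name and point representation are illustrative, not from the paper.

```python
def convex_hull(points):
    """Return the hull vertices of a set of (x, y) points in CCW order.

    The hull is the 'smallest envelope' containing all minutiae points.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```

Points strictly inside the envelope (e.g., interior minutiae) are discarded, leaving only the boundary of the effective region.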
B. Interpolation
The minutiae of a fingerprint always distribute nonuniformly, which results in some sparse regions containing few minutiae. If we use an approximation method to reconstruct the orientation field from these minutiae, the performance in the sparse regions will be poor. In order to compensate for the sparse areas, we produce "virtual" minutiae by interpolation [8] before modeling.
Since the orientation field of a fingerprint always changes smoothly, it is possible to estimate the direction at a point by examining the directions of the minutiae in its local region. Therefore, by observing the directions of a group of neighboring minutiae, we can obtain the orientation field by interpolation. In order to interpolate "virtual" minutiae in the sparse areas, we choose three minutiae points to construct a triangle and estimate the orientation field inside the triangle from these three minutiae. The algorithm has the following two main steps.
1) Triangulation:
We divide the fingerprint into many triangles. Given a set of points in the plane, the simplest way to triangulate them is to add diagonals from the first point to all of the others. However, this tends to create skinny triangles. In this study, we want to avoid skinny triangles, or equivalently, small angles in the triangulation. We therefore use the Delaunay triangulation, which maximizes the minimum angle over all possible triangulations (refer to [9]–[11] for details); it can be constructed in O(n log n) time. The triangles do not intersect, so each point is covered by exactly one triangle, as shown in Fig. 4(b).
Fig. 3. Computation of a pixel in a triangle.
2) Producing "virtual" minutiae using interpolation:
Let (x, y) (x and y are the coordinates of the corresponding point) denote a "virtual" minutia located inside a triangle, let d_i be the Euclidean distance of this "virtual" minutia from the i-th vertex v_i, and let θ_i be the direction corresponding to vertex v_i, i = 1, 2, 3. Clearly, a vertex should affect the "virtual" minutia more if it is closer to it than the other vertices. Thus, the direction of the pixel (x, y) is estimated as in (1)–(4)

  w_i = (1/d_i) / (Σ_{j=1}^{3} 1/d_j),  i = 1, 2, 3.   (1)

Since θ_1, θ_2, and θ_3 are rotationally symmetric, we can assume that θ_1 ≤ θ_2 ≤ θ_3 (if, for example, θ_1 > θ_2, we simply exchange the notation of the two vertices); then we have

  θ'_i = { θ_i + π,  if θ_3 − θ_i > π/2
         { θ_i,      otherwise,        i = 1, 2;   θ'_3 = θ_3.   (2)

The ridge line orientation is defined in the range [0, π). Equation (2) takes into account that phase jumps may appear in the estimation. For example, let θ_1 = 5° and θ_2 = θ_3 = 175°. Then it falls in the first case: θ_3 − θ_1 = 170° > 90°, so θ'_1 = 185°. If we did not apply (2), the estimated direction would lie in the range (5°, 175°) according to (3) and (4); actually, it should lie near 175°, in [175°, 180°) ∪ [0°, 5°]. The weighted average is computed as

  θ' = Σ_{i=1}^{3} w_i θ'_i   (3)
Fig. 4. Interpolation step: (a) the minutiae image; (b) the triangulated image; (c) virtual minutiae by interpolation (the bigger red minutiae are “real,” while the
smaller purple ones are “virtual”).
and the final direction θ of the "virtual" minutia is calculated as

  θ = θ' mod π.   (4)
The computation is illustrated in Fig. 3. After the interpolation
step, the minutiae distribute “uniformly” as shown in Fig. 4(c).
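The two steps above can be sketched as follows. This is an illustrative reading, not the authors' code: SciPy's Delaunay routine stands in for the algorithms of [9]–[11], and the grid step, function names, and the handling of points coinciding with a vertex are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def interpolate_direction(p, verts, thetas):
    """Estimate the ridge direction (in [0, pi)) at point p inside a triangle,
    following Eqs. (1)-(4): inverse-distance weights with phase-jump handling."""
    d = np.linalg.norm(verts - p, axis=1)
    if np.any(d < 1e-9):                           # p coincides with a vertex
        return float(thetas[int(np.argmin(d))] % np.pi)
    w = (1.0 / d) / np.sum(1.0 / d)                # Eq. (1)
    order = np.argsort(thetas)                     # sort so theta_1 <= theta_2 <= theta_3
    th = np.asarray(thetas, dtype=float)[order]
    w = w[order]
    th_adj = th.copy()
    for i in range(2):                             # Eq. (2): phase-jump correction
        if th[2] - th[i] > np.pi / 2:
            th_adj[i] += np.pi
    return float(np.sum(w * th_adj) % np.pi)       # Eqs. (3)-(4)

def virtual_minutiae(points, thetas, grid_step=16.0):
    """Delaunay-triangulate the real minutiae and interpolate directions on a
    regular grid; returns a list of (x, y, theta) 'virtual' minutiae."""
    tri = Delaunay(points)
    out = []
    for x in np.arange(points[:, 0].min(), points[:, 0].max() + 1, grid_step):
        for y in np.arange(points[:, 1].min(), points[:, 1].max() + 1, grid_step):
            s = int(tri.find_simplex(np.array([x, y])))
            if s >= 0:                             # inside the triangulated region
                idx = tri.simplices[s]
                out.append((x, y, interpolate_direction(
                    np.array([x, y]), points[idx], thetas[idx])))
    return out
```

For the example in the text (θ_1 = 5°, θ_2 = θ_3 = 175°), the correction in Eq. (2) pulls the estimate to just under 180° instead of an average near 118°.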
C. Reconstructing Using an Orientation Model
Many models have been proposed for the orientation field. Gu and Zhou [3] proposed a combination model that establishes a polynomial model to globally represent the orientation field and uses a point-charge model to improve the accuracy locally at each singular point. When only the minutiae information is available, the polynomial model is the appropriate choice.
1) Polynomial model:
The orientation field is first mapped to a continuous complex function. Denoting θ(x, y) and U(x, y) as the orientation field and the transformed function, respectively, the mapping can be defined as

  U(x, y) = R(x, y) + i I(x, y) = cos(2θ(x, y)) + i sin(2θ(x, y))   (5)

where R(x, y) and I(x, y) denote, respectively, the real part and the imaginary part of the complex function U(x, y). Obviously, R and I are continuous with respect to x and y in the regions where the orientation field is continuous. The above mapping is a one-to-one transformation, and θ can be easily reconstructed from the values of R and I. To globally represent R(x, y) and I(x, y), two bivariate polynomial models are established, which are denoted by P_R(x, y) and P_I(x, y), respectively. These two polynomials can be formulated as

  P_R(x, y) = Σ_{i+j≤n} a_{ij} x^i y^j   (6)

and

  P_I(x, y) = Σ_{i+j≤n} b_{ij} x^i y^j   (7)

where i ≥ 0, j ≥ 0, and i + j ≤ n. In these two formulas, n is the order of the polynomial model. There are (n+1)(n+2) parameters of {a_{ij}} and {b_{ij}} which need
Fig. 5. Results of the proposed algorithm: (a) virtual minutiae by interpolation
(the bigger red minutiae are “real”, while the smaller purple ones are “virtual”);
(b) the reconstructed orientation field.
to be calculated. Computing the parameters is a fitting process. Using the square sum error for evaluation, the formula becomes

  min Σ_{(x,y)∈Ω} [ (P_R(x, y) − cos 2θ(x, y))² + (P_I(x, y) − sin 2θ(x, y))² ]   (8)

where Ω is the set of points of the effective region and θ(x, y) is the original orientation field.
2) Reconstructing the orientation field using the polynomial model:
When only the minutiae information is available, the sum in (8) can only be taken over the minutiae points. Fig. 5(b) shows the orientation field reconstructed based on the polynomial model.
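A minimal sketch of the fitting process in (6)–(8), restricted to the minutiae points as described: the coefficients are obtained by linear least squares on cos 2θ and sin 2θ. The function names and the default order n = 4 are assumptions, not from the paper.

```python
import numpy as np

def fit_orientation_model(xs, ys, thetas, n=4):
    """Fit P_R, P_I (Eqs. (6)-(7)) to cos(2*theta), sin(2*theta) at the
    (real and virtual) minutiae locations; returns terms and coefficients."""
    terms = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]  # i + j <= n
    A = np.stack([xs**i * ys**j for (i, j) in terms], axis=1)
    a, *_ = np.linalg.lstsq(A, np.cos(2 * thetas), rcond=None)  # P_R coefficients
    b, *_ = np.linalg.lstsq(A, np.sin(2 * thetas), rcond=None)  # P_I coefficients
    return terms, a, b

def reconstruct_orientation(terms, a, b, x, y):
    """Invert the mapping (5): theta = (1/2) * arg(P_R + i*P_I), in [0, pi)."""
    basis = np.array([x**i * y**j for (i, j) in terms])
    R, I = basis @ a, basis @ b
    return 0.5 * np.arctan2(I, R) % np.pi
```

Solving on (cos 2θ, sin 2θ) rather than on θ itself avoids the phase discontinuity at 0/π, which is exactly why the mapping in (5) is introduced.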
The interpolation step and the modeling step can each reconstruct the orientation field independently. In order to compare these two methods with the proposed method (interpolation-and-model, i.e., IM: first interpolation, then the model-based method), we compute the square sum error, which indicates the accuracy of each algorithm. The comparison shows that the model-based method works better than the interpolation method, and the interpolation-and-model algorithm performs best.
As an example, in Fig. 6(a) there is a minutia (marked with an ellipse) whose direction is wrongly estimated, and Fig. 6(b) shows the
Fig. 6. Comparison result I: (a) minutiae image with a wrong direction (marked with ellipse); (b) the corresponding poor result by interpolation (marked with
ellipse); (c) the corresponding good result by the proposed IM method.
Fig. 7. Comparison result II: (a) minutiae image with a sparse region (marked with ellipse); (b) the corresponding poor result by model-based algorithm (marked
with ellipse); (c) the corresponding good result by the proposed IM method.
corresponding poor result (marked with an ellipse) produced by interpolation. The orientation model takes all of the minutiae into account, so the effect of the wrongly estimated direction can be reduced by the other, correct minutiae. Thus, using the orientation model to reconstruct the orientation field can overcome the problem induced by such a minutia to some extent, through the positive contributions of the other minutiae. Fig. 6(c) shows the orientation field reconstructed with the polynomial model, which overcomes the problem induced by the wrongly estimated minutia (marked with an ellipse).
Another example is shown in Fig. 7. There is a sparse region (marked with an ellipse) in (a), and (b) shows the corresponding poor result (marked with an ellipse). After the interpolation, the minutiae distribute "uniformly." With this improvement, the model-based method achieves a better performance, as shown in (c).
III. FINGERPRINT MATCHING USING MINUTIAE AND THE
RECONSTRUCTED ORIENTATION FIELD
A. Reconstructed Orientation Field Matching
To compare two fingerprints' orientation fields, the first step is the alignment of the two fingerprints. It can be done in the same way as in conventional fingerprint algorithms, in which the alignment is mainly based on minutiae information [1]. In our study, we choose the Hough-transform-based approach [12] for the alignment, due to its simplicity. In the matching step, the correlation between the two aligned orientation fields, O_1 and O_2, is computed as below. Let Ω denote the intersection of the two effective regions after alignment, and let N be the total number of points in Ω. The matching score between the two orientation fields is defined as

  S(O_1, O_2) = (1/N) Σ_{(x,y)∈Ω} (1 − Δ(x, y)/(π/2)).   (9)

In (9), Δ(x, y) is the difference between the orientation values at the point (x, y) in images O_1 and O_2, which is formulated as follows:

  Δ(x, y) = { δ(x, y),      if δ(x, y) ≤ π/2
            { π − δ(x, y),  otherwise   (10)

and δ(x, y) is defined as

  δ(x, y) = |θ_1(x, y) − θ_2(x, y)|   (11)

where θ_1(x, y) and θ_2(x, y) are the directions of point (x, y) in images O_1 and O_2, respectively. If the matching score is higher than a certain threshold, we say the two orientation fields are "matched."
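A direct transcription of (9)–(11), assuming the two aligned orientation fields are given as equal-length arrays of directions in [0, π) sampled over Ω (names are illustrative):

```python
import numpy as np

def orientation_match_score(theta1, theta2):
    """Score in [0, 1] between two aligned orientation fields; 1 = identical."""
    delta = np.abs(theta1 - theta2)                  # Eq. (11)
    delta = np.where(delta <= np.pi / 2,
                     delta, np.pi - delta)           # Eq. (10): fold to [0, pi/2]
    return float(np.mean(1.0 - delta / (np.pi / 2))) # Eq. (9)
```

The folding in (10) reflects the π-periodicity of ridge orientation: directions of 5° and 175° are only 10° apart, not 170°.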
B. Combine Reconstructed Orientation Field Matching With
Minutiae Matching
A variety of combination rules have been proposed. In [13], it was shown that matching accuracy can be improved by combining independent matchers using the Neyman–Pearson rule. Here, we will also use the Neyman–Pearson rule for this task.
Fig. 8. ROCs of the minutiae matching scheme (solid line) and the proposed method (dashed line): (a), (b), (c) for Algorithm I and (d), (e), (f) for Algorithm II, on the FVC02 DB1, DB2, and THU testing databases, respectively.
Let s_1 and s_2 denote the scores from the minutiae-based matcher and the proposed orientation field matcher, respectively. Let ω_g denote the genuine class, while ω_i denotes the imposter class; then p(s_1, s_2 | ω_g) is the genuine class-conditional probability density function for (s_1, s_2), and p(s_1, s_2 | ω_i) denotes the imposter's. The error rates of the two classes are defined as

  e_g = ∫∫_{R_i} p(s_1, s_2 | ω_g) ds_1 ds_2   (12)

and

  e_i = ∫∫_{R_g} p(s_1, s_2 | ω_i) ds_1 ds_2   (13)

where R_g and R_i denote the decision regions of ω_g and ω_i, respectively. Our goal is to minimize ω_g's error rate (the false rejection rate) under a given, prespecified ω_i's error rate (the false acceptance rate). To do that, we define the likelihood ratio for s_1 and s_2 as

  L(s_1, s_2) = p(s_1, s_2 | ω_g) / p(s_1, s_2 | ω_i).   (14)
According to the Neyman–Pearson rule, for a given FAR the classification rule is

  decide ω_g if L(s_1, s_2) ≥ λ; otherwise decide ω_i   (15)

where λ is the threshold that minimizes the FRR (false rejection rate) under the given false acceptance rate (FAR). The key point here is to estimate the probability density functions p(s_1, s_2 | ω_g) and p(s_1, s_2 | ω_i). We tackle this problem by using a Parzen density estimation method on a training set. In our study, we use a 0.02 × 0.02 Parzen window to estimate the probability density functions, which can be saved for global usage.
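The fusion rule (12)–(15) with the Parzen estimate might be sketched as below. The 0.02 × 0.02 window is from the paper; the box kernel, the assumed score range, and all names are assumptions.

```python
import numpy as np

def parzen_density(train_scores, s1, s2, h=0.02):
    """Box-kernel Parzen estimate of p(s1, s2) from (s1, s2) training pairs."""
    t = np.asarray(train_scores, dtype=float)
    inside = (np.abs(t[:, 0] - s1) <= h / 2) & (np.abs(t[:, 1] - s2) <= h / 2)
    return inside.mean() / (h * h)       # fraction in window / window area

def decide(genuine_train, imposter_train, s1, s2, lam, eps=1e-12):
    """Neyman-Pearson rule (15): accept as genuine iff L(s1, s2) >= lambda."""
    pg = parzen_density(genuine_train, s1, s2)   # p(s1, s2 | genuine)
    pi = parzen_density(imposter_train, s1, s2)  # p(s1, s2 | imposter)
    return bool((pg + eps) / (pi + eps) >= lam)  # eps guards empty windows
```

In practice λ would be chosen on the training set so that the resulting FAR equals the prespecified value, then fixed for testing.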
IV. EXPERIMENTAL RESULTS
Our experiments are conducted on three databases, including two
public collections, FVC02 DB1 and DB2 [14], and the THU database
established by our lab [4].
A total of 3200 (400 × 8) fingerprints are randomly chosen from the THU database to form the training set for the fusion scheme. The remaining fingerprints in the THU database, together with FVC02 DB1 and DB2, are used as the testing set.
For the THU database, genuine-matching pairs are formed among the eight impressions of each finger, for the training set and the testing set respectively. Since the number of imposter pairs would be far larger than the number of genuine pairs if all imposter pairs were matched, our strategy is to randomly choose two fingerprints from the eight that come from the same finger to form a subset; every imposter pair in this subset is then tested, which determines the numbers of imposter matchings in the training set and the testing set. For DB1 and DB2, the genuine-matching pairs and imposter-matching pairs are formed analogously.
Since the final fusion result depends on the minutiae extraction and verification algorithm, we carry out experiments using two different algorithms. Algorithm I is the one used in [4], and Algorithm II is similar to that reported in [15].
For Algorithm I and Algorithm II, we compare two fingerprint recognition systems: one using the conventional matching method and the other using the proposed method. In these two systems, the minutiae information is the same; the only difference lies in the reconstructed orientation field information, which is added in the proposed scheme. Fig. 8 shows the receiver operating characteristic (ROC) curves plotting FAR versus FRR of the conventional minutiae matching scheme (solid line) and the proposed scheme (dashed line): (a), (b), (c) for Algorithm I and (d), (e), (f) for Algorithm II, on the FVC02 DB1, DB2, and THU testing databases, respectively. FRR is defined as the percentage of falsely rejected matches among all genuine pairs, while FAR is defined as the percentage of falsely accepted matches among all imposter pairs. The results show that combining the reconstructed orientation field information with minutiae matching can largely improve the performance, for both Algorithm I and Algorithm II. The FRR is reduced considerably by the fusion scheme compared with minutiae-based matching alone.
Our system is implemented in C on an AMD PC. Compared with using the minutiae-based matching scheme alone, the computational time of the fusion algorithm is slightly longer. The minutiae-based matching time (one-to-one) is about 5 ms. The average additional computation cost for reconstructing the orientation field is about 15 ms, and the additional matching time (one-to-one) is less than 3 ms. This shows the feasibility of utilizing the reconstructed orientation field in real applications.
V. CONCLUSION
The orientation field is important for fingerprint representation. In order to utilize orientation information in automatic fingerprint recognition systems that store only the minutiae feature, we have proposed a novel method to utilize the minutiae for fingerprint recognition, in which the orientation field is reconstructed from the minutiae and used in the matching stage. The proposed algorithm combines an interpolation method with a model-based method to reconstruct the orientation field, and reduces the effect of wrongly detected minutiae. A fingerprint matching based on the reconstructed orientation field is combined with conventional minutiae matching for real applications.
REFERENCES
[1] A. K. Jain, R. Bolle, and S. Pankanti, Eds., Biometrics: Personal Identification in Networked Society. New York: Kluwer, 1999.
[2] S. Pankanti, S. Prabhakar, and A. K. Jain, “On the individuality of
fingerprints,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, pp.
1010–1025, 2002.
[3] J. Gu, J. Zhou, and D. Zhang, “A combination model for orientation
field of fingerprints,” Pattern Recognit., vol. 37, pp. 543–553, 2004.
[4] J. Zhou and J. Gu, “A model-based method for the computation of
fingerprints orientation field,” IEEE Trans. Image Process., vol. 13, pp.
821–835, 2004.
[5] J. Gu, J. Zhou, and C. Yang, “Fingerprint recognition by combining
global structure and local cues,” IEEE Trans. Image Process., vol. 15,
pp. 1952–1964, 2006.
[6] A. Jain, S. Prabhakar, and L. Hong, “A multichannel approach to fin-
gerprint classification,” IEEE Trans. Pattern Anal. Machine Intell., vol.
21, pp. 348–359, 1999.
[7] J. Qi, S. Yang, and Y. Wang, “Fingerprint matching combining the
global orientation field with minutia,” Pattern Recognit. Lett., vol. 26,
no. 15, pp. 2424–2430, 2005.
[8] A. Ross, J. Shah, and A. K. Jain, “From template to image: Recon-
structing fingerprints from minutiae points,” IEEE Trans. Pattern Anal.
Mach. Intell., vol. 29, pp. 544–560, 2007.
[9] S. Sloan, “A fast algorithm for constructing Delaunay triangulations in
the plane,” Adv. Eng. Softw., vol. 9, no. 1, pp. 34–55, 1987.
[10] L. Guibas, D. Knuth, and M. Sharir, “Randomized incremental con-
struction of Delaunay and Voronoi diagrams,” Algorithmica, vol. 7, no.
1, pp. 381–413, 1992.
[11] H. Edelsbrunner, “Incremental topological flipping works for regular
triangulations,” Algorithmica, vol. 15, no. 3, pp. 223–241, 1996.
[12] N. K. Ratha, K. Karu, S. Chen, and A. Jain, “A real-time matching
system for large fingerprint database,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 18, pp. 799–813, 1996.
[13] S. Prabhakar and A. Jain, “Decision-level fusion in fingerprint verifi-
cation,” Pattern Recognit., vol. 35, pp. 861–874, 2002.
[14] D. Maio, D. Maltoni, R. Cappelli, J. Wayman, and A. Jain, “FVC2002:
Second fingerprint verification competition,” in Proc. Int. Conf. Pattern
Recognition, 2002, vol. 16, pp. 811–814.
[15] A. Jain and L. Hong, “On-line fingerprint verification,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 19, pp. 302–314, 1997.