SVC2004: First International Signature
Verification Competition
Dit-Yan Yeung1, Hong Chang1, Yimin Xiong1, Susan George2,
Ramanujan Kashi3, Takashi Matsumoto4, and Gerhard Rigoll5
1Hong Kong University of Science and Technology, Hong Kong
2University of South Australia, Australia
3Avaya Labs Research, USA
4Waseda University, Japan
5Munich University of Technology, Germany
Abstract. The handwritten signature is the most widely accepted biometric
for identity verification. To facilitate objective evaluation and comparison
of algorithms in the field of automatic handwritten signature verification,
we recently organized the First International Signature Verification
Competition (SVC2004) as a step towards establishing common benchmark
databases and benchmarking rules. For each of the two tasks of the
competition, a signature database involving 100 sets of signature data was
created, with 20 genuine signatures and 20 skilled forgeries in each set.
Eventually, 13 teams competed for Task 1 and eight teams competed for Task 2.
When evaluated on data with skilled forgeries, the best team for Task 1 gives
an equal error rate (EER) of 2.84% and that for Task 2 gives an EER of 2.89%.
We believe that SVC2004 has successfully achieved its goals, and the
experience gained from it will be very useful to similar activities in the
future.
1 Introduction
Handwritten signature verification is the process of confirming a user's
identity from his or her handwritten signature, a form of behavioral
biometrics [1–3]. Automatic handwritten signature verification is not a new
problem; many early research attempts were reviewed in the survey papers
[4, 5]. The primary advantage that signature verification has over other
types of biometric technologies is that the handwritten signature is already
the most widely accepted biometric for identity verification in daily use.
The long history of trust in signature verification means that people are
very willing to accept a signature-based biometric authentication system.
However, there has not been any major international effort aimed at comparing
different signature verification methods systematically. As common benchmark
databases and benchmarking rules are routinely used by researchers in such
areas as information retrieval and natural language processing, researchers
in biometrics increasingly see the need for such benchmarks for comparative
studies. For example, fingerprint verification competitions (FVC2000 and
FVC2002) have been organized to attract participants from both academia and
industry to compare their algorithms objectively. Inspired by these efforts,
we recently organized the First International Signature Verification
Competition (SVC2004).
The objective of SVC2004 is to allow researchers and practitioners to com-
pare the performance of different signature verification systems systematically
based on common benchmark databases and benchmarking rules. Since on-line
handwritten signatures collected via a digitizing tablet or some other pen-based
input device can provide very useful dynamic features such as writing speed,
pen orientation and pressure in addition to static shape information, only on-
line handwritten signature verification was included in this competition.
We made it clear to all participants from the very beginning that this event
should not be considered as an official certification exercise, since the databases
used in the competition were acquired in laboratory settings rather than real
environments. Moreover, the performance of a system can vary significantly with
how forgeries are provided. Furthermore, handwritten signature databases are
highly language dependent. Nevertheless, it is hoped that through this exercise,
researchers and practitioners could identify areas where possible improvements
to their algorithms could be made.
2 Participants
The Call for Participation announcement was released on 30 April 2003. By
the registration deadline (30 November 2003), 33 teams (27 from academia and
six from industry) had registered for the competition showing their intention
to participate in either one or both tasks of the competition. Of the 33 teams
registered, 16 teams eventually submitted their programs for Task 1 while 13
teams for Task 2 by the submission deadline (31 December 2003). Some teams
participated in both tasks. One team submitted a program that required
licensed software to run; this team eventually withdrew. We thus ended up
with a total of 15 teams for Task 1 and 12 teams for Task 2. All are academic
teams
from nine different countries (Australia, China, France, Germany, Korea, Sin-
gapore, Spain, Turkey, and United States). Table 1 shows all the participating
teams; nine of them decided to remain anonymous after the results were
announced. Team 19 submitted three separate programs for each task, based on
different algorithms. To distinguish between them when reporting the results,
we use 19a, 19b and 19c as their Team IDs.
3 Signature Databases
3.1 Database Design
SVC2004 consists of two separate signature verification tasks using two different
signature databases. The signature data for the first task contain coordinate
information only, while the signature data for the second task also contain
additional information, including pen orientation and pressure. The first
task is suitable for
Table 1. SVC2004 participating teams

Team ID  Institution                                  Country    Member(s)                 Task(s)
3                                                     Australia  V. Chandran               1 & 2
4        anonymous                                                                         1 & 2
6        Sabanci University                           Turkey     Alisher Kholmatov,        1 & 2
                                                                 Berrin Yanikoglu
8        anonymous                                                                         2
9        anonymous                                                                         1 & 2
12       anonymous                                                                         1
14       anonymous                                                                         1 & 2
15       anonymous                                                                         1
16       anonymous                                                                         1
17       anonymous                                                                         1 & 2
18       anonymous                                                                         1 & 2
19       Biometrics Research Laboratory,              Spain      Julian Fierrez-Aguilar,   1 & 2
         Universidad Politecnica de Madrid                       Javier Ortega-Garcia
24       Fraunhofer Institut Sichere Telekooperation  Germany    Miroslav Skrbek           1
26       State University of New York at Buffalo      USA        Aihua Xu,                 1
                                                                 Sargur N. Srihari
29       Institut National des Télécommunications     France     Bao Ly Van,               2
                                                                 Sonia Garcia-Salicetti,
                                                                 Bernadette Dorizzi
on-line signature verification on small pen-based input devices such as
personal digital assistants (PDAs), and the second task for digitizing
tablets.
Each database has 100 sets of signature data. Each set contains 20 genuine
signatures from one signature contributor and 20 skilled forgeries from at least
four other contributors. Unlike the case with physiological biometrics, the
use of skilled forgeries for evaluation is crucial for behavioral biometrics
such as handwritten signatures. Of the 100 sets of signature data, only the
first 40 sets were released
(on 25 October 2003) to participants for developing and evaluating their systems
before submission (by 31 December 2003). While the first 40 sets for the two
tasks are totally different, the other 60 sets (not released to participants) are the
same except that the pen orientation and pressure attributes are missing in the
signature data for Task 1. Although both genuine signatures and skilled
forgeries were made available to participants, user enrollment during system
evaluation accepted only five genuine signatures from each user; multiple
such sets of five were used across multiple runs. Skilled forgeries were not
used during the enrollment process; they were used only in the matching
process for system performance evaluation. Evaluation of signature
verification performance for each user started only after all users had been
enrolled.
Therefore, participants could make use of genuine signatures from other users
to improve the verification accuracy for a user if they so wished.
3.2 Data Collection
Each data contributor was asked to contribute 20 genuine signatures. For privacy
reasons, the contributors were advised not to use their real signatures in daily
use. Instead, they were suggested to design a new signature and to practice the
4
writing of it sufficiently so that it remained relatively consistent over different
signature instances, just like real signatures. Contributors were also reminded
that consistency should not be limited to spatial consistency in the signature
shape but should also include temporal consistency of the dynamic features.
In the first session, each contributor contributed 10 genuine signatures.
Contributors were advised to write naturally on the digitizing tablet (a
WACOM Intuos tablet) as if they were enrolling in a real signature
verification system, and to practice thoroughly before the actual data
collection started. Moreover, contributors were given the option of rejecting
a signature instance if they were not satisfied with it. In the second
session, normally held at least one week after the first, each contributor
came again to contribute another 10 genuine signatures.
The skilled forgeries for each data contributor were provided by at least
four other contributors in the following way. Using a software viewer, a
contributor could see the genuine signatures that he or she was trying to
forge; the viewer could replay the writing sequence of the signatures on the
computer screen. Contributors were advised to practice the skilled forgeries
a few times until they were confident enough to proceed to the actual data
collection.
The signatures are mostly in either English or Chinese. Although most of
the data contributors are Chinese, many of them actually use English signatures
frequently in daily applications.
3.3 Signature Files
Each signature is stored in a separate text file. The naming convention of
the files is UxSy, where x is the user ID and y is the signature ID. Genuine
signatures correspond to y values from 1 to 20 and skilled forgeries to
values from 21 to 40. However, random re-numbering was performed during the
evaluation process to prevent class information from being revealed by the
file names.
In each signature file, the signature is represented as a sequence of points.
The first line stores a single integer which is the total number of points in the
signature. Each of the following lines corresponds to one point characterized by
features listed in the following order (the last three features are missing in the
signature files for the first task): x-coordinate, y-coordinate, time stamp, button
status, azimuth, altitude, and pressure.
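This layout can be parsed directly. The following sketch (the function and
field names are our own, not part of the SVC2004 distribution) reads one
signature file into a list of per-point feature dictionaries:

```python
def read_signature(path, task=2):
    """Parse an SVC2004 signature file into a list of per-point dicts.

    Task 1 files have 4 columns (x, y, timestamp, button status);
    Task 2 files append azimuth, altitude, and pressure.
    """
    fields = ["x", "y", "timestamp", "button"]
    if task == 2:
        fields += ["azimuth", "altitude", "pressure"]
    with open(path) as f:
        n_points = int(f.readline())            # first line: point count
        points = [dict(zip(fields, map(int, f.readline().split())))
                  for _ in range(n_points)]
    return points
```

Dynamic features such as writing speed can then be derived from the
timestamps and coordinates of consecutive points.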
4 Performance Evaluation
4.1 Testing Protocol
Both tasks used the same code submission scheme. For each task, each team was
required to submit two executable files, one for performing enrollment and the
other for matching. Executable files were for the Windows platform and could
run in command-line mode without any graphical user interface.
The testing protocol is as follows. Each program was evaluated on two sig-
nature databases. The first database, which was released to the participants,
consists of genuine signatures and skilled forgeries for 40 users. The second
database consists of similar signature data for 60 users and was not released
to the participants. For each user from either database, 10 trials were run
based on 10 different random subsets of five genuine signatures each from files
S1-S10 for enrollment. After each enrollment trial, the program was evaluated on
10 genuine signatures (S11-S20), 20 skilled forgeries (S21-S40), and 20 random
forgeries selected randomly from genuine signatures of 20 other users. Whenever
randomness was involved, the same random sets were used for all teams.
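The per-user trial loop above can be sketched as follows. The `enroll` and
`match` callables are hypothetical stand-ins for each team's two submitted
executables; the signature-ID conventions follow Section 3.3.

```python
import random

def run_user_trials(user, all_users, enroll, match, rng, n_trials=10):
    """Sketch of the SVC2004 per-user evaluation protocol.

    enroll(user, sig_ids) -> template
    match(template, user, sig_id) -> similarity score in [0, 1]
    S1-S10: enrollment pool; S11-S20: genuine tests; S21-S40: skilled forgeries.
    """
    trials = []
    for _ in range(n_trials):
        enroll_ids = rng.sample(range(1, 11), 5)        # 5 of S1-S10
        template = enroll(user, enroll_ids)
        genuine = [match(template, user, s) for s in range(11, 21)]
        skilled = [match(template, user, s) for s in range(21, 41)]
        # 20 random forgeries: one genuine signature from each of 20 other users
        forgers = rng.sample([u for u in all_users if u != user], 20)
        random_f = [match(template, u, rng.randint(1, 20)) for u in forgers]
        trials.append((genuine, skilled, random_f))
    return trials
```

Seeding the `random.Random` instance once and reusing it for every team
mirrors the requirement that the same random sets be used for all teams.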
For each signature tested, a program is expected to report a similarity score,
between 0 and 1, which indicates the similarity between the signature and the
corresponding template. The larger the value is, the more likely the signature
tested will be accepted as genuine. Based on these similarity scores, we
computed false rejection rates (FRR) and false acceptance rates (FAR) for
different threshold values. Equal error rates (EER) and receiver operating
characteristic (ROC) curves were then obtained separately for skilled
forgeries and random forgeries.
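A minimal EER computation from a set of similarity scores can be sketched as
a simple threshold sweep (real evaluations may interpolate between
thresholds; this is an illustration, not the competition's exact code):

```python
def eer(genuine_scores, forgery_scores):
    """Estimate the equal error rate from similarity scores in [0, 1].

    FRR(t): fraction of genuine scores below threshold t (rejected).
    FAR(t): fraction of forgery scores at or above t (accepted).
    Returns the mean of FRR and FAR at the threshold minimizing |FRR - FAR|.
    """
    best = None
    for t in sorted(set(genuine_scores) | set(forgery_scores)):
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in forgery_scores) / len(forgery_scores)
        if best is None or abs(frr - far) < best[0]:
            best = (abs(frr - far), (frr + far) / 2)
    return best[1]
```

Sweeping the threshold over the full score range also yields the FRR/FAR
pairs that trace out the ROC curves reported in Figures 1 and 2.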
4.2 Results
The programs of some teams encountered problems during the evaluation
process; in particular, they failed to report similarity scores for some
input signatures. For fairness of comparison, EER statistics and ROC curves
are not reported for these programs. Besides reporting the average EER over
all users and all 10 trials for each team, we also report the standard
deviation (SD) and maximum EER values.
Tables 2 and 3 show the EER results for both tasks evaluated on signature
data from the 60 users not released to participants. Figures 1 and 2 show the
corresponding ROC curves for the evaluation with skilled forgeries. The
results of some teams (Teams 3 and 9 for Task 1 and Teams 3, 9 and 29 for
Task 2) are not included in the tables since their programs failed to report
similarity scores for some signatures. For both tasks, Team 6 from Sabanci
University, Turkey, gives the lowest average EER values when tested with
skilled forgeries. Due to the page limit, some results are not included in
this paper; readers are referred to
http://www.cs.ust.hk/svc2004/results.html for more details.
5 Discussions
We have noticed that the EER values tend to have relatively large variations,
as can be seen from the SD values. While behavioral biometrics generally
exhibit larger intra-class variations than physiological biometrics, we
speculate that this is at least partially attributable to the way in which
the signature databases were created for SVC2004. Specifically, the
signatures are not the real signatures of the data contributors. Although
they were asked to practice thoroughly before signature collection, larger
variations than expected were still observed.
Table 2. EER statistics for Task 1 (60 users)

          10 genuine signatures +          10 genuine signatures +
          20 skilled forgeries             20 random forgeries
Team ID   Average    SD       Maximum      Average    SD       Maximum
6          2.84%    5.64%     30.00%        2.79%    5.89%     50.00%
24         4.37%    6.52%     25.00%        1.85%    2.97%     15.00%
26         5.79%   10.30%     52.63%        5.11%    9.06%     50.00%
19b        5.88%    9.21%     50.00%        2.12%    3.29%     15.00%
19c        6.05%    9.39%     50.00%        2.13%    3.29%     15.00%
15         6.22%    9.38%     50.00%        2.04%    3.16%     15.00%
19a        6.88%    9.54%     50.00%        2.18%    3.54%     22.50%
14         8.77%   12.24%     57.14%        2.93%    5.91%     40.00%
18        11.81%   12.90%     50.00%        4.39%    6.08%     40.00%
17        11.85%   12.07%     70.00%        3.83%    5.66%     40.00%
16        13.53%   12.99%     70.00%        3.47%    6.90%     52.63%
4         16.22%   13.49%     66.67%        6.89%    9.20%     48.57%
12        28.89%   15.95%     80.00%       12.47%   10.29%     55.00%
Table 3. EER statistics for Task 2 (60 users)

          10 genuine signatures +          10 genuine signatures +
          20 skilled forgeries             20 random forgeries
Team ID   Average    SD       Maximum      Average    SD       Maximum
6          2.89%    5.69%     30.00%        2.51%    5.66%     50.00%
19b        5.01%    9.06%     50.00%        1.77%    2.92%     10.00%
19c        5.13%    8.98%     51.00%        1.79%    2.93%     10.00%
19a        5.91%    9.42%     50.00%        1.70%    2.86%     10.00%
14         8.02%   10.87%     54.05%        5.19%    8.57%     52.63%
18        11.54%   12.21%     50.00%        4.89%    6.65%     45.00%
17        12.51%   13.01%     70.00%        3.47%    5.53%     30.00%
4         16.34%   14.00%     61.90%        6.17%    9.24%     50.00%
We have also noticed that the results for Task 1 are generally slightly
better than those for Task 2. This seems to imply that the additional dynamic
information, including pen orientation and pressure, is not useful and may
even impair performance. While conflicting results have been reported in the
literature, we believe this is again due to the way our signature data were
collected, as discussed above: the consistency of pen orientation and
pressure across instances is likely lower than that of the other dynamic
information used for Task 1.
From these findings, we are further convinced that establishing benchmark
databases that faithfully reflect the nature of signature data found in real-world
applications is of great importance to the research community. We hope SVC2004
can facilitate collaborative efforts in establishing such databases before long.
More performance criteria may be considered in the future. While this
competition considers only accuracy as measured by EER, it would be useful,
particularly from the application perspective, to include other criteria such
as running time. Moreover, we may also allow a program to reject a signature
during the enrollment and/or testing phase.
[Figure: average ROC curves (FRR vs. FAR) for 10 genuine signatures and 20
skilled forgeries, 60 users; one curve per participating team.]

Fig. 1. ROC curves for Task 1 (60 users)
[Figure: average ROC curves (FRR vs. FAR) for 10 genuine signatures and 20
skilled forgeries, 60 users; one curve per participating team.]

Fig. 2. ROC curves for Task 2 (60 users)
References
1. V.S. Nalwa. Automatic on-line signature verification. Proceedings of the IEEE,
85(2):215–239, 1997.
2. A. Jain, R. Bolle, and S. Pankanti. Biometrics: Personal Identification in Networked
Society. Kluwer Academic Publishers, Boston, MA, USA, 1999.
3. A.K. Jain, F.D. Griess, and S.D. Connell. On-line signature verification. Pattern
Recognition, 35(12):2963–2972, 2002.
4. R. Plamondon and G. Lorette. Automatic signature verification and writer identi-
fication – the state of the art. Pattern Recognition, 22(2):107–131, 1989.
5. F. Leclerc and R. Plamondon. Automatic signature verification: the state of the art –
1989–1993. International Journal of Pattern Recognition and Artificial Intelligence,
8(3):643–660, 1994.