Figure 1 - uploaded by James Wayman
The General Biometric System 

Contexts in source publication

Context 1
... are shown as Figure 1. The results of Vendor A are interesting in that they were independent of the chosen threshold over a large range of threshold values. Vendor A had no false matches at any reasonable threshold, so increasing the threshold had no effect on the FMR. Genuine distributions are often bimodal, with the second mode coincident with the single mode of the impostor distribution. The distributions of Vendor A were disjoint, except for the overlap of the second mode of the genuine distribution with the mode of the impostor distribution. Therefore, decreasing the decision threshold had no impact on the false non-match rate until the threshold was well inside the impostor distribution, thus driving the false match rate sky high. So for all reasonable values of the threshold, the number of false matches remained at zero with about 2% false non-matches. It can be stated with 95% statistical confidence that the false match rate of Vendor A was under 0.01%. It might be that the false match rate is even lower, but the lack of returned match scores prevents us from making that determination. Vendor B returned 16,000,000 cross-comparisons with only one false match, indicating a 95% statistical confidence of a false match rate of under 3 in 10 million (3×10⁻⁷), but with a false non-match rate approaching ...
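The quoted bounds follow from the exact binomial (Clopper-Pearson) argument; with zero observed errors it reduces to the familiar "rule of three" (roughly 3/N). A minimal sketch using scipy — the Vendor B figures come from the text above, while the 30,000-comparison zero-error case is purely illustrative, since Vendor A's comparison count is not given in the excerpt:

```python
from scipy.stats import beta

def fmr_upper_bound(errors, n_comparisons, confidence=0.95):
    """One-sided exact (Clopper-Pearson) upper bound on an error rate
    observed as `errors` failures in `n_comparisons` independent trials."""
    if errors == n_comparisons:
        return 1.0
    return beta.ppf(confidence, errors + 1, n_comparisons - errors)

# Vendor B: one false match in 16,000,000 cross-comparisons (from the text).
print(fmr_upper_bound(1, 16_000_000))   # ~3e-7, i.e. about 3 in 10 million

# Zero observed errors: the bound reduces to the "rule of three" (~3/N).
# 30,000 comparisons is an illustrative count, not Vendor A's actual number.
print(fmr_upper_bound(0, 30_000))       # ~1e-4, i.e. under 0.01%
```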
Context 2
... but not all, biometric systems collect data at one location but store and/or process it at another. Such systems require data transmission. If a great amount of data is involved, compression may be required before transmission or storage to conserve bandwidth and storage space. Figure 1 shows compression and transmission occurring before the signal processing and image storage. In such cases, the transmitted or stored compressed data must be expanded before further use. The process of compression and expansion generally causes quality loss in the restored signal, with loss increasing with increasing compression ratio. The compression technique used will depend upon the biometric signal. An interesting area of research is in finding, for a given biometric technique, compression methods with minimum impact on the signal processing ...
Context 3
... an AFIS system, submitted fingerprints are binned, then compared only to enrolled prints placed in similar (communicating) bins. We might hypothesize that there is a greater probability for prints in communicating bins to be falsely matched than for prints in non-communicating bins. We computed the ROC for the test fingerprints in three ways: comparing communicating impostors only, comparing non-communicating impostors only, and comparing all impostors. Figures 1 and 2 show three ROCs each for right and left thumb comparisons. We note that the false match rate for the communicating comparisons is almost an order of magnitude greater than for the non-communicating comparisons at some points in the ROC. ...
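A minimal sketch of how the three false-match-rate curves could be computed from impostor comparison scores tagged by bin relationship; the scores below are synthetic stand-ins, not the test data:

```python
import numpy as np

def false_match_rate(impostor_scores, thresholds):
    """Fraction of impostor comparisons declared a match at each threshold.
    Scores are treated as distances: a comparison matches when score <= t."""
    impostor_scores = np.asarray(impostor_scores)
    return np.array([(impostor_scores <= t).mean() for t in thresholds])

# Hypothetical impostor distances, tagged as coming from communicating bins
# (True) or non-communicating bins (False); communicating pairs are made
# slightly closer here, mirroring the effect described in the text.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(4.0, 1.0, 5_000),    # communicating pairs
                         rng.normal(5.0, 1.0, 20_000)])  # non-communicating pairs
communicating = np.concatenate([np.ones(5_000, bool), np.zeros(20_000, bool)])

thresholds = np.linspace(0.0, 8.0, 50)
fmr_comm = false_match_rate(scores[communicating], thresholds)
fmr_non  = false_match_rate(scores[~communicating], thresholds)
fmr_all  = false_match_rate(scores, thresholds)
```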
Context 4
... "probability density" is a mathematical function which allows us to compute the chances of a measure lying between two values. A "bell curve" is an example of such a probability density. If student test scores follow a bell curve, the probability of a student's score lying between 50 and 60, for instance, is equal to the area under the curve between the score values of 50 and 60. Figure 1 illustrates this ...
Context 5
... this point we should mention fingerprint "filtering", often confused with "binning" because its goals are the same. "Filtering" involves additional partitioning of the database based on information, such as gender or age of the customer, which is not contained in the fingerprint image itself. Identification of the finger ("right thumb", for instance) cannot be made based on the fingerprint pattern, so the partitioning of the database by finger, as done in all AFIS, is a filtering process. Because filtering is based on exogenous information, it is not part of the signal processing process, but rather, is part of the data collection process accompanying the sampling of the customers. Flow of this information is not shown in Figure ...
Context 6
... We know of no existing or proposed civilian systems accepting more than four fingers, nor systems using fingers other than the forefingers and thumbs. Social service systems generally use forefingers because of the perceived association of thumb print collection with criminal investigations. Driver's licensing systems within the U.S. use either thumbs or forefingers, depending upon State [8]. H.R. 2202, Section 111.c.1.C, forbids law enforcement use of the "new verification system" except for direct enforcement of the provisions of the Act. State laws limiting law enforcement access to driver's licensing fingerprint records exist in California, Texas, and Georgia, as well. Figure 1 shows a generic biometric identification system. In previous papers [9, 10], we discussed this diagram in detail. In this section, we will focus on the storage, signal processing, matching, and decision policy subsystems in civilian ...
Context 7
... function of the pattern matching module in Figure 1 is to send to the decision subsystem a positive, scalar measure, D, for every comparison of a sample to a template. We can presume, without loss of generality, that D increases with increasing difference between sample and template. We will loosely call this measure a "distance", recognizing that it will technically be such only if resulting from a vector comparison in a metric space. The general biometric system does not require that sample and template features compose such a space [3] ...
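A minimal sketch of such a dissimilarity measure D; the function names and the choice of Euclidean or Hamming distance are illustrative, not the matcher of any particular vendor:

```python
import numpy as np

def dissimilarity(sample, template):
    """A positive scalar D that grows as sample and template differ.
    Euclidean distance is a true metric; other measures (e.g. 1 - similarity)
    need not be, which is why the text calls D a "distance" only loosely."""
    sample, template = np.asarray(sample, float), np.asarray(template, float)
    return float(np.linalg.norm(sample - template))

def hamming_dissimilarity(sample_bits, template_bits):
    """For binary {0,1} feature components: the count of disagreeing bits."""
    return int(np.sum(np.asarray(sample_bits) != np.asarray(template_bits)))
```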
Context 8
... biometric devices rely on widely different technologies, much can be said about them in general. Figure 1 shows a generic biometric identification system, divided into five sub-systems: data collection, transmission, signal processing, decision and data storage. The key subsystems ...
Context 9
... each N and each method, the 600 ROCs were sorted at each threshold to empirically establish the 0.025 and 0.975 quantile limits on their values. The region between these values corresponds to the 95% confidence bound. This approach to interval estimation is not very robust and may lead to substantial variation in estimates depending upon the particular 600 trials used. Figure 1 shows that the mean ROC over the 600 trials closely approximates the "true" ROC for each N. Figures 2-5 show good agreement between the sampled confidence intervals on the false non-match rate and those calculated from (1) over the N independent comparisons. This verifies that our data editing produced the equivalent of independent transactions at a fixed error rate. This seems to support the claim that cross-comparisons produce unbiased estimates of the threshold-dependent false match error ...
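As a rough illustration of this percentile procedure, the sketch below (numpy; the 600×20 array of per-trial error rates is simulated, not the test data) sorts the trial values at each threshold and reads off the empirical 2.5% and 97.5% points:

```python
import numpy as np

def empirical_95pct_band(trial_error_rates):
    """trial_error_rates: shape (n_trials, n_thresholds), one error-rate
    estimate per trial and threshold.  Returns the empirical limits bounding
    the central 95% of trials at each threshold."""
    lower = np.percentile(trial_error_rates, 2.5, axis=0)
    upper = np.percentile(trial_error_rates, 97.5, axis=0)
    return lower, upper

# Stand-in for 600 trial ROCs evaluated at 20 thresholds.
rng = np.random.default_rng(1)
trials = rng.beta(2, 50, size=(600, 20))
lo, hi = empirical_95pct_band(trials)
mean_roc = trials.mean(axis=0)   # mean over trials, to compare with the "true" ROC
```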
Context 10
... "High/Low" abstraction is not the only way of viewing the required engineering trade-offs. Figure 1 depicts the overall architecture of a generic biometric API and suggests functions and interfaces that can and should be standardized. The four main components which would readily benefit from APIs include: ? Applications: all common applications, such as "Database Management Systems", or network servers, which might benefit by adding biometric authentication and identification (A&I) ...
Context 11
... 10-11 show very poor agreement between the binomial confidence interval on the false match rate and sampling tests when the ½N(N-1) technique is used. For brevity, only the N=50 and N=100 cases are graphed, but the N=200 and N=400 differences between binomial and sampling uncertainty bounds are even more pronounced. Use of (1), with N taken as the total number of comparisons, grossly underestimates the expected uncertainty. Only the N=50 and N=400 cases are shown for brevity, and Figure 13 is rescaled for clarity. Comparing with Figures 6 and 7, we see that the sampling confidence interval decreases significantly when cross-comparisons are used. Therefore, the binomial confidence interval calculated with N taken as the number of samples overestimates the uncertainty in the false match rate when cross comparisons are ...
Context 12
... information is often available to the system that can be used to limit the number of required comparisons. This information path is not indicated in Figure 1, as it can be highly variable between systems. In the case of "verification", the additional information may be a "claimed identity" in the form of a name or an identification number narrowing the comparison to a single stored pattern; or the database may be on a "smart card" containing but a single enrolled template. In the case of "identification", the number of required comparisons may be limited by external information, such as the age of the customer, or by information endogenous to the biometric sample itself, such as the fingerprint pattern type. In any case, the actual activity of the signal processing system is exactly the same: extraction of the feature vector, checking of quality, and comparison of the feature vector to some number of enrolled ...
Context 13
... arise from the data collection sub-system of Figure 1, perhaps owing to random variations in the biometric pattern, pattern presentation or the sensor. Errors owing to the transmission or compression processes of the transmission sub-system of Figure 1 may also be important. Assuming the errors to be uncorrelated, we can write ...
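The excerpt is cut off at "we can write"; under the stated assumption of uncorrelated error sources, the generic form of such a relation is that the individual variances add, for example (an illustrative reconstruction, not necessarily the paper's exact equation):

```latex
\sigma^{2}_{\text{total}} \;=\; \sigma^{2}_{\text{data collection}} \;+\; \sigma^{2}_{\text{transmission}}
```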
Context 14
... arise from the data collection sub-system of Figure 1, perhaps owing to random variations in the biometric pattern, pattern presentation or the sensor. Errors owing to the transmission or compression processes of the transmission sub-system of Figure 1 may also be important. Assuming the errors to be uncorrelated, we can write ...
Context 15
... determine the effect of using the inter-template score histogram as a proxy for the impostor histogram, we created a simulation model. We started with the random selection from the experimental data set of 300 nine-dimensional template vectors. We took these as our "anchors". Around each of these anchors in 9-space, we created 150 simulated samples by adding a Gaussian variable to each component using the isotropic assumption (identical variance for each component's Gaussian error model). We have no information available upon which to evaluate these distributional assumptions, but the error variance was set so that the "genuine" histogram would look approximately like that of the INSPASS histograms of Figure 1. Three of these samples at each anchor were added to create a simulated template. The "genuine" score distribution was calculated by comparing, with the RSI algorithm, the simulated samples to simulated "self" templates. The inter-template score histogram was also computed by comparing simulated templates. The sample vector (genuine) histogram was computed using the RSI algorithm for score assessment. The comparison scores between the sample vectors and randomly chosen "non-self" templates were also computed and used to create an "impostor" ...
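A minimal sketch of this simulation (numpy); the anchors here are random rather than drawn from the experimental data, the error variance is arbitrary, and Euclidean distance stands in for the RSI comparison algorithm, which is not available:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, N_ANCHORS, N_SAMPLES, SIGMA = 9, 300, 150, 0.5   # SIGMA is illustrative

anchors = rng.uniform(0, 10, size=(N_ANCHORS, DIM))    # stand-in for real templates
samples = anchors[:, None, :] + rng.normal(0, SIGMA, size=(N_ANCHORS, N_SAMPLES, DIM))

# Simulated template per anchor, built from three of its samples (their mean is
# used here as a stand-in for however the three samples were combined).
templates = samples[:, :3, :].mean(axis=1)

def score(a, b):
    """Stand-in comparison score; the study used the vendor's RSI algorithm."""
    return np.linalg.norm(a - b)

# Genuine scores: samples compared to their own ("self") templates.
genuine = np.array([score(samples[i, j], templates[i])
                    for i in range(N_ANCHORS) for j in range(3, N_SAMPLES, 10)])

# Impostor scores: samples compared to randomly chosen non-self templates.
others = (np.arange(N_ANCHORS) + rng.integers(1, N_ANCHORS, N_ANCHORS)) % N_ANCHORS
impostor = np.array([score(samples[i, 5], templates[others[i]]) for i in range(N_ANCHORS)])

# Inter-template scores: templates compared with each other, as the proxy histogram.
inter_template = np.array([score(templates[i], templates[others[i]]) for i in range(N_ANCHORS)])
```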
Context 16
... features extracted by the signal processing sub-system of Figure 1 are generally vectors in a real or complex [6] metric space, with components generally taking on integer values over a bounded domain. In some systems [7], the domain of each component is restricted to the binary values of {0,1}. Fingerprint systems are the primary exception to this rule, using features not in a vector space. In this chapter, we will suppose that the components are any real ...
Context 17
... acquired and possibly transmitted a biometric characteristic, we must prepare it for matching with other like measures. Figure 1 divides the signal processing subsystem into three tasks: feature extraction, quality control, and pattern ...
Context 18
... plot of I_x(a,b) for various values of (a,b) is given in figure 1. If both a and b are appreciably greater than one, then I_x(a,b) is very nearly zero for small values of x, then rises very sharply at about x = a/(a+b) to very nearly unity. This is suggested by the graph for (a,b) = (8,10) in figure ...
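I_x(a,b) is the regularized incomplete beta function, so the behaviour described can be checked directly with scipy, e.g. for (a,b) = (8,10):

```python
from scipy.special import betainc

a, b = 8, 10
for x in (0.1, 0.3, a / (a + b), 0.6, 0.8):
    print(f"I_{x:.3f}({a},{b}) = {betainc(a, b, x):.4f}")
# Values stay near 0 for small x, pass roughly 0.5 near x = a/(a+b) ≈ 0.444,
# and approach 1 soon after — the sharp rise described in the text.
```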
Context 19
... plot of I_x(a,b) for various values of (a,b) is given in figure 1. If both a and b are appreciably greater than one, then I_x(a,b) is very nearly zero for small values of x, then rises very sharply at about x = a/(a+b) to very nearly unity. This is suggested by the graph for (a,b) = (8,10) in figure ...
Context 20
... the increasing literature on biometric identification technologies [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], including taxonomies of both applications and technologies [1,3], there has been no general description of the biometric system. Primary to the development of interface standards, standardized test methodologies and generalized performance equations, is an understanding of the normative system model. Such a model can illuminate the common structures and parallelisms between seemingly disparate methodologies. Certainly not all biometric technologies will fit any single model, but tremendous insight can be gained by noting where individual systems differ from the norm. Figure 1 shows a system diagram of the general biometric system. Five sub-systems are shown: data collection, transmission, signal processing, storage and decision. To first order approximation only, these sub-systems can be considered independent, with errors introduced at each to be independent and additive. At a more comprehensive level of analysis, these sub-systems will not be independent, with errors impacting sub-system performance downstream. In testing biometric devices, it will generally be easiest to test sub-systems independently, when possible. In the following sections, we will describe each sub-system in ...
Context 21
... problem with operational data is in creating the impostor distribution. Referring to Figure 1, the general biometric system stores feature templates in the database and, rarely, compressed samples, as well. If samples of all transactions are stored, our problems are nearly solved. Using the stored samples under the assumption that they are properly labeled (no impostors) and represent "good faith" efforts to use the system (no players, pranksters or clowns), we can compare the stored samples with non-like templates, in "off-line" computation, to create the impostor ...
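A minimal sketch of this off-line cross-comparison; the data structures and the Euclidean placeholder matcher are illustrative assumptions, not the system's actual algorithm:

```python
import numpy as np

def impostor_scores(stored_samples, templates, matcher):
    """stored_samples, templates: dicts mapping identity -> feature vector.
    Returns scores from comparing every stored sample against every
    non-like template (same-identity, i.e. genuine, pairs are skipped)."""
    scores = []
    for sample_id, sample in stored_samples.items():
        for template_id, template in templates.items():
            if template_id != sample_id:
                scores.append(matcher(sample, template))
    return np.array(scores)

# Placeholder matcher; a real system would use its own comparison algorithm.
euclidean = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
# e.g.: impostor_dist = impostor_scores(samples_by_id, templates_by_id, euclidean)
```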

Citations

... Biometric based techniques are the most promising option for recognizing individuals. "Biometric technologies" are automated methods of verifying or recognizing the identity of a living person based on a physiological or behavioural characteristic [24,16]. Different biometrics currently used for automatic identification include fingerprints, voice, iris, retina, hand, face, handwriting, keystroke, finger shape, DNA, gait, signature and palm print, etc. ...
... Biometrics offers a natural identity management tool that is characterized by greater security strength, robustness, speed, effectiveness and convenience than the traditional methods of personal recognition [2]. This is because in biometrics, the identification or identity verification of a person is based on the physiological and behavioral characteristics of the person [3] [4]. Biometric security is advantageous as every individual has unique traits that cannot be forged, stolen or lost [2]. ...
Conference Paper
Full-text available
Today, several institutions of higher learning are using access cards as an access control measure to gain access to their institutions and facilities. Though these cards are simple and convenient in terms of usage, they offer the lowest security strength, as they are often prone to loss, theft, forgetting and cloning. If compromised, valuable information and assets can be stolen or destroyed. However, every institution's security goal is to protect the students, staff, information and assets. Thus, to strengthen the security level, institutions should provide a security measure that is difficult, if not impossible, to compromise. This paper therefore proposes an approach to reinforce security in universities using biometric authentication. We designed and implemented a system prototype called the Institutional Biometric Authentication System (IBAS) to provide security to students, staff and assets. Additionally, IBAS is generic and can be used to manage attendance, prevent impersonation and provide other valuable benefits.
... Examples of such investigations follow. Note that many of these studies use a figure of merit called the equal-error rate (EER) [33]. This is a one-number summary of how well a detection system performs, derived from an ROC curve by noting the point at which the false-alarm rate is the same as the miss rate. ...
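A minimal sketch of reading an EER off a threshold sweep; the genuine and impostor scores below are simulated, and scores are treated as distances:

```python
import numpy as np

def equal_error_rate(genuine, impostor, thresholds):
    """Scores treated as distances (a comparison matches when score <= t).
    Returns the threshold and rate where false-alarm and miss rates cross."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.array([(impostor <= t).mean() for t in thresholds])   # false alarms
    frr = np.array([(genuine > t).mean() for t in thresholds])     # misses
    i = np.argmin(np.abs(far - frr))
    return thresholds[i], (far[i] + frr[i]) / 2

rng = np.random.default_rng(3)
t_eer, eer = equal_error_rate(rng.normal(1.0, 0.5, 2_000),   # simulated genuine scores
                              rng.normal(3.0, 0.7, 2_000),   # simulated impostor scores
                              np.linspace(0, 5, 200))
```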
Conference Paper
Keystroke dynamics is the process of identifying individual users on the basis of their typing rhythms, which are in turn derived from the timestamps of key-press and key-release events in the keyboard. Many researchers have explored this domain, with mixed results, but few have examined the relatively impoverished territory of digits only, particularly when restricted to using a single finger - which might come into play on an automated teller machine, a mobile phone, a digital telephone dial, or a digital electronic security keypad at a building entrance. In this work, 28 users typed the same 10-digit number, using only the right-hand index finger. Employing statistical machine-learning techniques (random forest), we achieved an unweighted correct-detection rate of 99.97% with a corresponding false-alarm rate of 1.51%, using practiced 2-of-3 encore typing with outlier handling. This level of accuracy approaches sufficiency for two-factor authentication for passwords or PIN numbers.
... cost of equipment, staff training, facilities, and web or phone-based user assistance) should be considered for a complete cost analysis. or physiological characteristics [42]. VideoTicket can be classified as a biometric verification scheme, whereby system users assert an identity that needs to be verified. ...
Article
Identity fraud (IDF) may be defined informally as exploitation of credential information using some form of impersonation or misrepresentation of identity, in the context of transactions. Thus, IDF may be viewed as a combination of two old problems: user authentication and transaction authorization. We propose an innovative approach to detect IDF attempts, by combining av-certificates (digitally-signed audiovisual recordings in which users identify themselves) with av-signatures (audiovisual recordings showing users' explicit consent for unique transaction details). Av-certificates may be used in on-site transactions, to confirm user identity. In the case of remote (e.g. web-based) transactions, both av-certificates and av-signatures may be used to authenticate users and verify their consent for transaction details. Conventional impersonation attacks, whereby credentials (e.g. passwords, biometrics, or signing keys) are used without the consent of their legitimate users, fail against VideoTicket. The proposed solution assumes that identity thieves have access to such credentials.
... In this section, we discuss the rationale for choosing the points of comparison from each test, which includes issues of how quality, enrollment and acquisition, glasses, and timing between visits were handled. Table I summarizes the sensors; algorithms; and the total number of subjects, biometric samples (both genuine and impostor), genuine comparison scores, and impostor comparison scores used in each of the tests reported in this analysis (in Tables IV and V). Table II summarizes key properties of the evaluations. The following evaluation properties were originally introduced in a taxonomy of biometric applications by Wayman [11]: cooperative versus non-cooperative users, public versus private users, overt versus covert capture, attended versus unattended applications, habituated versus non-habituated, standard versus non-standard environment, and open versus closed systems. We have used some of these categories for comparison of the three studies in Table II, and we included categories to describe whether there was user training, the minimum number of successful samples required to enroll a subject, how many samples were required to meet a quality score threshold, the recognition mode used in the offline testing, the median or mean and the max time between collection of the first and 2nd samples, and whether samples were collected with or without glasses. ...
Conference Paper
Iris recognition has long been widely regarded as a highly accurate biometric, despite the lack of independent, large-scale testing of its performance. Recently, however, three third-party evaluations of iris recognition were performed. This paper compares and contrasts the results of these independent evaluations. We find that despite differences in methods, hardware, and/or software, all three studies report error rates of the same order of magnitude: observed false non-match rates (FNMRs) from 0.0122 to 0.03847 at a false match rate (FMR) of 0.001. Further, the differences between the best performers' error rates are an order of magnitude smaller than the observed error rates.
... 1, presents a challenge to almost any large modern organization. Biometrics authentication has been defined as " automatic identification or identity verification of an individual based on physiological and behavioral characteristics " [2]. This paper has not been revised and corrected according to reviewers comments Copyright PARS'07. ...
Article
Full-text available
Contemporarily, the internet has been heavily used for electronic commerce, especially in the areas of finance and banking. The transactions of finance and banking on the internet involve use of the handwritten signature as a symbol for consent and authorization. The handwritten signature is one of the biometric techniques that are widely accepted as a personal attribute for identity verification. Hence, it is vital to have an online handwritten signature verification system that is fast, reliable and accurate to avoid attempts to forge handwritten signatures, which have resulted in heavy losses for various financial institutions. This paper presents the implementation of an online handwritten signature verification system (OHSV) using dynamic features as the discriminators. It will describe the functions and modules of the system, explain the approach used, and discuss the performance results of the system, which are measured based on the false rejection rate (FRR) and false acceptance rate (FAR). The former means the rate of genuine signatures that are incorrectly rejected, while the latter means the rate of forgeries that are incorrectly accepted. The experimental results showed that the features based on number of strokes and vertical speed are sufficient to discriminate genuine samples from forgery samples based on the given threshold.
... We start with a narrow definition, designed as much to limit the scope of our inquiry as to determine it. "Biometric technologies" are automated methods of verifying or recognizing the identity of a living person based on a physiological or behavioral characteristic [7, 8]. ...
... However, as we will show in the paper, some other biometric techniques have higher discriminative power, so there might be changes in this trend in the near future. A BIS can be considered as an automatic pattern recognition system that establishes the authenticity of a specific physiological or behavioral characteristic possessed by a user [50,73]. In a first enrollment stage (system training) the system captures individual physiognomies, which are digitally represented by means of feature vector templates or prototypes. ...
Article
In this paper, we provide an overview of the fundamentals of biometric identification, together with a description of the main biometric technologies currently in use, all of them within a common reference framework. A comparison on different qualitative parameters of these technologies is also given, so that the reader may have a clear perspective of advantages and disadvantages of each. A section on multibiometrics describes the state of the art in making these systems work coordinately. Fusion at different conceptual levels is described. Finally, a section on commercial issues provides the reader a perspective of the main companies currently involved in this field.
... The databases used in this contest have not been acquired in a real environment and according to a formal protocol [23], [16], [19], [2] (also refer to [24] for an example of performance evaluation on real applications). . ...
Article
Full-text available
Reliable and accurate fingerprint recognition is a challenging pattern recognition problem, requiring algorithms robust in many contexts. FVC2000 competition attempted to establish the first common benchmark, allowing companies and academic institutions to unambiguously compare performance and track improvements in their fingerprint recognition algorithms. Three databases were created using different state-of-the-art sensors and a fourth database was artificially generated; 11 algorithms were extensively tested on the four data sets. We believe that FVC2000 protocol, databases, and results will be useful to all practitioners in the field not only as a benchmark for improving methods, but also for enabling an unbiased evaluation of algorithms
... Biometrics include fingerprint verification, hand geometry, retinal scanning, iris scanning, face recognition, and signature verification (Ashbourn 2000). Generally, physical and behavioural characteristics used by biometrics include the following taxonomy (Zhang 2000): Biometric authentication aims to identify an individual using either a biological feature they possess (physiological characteristic like a fingerprint), or something they do (behavioural characteristic, like a signature) (Wayman and Alyea 2000). ...
Article
Full-text available
Security is becoming an increasingly important issue for business, and with it comes the need for appropriate authentication; consequently, it is becoming gradually more important to develop secure e-commerce systems. Fraud via the web, identity theft, and phishing are raising concerns for users and financial organisations. In addition, current authentication methods, like passwords, have many problems (e.g. some users write them down, they forget them, or they make them easy to hack). We can overcome these drawbacks by using biometric authentication systems. Biometric systems are being used for personal authentication in response to the rising issue of authentication and security. Biometrics provide much promise, in terms of preserving our identities without the inconvenience of carrying ID cards and/or remembering passwords. This research is important because the securing of e-commerce transactions is becoming increasingly important. Identity theft, hacking and viruses are growing threats to Internet users. As more people use the Internet, more identity theft cases are being reported. This could harm not only the users, but also the reputation of the organisations whose names are used in these illegal acts. For example, in the UK, online banking fraud doubled in 2008 compared to 2007. More users took to e-shopping and online banking, but failed to take necessary protection. For non-western cultures, the figures for web security, in 2008, illustrated that Saudi Arabia was ranked ninth worldwide for users who had been attacked over the web. The above statistics reflect the significance of information security with e-commerce systems. As with any new technology, user acceptance of the new technology is often hard to measure. In this thesis, a study of user acceptance of biometric authentication systems in e-transactions, such as online banking, within Saudi society was conducted. It examined whether Saudis are practically willing to accept this technology. This thesis focuses upon Saudi Arabia, which has a developing economy. It has achieved a rapid rate of growth, and therefore makes an interesting and unique case study. From an economist's point of view, Saudi Arabia is the powerhouse of the Middle East. It has the leading regional economy, even though it is still relatively young. It has a young and rapidly growing population, which makes Saudi Arabia an attractive potential market for all kinds of e-commerce applications. Having said that, with more than half of the population under the age of 30, people are more likely to take the risk of accepting new technology. For this work, 306 Saudi participants were involved in the experiments. A laboratory experiment was created that actively tested a biometric authentication system in combination with a survey. The Technology Acceptance Model (TAM) was adopted in the first experimental phase as the theoretical basis on which to develop the research framework; the model has proven its efficiency as a good predictor for the biometric authentication system. Furthermore, in a second experimental phase, the Unified Theory of Acceptance and Use of Technology (UTAUT) with moderating variables such as age, gender and education level was examined as a proposed conceptual framework to overcome the limitations of TAM. The aim of the study was to explore factors affecting users' acceptance of biometric authentication systems.
The findings from Structural Equation Modelling (SEM) analysis indicate that education level is a significant moderating factor, while gender and age do not register as significant. This thesis added new knowledge to this field and highlighted the importance of the perceptions of users regarding biometric security technologies. It helps determine the factors affecting the acceptance of biometric technology. To our knowledge, this is the first systematic study of this issue carried out by academic and non-biased researchers in Saudi Arabia. Furthermore, the thesis presents security technology companies and developers of information security products with information to help in the determination of what is significant to their user base when taking into account the introduction of new secure systems and products.