Context in source publication

Context 1
... applications covertly and unobtrusively determine whether a person belongs to a watch-list of identities. Examples of screening applications include airport security, security at public events, and other surveillance settings. The screening watch-list consists of a moderate number of identities (e.g., a few hundred). By their very nature, these screening applications have no well-defined "user" enrolment phase, can expect only minimal control over their subjects and imaging conditions, and require large sustainable throughput with as little human supervision as possible. Neither large-scale identification nor screening can be accomplished without biometrics (e.g., by using token-based or knowledge-based identification). Even if some modalities such as iris or fingerprint can be considered sufficiently efficient, it is worth envisaging other inputs as well, since the choice of a modality is linked to acceptability and intended usage. In this report we describe four modalities in more detail: iris, fingerprint, DNA and face, although other choices are possible, such as voice, signature or gait.

The iris (see Fig. 4) is an overt body part that is available for remote (non-invasive) assessment. Unlike other modalities, the face for example, the variability in appearance of any iris might be well enough constrained to make an automated system possible based on currently available machine-vision technologies [3]. There is no international iris standard yet; only a preliminary report has been proposed, covering image acquisition (near-infrared images), image compression (using a low JPEG compression level), image pre-processing including boundary extraction, the coordinate system used, rotation uncertainty, image quality, grey-scale density and contrast (50 grey-level separations between pupil and iris, and 90 grey-level separations between iris and sclera). So far only the UK group has rejected this report, with comments about ID devices; the Japanese, German and US groups have accepted it with comments about iris size, iris quality measurement and the iris compression format. The US group has asked to include acquisition standards for normal (visible) light images, instead of only near-infrared images. The iris code obtained in the corresponding encoding process is the most precise print of all existing biometric techniques, at the expense of rather constrained acquisition conditions (the camera must be infrared and the eyes must be at a very precise distance from it). These elements provide the very good initial image quality that is necessary to ensure such a high level of performance. On the other hand, they may lengthen the enrolment phase and require personal assistance [2]. This method also requires a relatively expensive acquisition system and necessarily involves scanning of the eye, which can initially prove off-putting to users. The resulting reliability means it can be used successfully both for identification and for authentication, an advantage that few other techniques can offer.

Most fingerprint processing approaches use specific features called minutiae as a description of the fingerprint. These minutiae comprise large details such as starting lines, splitting lines and line fragments, and smaller ones such as ridge endings, incipient ridges, bifurcations, pores, deltas and line shapes (see Fig. 5).
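Before turning to fingerprint minutiae in detail, the iris-code comparison mentioned above can be made concrete with a small sketch. The code below is an illustrative assumption rather than the standardised or any commercial algorithm: it compares two binary iris codes with a normalised Hamming distance, restricted to bits that are valid in both occlusion masks, and tries a few circular shifts to absorb rotation uncertainty. The array shapes, the decision threshold of 0.32 and the function names are hypothetical; real systems derive the codes from Gabor filtering of the normalised iris texture, which is omitted here.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalised Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both codes."""
    valid = mask_a & mask_b
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        return 1.0  # no usable bits: treat as maximally dissimilar
    disagreements = np.count_nonzero((code_a ^ code_b) & valid)
    return disagreements / n_valid

def match_iris(code_a, code_b, mask_a, mask_b, max_shift=8, threshold=0.32):
    """Compare two iris codes, trying small circular shifts along the angular
    axis to compensate for rotation uncertainty at acquisition time.
    Returns (is_match, best_distance); the threshold is an assumed value."""
    best = 1.0
    for shift in range(-max_shift, max_shift + 1):
        d = hamming_distance(np.roll(code_a, shift, axis=1), code_b,
                             np.roll(mask_a, shift, axis=1), mask_b)
        best = min(best, d)
    return best < threshold, best

# Usage with random stand-in codes (16 radial rows x 256 angular positions):
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(16, 256)).astype(bool)
m = np.ones_like(a, dtype=bool)
print(match_iris(a, a, m, m))   # identical codes: distance 0.0, match
print(match_iris(a, ~a, m, m))  # complementary codes: large distance, no match
```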
Automatic minutiae detection is an extremely critical process, especially in low-quality fingerprint images, where noise and poor contrast can produce pixel configurations that resemble minutiae or hide real ones. Several approaches to automatic minutiae extraction have been proposed [5]; although rather different from each other, most of these methods transform the fingerprint image into a binary image, which is then post-processed to detect the ridge lines. Fingerprint authentication systems can be classified by their level of accuracy: the greater the accuracy required, the more complex and naturally the more expensive the system. The classification of a system is based on the number of features (minutiae) that it can handle. High-end systems are able to exploit up to 80 points and can also distinguish a real fingerprint from a forged (synthetic) one; systems in general use typically employ some 20 points. Since the fingerprint is never captured in exactly the same position, the verification algorithm must rotate and translate the captured fingerprint in order to align its minutiae with the template minutiae. The final stage in the matching procedure is to compare the sampled template with a set of enrolled templates (identification) or with a single enrolled template (authentication). It is highly improbable that the sample is bit-wise identical to the template, owing to approximations in the scanning procedure, misalignment of the images, and errors or approximations introduced when extracting the minutiae. Accordingly, a matching algorithm is required to test various orientations of the image and the degree of correspondence of the minutiae, and it assigns a numerical score to the match (a minimal illustrative sketch of this scoring step is given after the standards overview below).

Different methods exist for processing fingerprints:
- Direct optical correlation is practically never used, because it is not very efficient for large databases.
- The general shape of the fingerprint is often used to pre-process the images and reduce the search in large databases. This relies on the general directions of the fingerprint lines and on the presence of the core and the delta. Several categories have been defined in the Henry system: whorl, right loop, left loop, arch, and tented arch.
- Most methods use minutiae, i.e., specific points such as ridge endings and bifurcations. Only the position and direction of these features are stored in the signature for later comparison.
- Some methods count the number of ridges between particular points, generally the minutiae, instead of distances computed from their positions.
- Other pattern-matching algorithms use the general shape of the ridges: the fingerprint is divided into small sectors, and the ridge direction, phase and pitch are extracted and stored.
- Very often, algorithms use a combination of all these techniques.

The international ISO norm for the finger pattern is still under discussion. There are several American standards for fingerprints [6]:
- ANSI/NIST-ITL 1-2000 (revision of ANSI/NIST-CSL 1-1993 and ANSI/NIST-ITL 1a-1997): This standard defines the content, format, and units of measurement for the exchange of fingerprint, palmprint, facial, and SMT information that may be used in the identification of a subject.
Exchanged information consists of several items, including record data, digitized characteristic information, and compressed or uncompressed fingerprint, palmprint, facial, and SMT images. This standard forms the basis for interoperability between federal, state, local, and international users of Automated Fingerprint Identification Systems (AFIS) in the interchange of fingerprint search transactions. All agencies involved in the electronic transmission of fingerprint, palmprint, facial, and SMT images and related data must adhere to the format described by the standard, and dissimilar vendor equipment belonging to agencies submitting fingerprint images to the FBI must also adhere to it. In addition to being able to submit fingerprint search transactions to the FBI, agencies that adhere to the standard can also exchange fingerprint search transactions among themselves even though their systems are manufactured by different vendors. The FBI supported the development of this standard and requires that its AFIS vendors comply with its provisions.
- ANSI INCITS 377-2004, published in 2004, "Information technology - Finger Pattern Based Interchange Format": This standard specifies an interchange format for the exchange of pattern-based fingerprint recognition data. It describes the conversion of a raw fingerprint image to a cropped and down-sampled finger pattern, followed by the cellular representation of the finger-pattern image to create the finger-pattern interchange data.
- ANSI INCITS 378-2004: Published ...
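As a rough illustration of the minutiae alignment and scoring step described earlier in this excerpt (and not a reproduction of any cited standard or commercial matcher), the sketch below tries a grid of rotations and translations of the sampled minutiae and scores the best alignment by the fraction of template minutiae that find a mate within a distance and angle tolerance. The data structure, tolerances and brute-force search are simplifying assumptions; practical matchers use far more refined alignment, quality weighting and indexing.

```python
import math
from dataclasses import dataclass

@dataclass
class Minutia:
    x: float      # position in pixels
    y: float
    theta: float  # ridge direction in radians

def transform(m, angle, dx, dy):
    """Rotate a minutia about the origin and translate it."""
    c, s = math.cos(angle), math.sin(angle)
    return Minutia(c * m.x - s * m.y + dx,
                   s * m.x + c * m.y + dy,
                   (m.theta + angle) % (2 * math.pi))

def paired(a, b, dist_tol=12.0, angle_tol=math.radians(20)):
    """True if two minutiae are close in position and in ridge direction."""
    d_angle = abs(a.theta - b.theta) % (2 * math.pi)
    d_angle = min(d_angle, 2 * math.pi - d_angle)
    return math.hypot(a.x - b.x, a.y - b.y) <= dist_tol and d_angle <= angle_tol

def match_score(sample, template, angles, shifts):
    """Try each candidate rotation/translation of the sample and return the best
    fraction of template minutiae that find a mate (greedy one-to-one pairing)."""
    best = 0.0
    for angle in angles:
        for dx, dy in shifts:
            moved = [transform(m, angle, dx, dy) for m in sample]
            used, hits = set(), 0
            for t in template:
                for i, s in enumerate(moved):
                    if i not in used and paired(s, t):
                        used.add(i)
                        hits += 1
                        break
            best = max(best, hits / max(len(template), 1))
    return best

# Usage: a sample that is simply the template shifted by (5, -3) pixels.
template = [Minutia(100, 120, 0.3), Minutia(140, 80, 1.1), Minutia(60, 200, 2.0)]
sample = [Minutia(m.x + 5, m.y - 3, m.theta) for m in template]
angles = [math.radians(a) for a in range(-10, 11, 5)]
shifts = [(dx, dy) for dx in range(-10, 11, 5) for dy in range(-10, 11, 5)]
print(match_score(sample, template, angles, shifts))  # close to 1.0
```

The numerical score produced this way is what gets compared against the decision threshold discussed in the error-rate excerpts cited below.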

Citations

... Gregory & Simon (2008) suggested that specific rules are necessary to ensure that data are not used outside the fundamental purpose for which they were collected. It is therefore very important for people to understand that biometric technology is not intended as a medium for breaching privacy but rather for storing the features of an individual's data, referred to as a template (Dorizzi, 2005). According to Ashbourn (2000), biometric authentication may be broadly applicable globally but is still new to some developing nations. ...
Conference Paper
Full-text available
Currently, institutions are increasingly dependent on ICTs to provide their clients with educational services. In most industries, cloud services have been able to offer scalable online infrastructure on a broad scale, meaning that users can access, transfer and save data to internet sites from anywhere and from any device, so IT moves from the company to the software. Because of its cost-effectiveness, the motivation to use Cloud Computing (CC) has increased, and education is not far behind in adopting it. The study highlights the use of CC in Higher Education (HE) and its influence on both learners and educators, addresses the difficulties of CC in HE, and explores the use of this technology in different fields. The researcher found that CC has many benefits, in that learners can access online courses, learning reports, and other cloud-based tools, and many applications can run in the cloud computing environment, so HE institutions can take the lead in these features. A literature review was followed in this study, and data were collected qualitatively. In conclusion, the preferred applications used with CC in higher education include learning management systems (LMS), Amazon, IBM, and Microsoft Azure, while some drawbacks were found, such as a lack of network and access usability choices, privacy issues and individual mistakes, as well as many challenges facing the use of cloud computing technology in universities; the study recommends a set of essential recommendations related to the use of fog computing technology in educational institutions. Keywords: Cloud Computing, Cloud Computing in Education, Cloud Computing in Higher Education
... The decision of acceptance or rejection of a person is thus taken by comparing the answer of the system to a threshold (called the decision threshold). The values of FAR and FRR are thus dependent on this threshold which can be chosen so as to reduce the global error of the system [30] [33]. The decision threshold must be adjusted according to the desired characteristics for the application considered. ...
... High-security applications require a low FAR, which has the effect of increasing the FRR, while low-security applications are less demanding in terms of FAR. EER denotes the Equal Error Rate, the point where FAR = FRR. This threshold must be calculated afresh for each application, to adapt it to the specific population concerned [30] [33]. This is done in general using a small database recorded for this purpose. ...
... Often the equal error rate (EER), the point on the DET curve where the FA rate and FR rate are equal, is used as this single summary number [33]. However, the suitability of any system or technique for an application must be determined by taking into account the various costs and impacts of the errors, as well as other factors such as implementation and lifetime support costs and end-user acceptance issues [30] [33]. This paper has presented a human authentication method combining dynamic face, online signature and text-independent speech information in order to alleviate the problems of single-biometric authentication, since single-biometric authentication suffers from a fundamentally high False Accept Rate (FAR) and False Reject Rate (FRR). ...
Article
Full-text available
In this paper, the use of a finite Gaussian mixture model (GMM) tuned using the Expectation-Maximization (EM) estimation algorithm for score-level data fusion is proposed. Automated biometric systems for human identification measure a "signature" of the human body, compare the resulting characteristic to a database, and render an application-dependent decision. These biometric systems for personal authentication and identification are based upon physiological or behavioral features which are typically distinctive. Multi-biometric systems, which consolidate information from multiple biometric sources, are gaining popularity because they are able to overcome limitations such as non-universality, noisy sensor data, large intra-user variations and susceptibility to spoof attacks that are commonly encountered in mono-modal biometric systems. Simulation results show that the finite mixture model (GMM) is quite effective in modelling the genuine and impostor score densities, and fusion based on the product of likelihood ratios achieves a significant performance on the eNTERFACE2005 multi-biometric database based on dynamic face, on-line signature and text-independent speech modalities.
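The excerpts above describe how the decision threshold trades FAR against FRR and how the equal error rate summarises a system with a single number. A minimal, self-contained sketch of that trade-off is given below; the score distributions are synthetic and the function names are assumptions, not data or code from the cited papers.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted (score >= threshold).
    FRR: fraction of genuine scores rejected (score < threshold).
    Higher scores are assumed to mean 'more likely the same person'."""
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

def approximate_eer(genuine, impostor, n_steps=1000):
    """Sweep the decision threshold over the observed score range and return
    (eer, threshold) at the point where FAR and FRR are closest."""
    lo = min(genuine.min(), impostor.min())
    hi = max(genuine.max(), impostor.max())
    best_gap, best_eer, best_t = float("inf"), 1.0, lo
    for t in np.linspace(lo, hi, n_steps):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_eer, best_t = abs(far - frr), (far + frr) / 2, t
    return best_eer, best_t

# Synthetic scores: genuine comparisons tend to score higher than impostor ones.
rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 2000)   # same-person comparison scores
impostor = rng.normal(0.4, 0.1, 2000)  # different-person comparison scores

eer, t = approximate_eer(genuine, impostor)
print(f"EER ~ {eer:.3f} at threshold {t:.3f}")
# Raising the threshold lowers FAR and raises FRR (high-security setting);
# lowering it does the opposite (convenience-oriented setting).
print(far_frr(genuine, impostor, t + 0.1))
```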
... The decision of acceptance or rejection of a person is thus taken by comparing the answer of the system to a threshold (called the decision threshold). The values of FAR and FRR are thus dependent on this threshold which can be chosen so as to reduce the global error of the system [34,35]. The decision threshold must be adjusted according to the desired characteristics for the application considered. ...
... This threshold must be calculated afresh for each application, to adapt it to the specific population concerned [34,35]. This is done in general using a small database recorded for this purpose. ...
... This is done in general using a small database recorded for this purpose. Performance capabilities have been traditionally shown in the form of ROC (receiver- or relative-operating characteristic) plots [34], in which the probability of a false acceptance is plotted versus the probability of a false rejection for varying decision thresholds. Unfortunately, with ROC plots, curves corresponding to well-performing systems tend to bunch together near the lower left corner, impeding a clear visualization of competitive systems [35]. ...
Article
Full-text available
Face recognition has long been a goal of computer vision, but only in recent years has reliable automated face recognition become a realistic target of biometrics research. In this paper the contribution of classifier analysis to dynamic face biometrics verification performance is examined. It refers to the paradigm that, in classification tasks, the use of multiple observations and their judicious fusion at the data and decision levels improves correct-decision performance. The fusion tasks reported in this work were carried out through the fusion of two well-known face recognizers, ICA I and ICA II. It incorporates the decision at matching-score level, with fusion of the scores based on the likelihood ratio of the classifiers. This strategy increases the accuracy of the face recognition system and at the same time reduces the limitations of the individual recognizers. The analysis was tested on the eNTERFACE2005 database and the simulation results showed significant performance achievements.
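The abstracts above describe score-level fusion in which genuine and impostor score densities are modelled with Gaussian mixtures fitted by EM and combined through a likelihood ratio. The sketch below shows one plausible shape of that idea using scikit-learn's GaussianMixture on synthetic two-modality scores; it is an assumption-laden illustration, not the authors' implementation or their eNTERFACE2005 data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # assumes scikit-learn is available

def fit_density(scores, n_components=2):
    """Fit a small Gaussian mixture (via EM) to multi-modality match scores.
    `scores` has shape (n_samples, n_modalities)."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(scores)

def log_likelihood_ratio(genuine_gmm, impostor_gmm, scores):
    """Fused score: log p(scores | genuine) - log p(scores | impostor)."""
    return genuine_gmm.score_samples(scores) - impostor_gmm.score_samples(scores)

# Synthetic training scores for two matchers (e.g. face and signature).
rng = np.random.default_rng(2)
genuine_train = rng.normal([0.7, 0.65], 0.08, size=(1000, 2))
impostor_train = rng.normal([0.4, 0.45], 0.08, size=(1000, 2))

g_gmm = fit_density(genuine_train)
i_gmm = fit_density(impostor_train)

# Accept a probe when the fused log-likelihood ratio exceeds a chosen threshold.
probe = rng.normal([0.68, 0.6], 0.08, size=(5, 2))
print(log_likelihood_ratio(g_gmm, i_gmm, probe) > 0.0)
```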
... The decision of acceptance or rejection of a person is thus taken by comparing the answer of the system to a threshold (called the decision threshold). The values of FAR and FRR are thus dependent on this threshold which can be chosen so as to reduce the global error of the system [25,26]. The decision threshold must be adjusted according to the desired characteristics for the application considered. ...
... High-security applications require a low FAR, which has the effect of increasing the FRR, while low-security applications are less demanding in terms of FAR. EER denotes the Equal Error Rate, the point where FAR = FRR. This threshold must be calculated afresh for each application, to adapt it to the specific population concerned [25,26]. This is done in general using a small database recorded for this purpose. ...
... Often the equal error rate (EER), the point on the DET curve where the FA rate and FR rate are equal, is used as this single summary number [25]. However, the suitability of any system or technique for an application must be determined by taking into account the various costs and impacts of the errors, as well as other factors such as implementation and lifetime support costs and end-user acceptance issues [25,26]. This paper has presented a human authentication method combining dynamic face and on-line signature information in order to alleviate the problems of single-biometric authentication, since single-biometric authentication suffers from a fundamentally high False Accept Rate (FAR) and False Reject Rate (FRR). ...
Article
Full-text available
In this paper, the use of a finite Gaussian mixture model (GMM) based on the Expectation-Maximization (EM) estimation algorithm for score-level data fusion is proposed. Automated biometric systems for human identification measure a "signature" of the human body, compare the resulting characteristic to a database, and render an application-dependent decision. These biometric systems for personal authentication and identification are based upon physiological or behavioral features which are typically distinctive. Multi-biometric systems, which consolidate information from multiple biometric sources, are gaining popularity because they are able to overcome limitations such as non-universality, noisy sensor data, large intra-user variations and susceptibility to spoof attacks that are commonly encountered in mono-modal biometric systems. Simulations show that the finite mixture model (GMM) is quite effective in modelling the genuine and impostor score densities, and fusion based on the resulting density estimates achieves a significant performance on the eNTERFACE 2005 multi-biometric database based on dynamic face and signature modalities.