Fig 1 - uploaded by Loris Nanni
UCN of scores: k_c represents the model of the claimed user and k_1…k_B form the background models.


Source publication
Article
In this paper, we describe a supervised technique that allows a more robust biometric system to be developed than those based directly on the similarities of the biometric matchers or on the similarities normalised by the unconstrained cohort normalisation. In order to discriminate between genuine users and impostors, a quadratic discriminant classif...

Context in source publication

Context 1
... In Fig. 1 the scheme of the UCN is reported. We obtain good results (reported in Section 3) without considering the log domain. The third feature is the average score among the test pattern x and the users that belong to the background ...
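The cohort-style features mentioned in the snippet can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function name and the choice of returning the raw claimed-model score, the mean background (cohort) score and their difference are assumptions consistent with the description (linear domain, no logs).

```python
def ucn_features(score_claimed, background_scores):
    """Illustrative cohort-normalisation features (names are assumptions):
    the raw claimed-model score, the average score over the background
    (cohort) users, and their difference -- all in the linear domain."""
    mean_bg = sum(background_scores) / len(background_scores)
    return (score_claimed, mean_bg, score_claimed - mean_bg)

# Hypothetical matcher outputs: one claimed-model score, three cohort scores.
feats = ucn_features(0.82, [0.40, 0.35, 0.50])
```

A downstream classifier (the paper uses a quadratic discriminant classifier) would then be trained on such feature vectors rather than on the raw score alone.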

Similar publications

Article
Heterogeneous face recognition (HFR) aims to match face images across different imaging domains such as visible-to-infrared and visible-to-thermal. Recently, the increasing utility of nonvisible imaging has broadened the application prospects of HFR in areas such as biometrics, security, and surveillance. HFR is a challenging variant of face recogn...
Conference Paper
Many studies address aphid feeding behaviour to learn more about resistance mechanisms in plants. Time-to-event techniques can be used to exploit the datasets more profoundly by looking at changes in aphid behaviour over time. Our present dataset was acquired by automated video-tracking of the probing behaviour of Nasonovia ribisnigri, the lettuce aph...
Article
In this study, the relationships between total length and opercle dimensions (length and height) of European perch inhabiting Lake Ladik were examined. Sampling was carried out between January and February 2010. A total of 110 individuals were sampled and total length ranged from 11.6 to 24.7 cm. Opercles were extracted for all fish caught. Opercle...
Article
Biometry analysis was conducted on 282 specimens of whiting, Merlangius merlangus (Linnaeus, 1758), from the northern Adriatic Sea. The total length of all specimens ranged from 16.6 to 33.7 cm. There were twenty-one morphometric and nine meristic characteristics measured. Sexual dimorphism was observed in 15 morphometric measurements. The number of...
Article
Automatic on-line signature recognition has been investigated by several authors in order to allow machines to recognize a user from his or her own biometric traits. The following paper deals with the features and models required to allow a machine to learn and discriminate signatures. The proposed solution approaches the signature-making process as...

Citations

... With probability-based classifiers, such as Gaussian Mixture Models or Hidden Markov Models, the system output s(X/λ_C) is a probability, p(X/λ_C). With these classifiers, the decision is generally carried out using the likelihood-ratio test p(X/λ_C)/p(X/λ_C̄) [2][3][4], as a better performance is obtained. Nevertheless, this Score Ratio (equation (4)) has not been used with non-probabilistic classifiers, to the best of our knowledge. ...
... We refer to the latter as probability-like classifiers. For these, the decision is taken as in equations (3) and (4). However, for distance-based ones, the decision must be changed, as shown in equations (5) and (6). ...
... Two methods to obtain this likelihood can be seen in the literature: with a cohort set or representative set of the NC class [2,4,9], or using a model to capture the behaviour of the NC class [3,9]. ...
Article
Abstract: One of the ever-present goals in biometrics research is to improve system performance. Herein, an alternative method is proposed that is independent of the biometric characteristic and the system, as this proposal, the Score Ratio, is applied to the output (comparison score) of the classifier. The Likelihood Ratio is widely used with probabilistic classifiers because it performs well in these circumstances. However, when the classifiers are non-probabilistic, this ratio is not used. This is our proposal: with systems based on non-probabilistic classifiers, the decision is usually taken solely through the score obtained by supposing that the biometric feature, X, belongs to the Claimant (H0 hypothesis); here, it is also proposed to make use of the score obtained by considering that X does not belong to the Claimant (H1 hypothesis), more specifically, using the ratio between these two scores: the Score Ratio. For more objective results, benchmarking and reproducibility are ensured in the experiments, applying our proposal with third-party (benchmarking) experimental protocols, databases, classifiers and performance measures for fingerprint, iris and finger-vein recognition. Statistically significant improvements have been obtained when the Score Ratio is used, with regard to not using it, in all cases tested.
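The Score Ratio idea for similarity-type (higher-is-better) scores can be sketched as below. This is a hedged illustration, not the paper's implementation: how the H1 score is obtained in practice (e.g., the best or mean score against a cohort of non-claimant models) and the threshold value are assumptions.

```python
def score_ratio_decision(s_h0, s_h1, threshold):
    """Score Ratio for similarity scores (higher is better):
    accept when s(H0)/s(H1) exceeds a threshold, instead of
    thresholding s(H0) alone. s_h1 is assumed to be the score of X
    against some non-claimant reference (an assumption here)."""
    ratio = s_h0 / s_h1
    return ratio >= threshold, ratio

# Hypothetical scores: 0.9 against the claimed model, 0.3 against the
# non-claimant reference; the threshold 2.0 is purely illustrative.
accept, r = score_ratio_decision(0.9, 0.3, threshold=2.0)
```

The point of the ratio is that a genuine attempt should score much better against the claimed model than against any non-claimant reference, which a raw-score threshold alone does not capture.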
... Intensity-based methods use techniques like principal component analysis, independent component analysis and linear discriminant analysis for the comparison (e.g., [26,22]). Other categories of methods are based on force field transformations (e.g., [3]), 2D ear curve geometry (e.g., [8]), Fourier descriptors [1], wavelet transformation (e.g., [11]), Gabor filters (e.g., [18]) or scale-invariant feature transformation (e.g., [15]). A last category of comparison techniques is based on 3D shape features. ...
Conference Paper
Comparing ear photographs is considered to be an important aspect of victim identification. In this paper we study how automated ear comparison can be improved with soft computing techniques. More specifically, we describe and illustrate how bipolar data modelling techniques can be used for handling data imperfections more adequately. In order to minimise rescaling and reorientation problems, we start with 3D ear models that are obtained from 2D ear photographs. To compare two 3D models, we compute and aggregate the similarities between corresponding points. Hereby, a novel bipolar similarity measure is proposed. This measure is based on Euclidean distance, but explicitly deals with hesitation caused by bad data quality. Comparison results are expressed using bipolar satisfaction degrees which, compared to traditional approaches, provide a semantically richer description of the extent to which two ear photographs match.
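One plausible reading of a bipolar, Euclidean-distance-based point similarity is sketched below. The exponential distance-to-similarity mapping and the quality factor are assumptions for illustration only, not the measure from the paper; what the sketch preserves is the structure of a bipolar satisfaction degree, where hesitation (whatever is assigned to neither satisfaction nor dissatisfaction) grows as data quality drops.

```python
import math

def bipolar_similarity(p, q, quality):
    """Hypothetical bipolar (satisfaction, dissatisfaction) pair for two
    corresponding 3D points. Both degrees are scaled by a data-quality
    factor in [0, 1], so the hesitation 1 - s - d absorbs poor quality."""
    dist = math.dist(p, q)            # Euclidean distance between points
    sim = math.exp(-dist)             # assumed mapping of distance to [0, 1]
    s = quality * sim                 # degree of satisfaction ("these match")
    d = quality * (1.0 - sim)         # degree of dissatisfaction
    return s, d, 1.0 - s - d          # hesitation caused by bad data quality

# Identical points but imperfect data quality: some hesitation remains.
s, d, h = bipolar_similarity((0, 0, 0), (0, 0, 0), quality=0.8)
```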
... Intensity-based methods use techniques like principal component analysis, independent component analysis and linear discriminant analysis for the comparison (e.g., [21], [22], [26]). Other categories of methods are based on force field transformations (e.g., [3]), 2D ear curve geometry (e.g., [7]), Fourier descriptors [1], wavelet transformation (e.g., [12]), Gabor filters (e.g., [17]) or scale-invariant feature transformation (e.g., [16]). A last category of comparison techniques is based on 3D shape features. ...
Conference Paper
Ear biometric authentication is considered to be an important aspect of human identification and is, among other techniques, used in victim identification for practical reasons. State-of-the-art techniques transform 2D ear photos into 3D ear models to adequately cope with geometrical and photometric normalisation issues. From each 3D ear model a feature list is extracted and used in the comparison process. In this paper we study how automated comparison of 3D ear models can be improved by soft computing techniques. More specifically, we investigate and illustrate how multiple-criteria decision support techniques, which are based on fuzzy set theory, can be used for fine-tuning the ear comparison process. Point-to-point matching schemes are enriched with Logic Scoring of Preference (LSP) multiple-criteria decision support facilities. In this way, valuable knowledge of forensic experts on ear identification aspects can be incorporated in the comparison process. The benefits and added value of the approach are discussed and demonstrated by an illustrative example.
... A multimatcher approach is adopted in [69][70][71]. In the first work, each matcher is trained using features extracted from a single subwindow (SW) of the entire 2D image. ...
... The rank-1 recognition rate is about 84%. A quadratic discriminant classifier (QDA) is trained in the later work [71] to improve discrimination between genuine users and impostors in verification tasks, based on the same feature extraction technique. Finally, in [70], the multimatcher approach is used along very similar lines, yet combining different color spaces instead of different SWs, exploiting the ability of different color spaces to bring different information. ...
Chapter
Ear biometrics, compared with other physical traits, presents both advantages and limits. First of all, the small surface and the quite simple structure play a controversial role. On the positive side, they allow faster processing than, say, face recognition, as well as less complex recognition strategies than, say, fingerprints. On the negative side, the small ear area itself makes recognition systems especially sensitive to occlusions. Moreover, the prominent 3D structure of distinctive elements like the pinna and the lobe makes the same systems sensitive to changes in illumination and viewpoint. Overall, the best accuracy results are still achieved in conditions that are significantly more favorable than those found in typical (really) uncontrolled settings. This makes the use of this biometric trait in real-world applications still difficult to propose, since commercial use requires much higher robustness. Notwithstanding the mentioned limits, the ear is still an attractive topic for biometrics research, due to other positive aspects. In particular, it is quite easy to acquire ear images remotely, and these anatomic features are also relatively stable in size and structure over time. Of course, as with any other biometric trait, they also call for some template updating. This is mainly due to age, but not in the commonly assumed way. The apparently bigger size of elders' ears with respect to those of younger subjects is due to the fact that aging causes a relaxation of the skin and of some muscle-fibrous structures that hold the so-called pinna, i.e. the most evident anatomical element of the ear. This creates the belief that ears continue growing all life long. On the other hand, a similar process holds for the nose, for which the relaxation of the cartilage tissue tends to cause a downward curvature.
In this chapter we will present a survey of present techniques for ear recognition, from geometrical to 2D-3D multimodal, and will attempt a reasonable hypothesis about the future ability of ear biometrics to fulfill the requirements of less controlled/covert data acquisition frameworks.
... They achieved a rank-1 recognition rate of ∼84% and a rank-5 recognition rate of ∼93%; for verification experiments, the area under the ROC curve was ∼98.5%, suggesting very good performance. Later, Nanni and Lumini [2009b] improved the performance of their ear matcher using score normalization. In order to discriminate between genuine users and impostors, they trained a quadratic discriminant classifier. ...
Article
Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state of the art, not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
... Recent research on personal identification systems based on humans' biological features paves the way for biometric recognition. Among the biometric features, the retinal structure provides the most secure features since it cannot be imitated (Nanni & Lumini, 2009a, 2009b; Wu et al., 2009). A human identification system based on the retinal structure was first introduced in Hill (1999). ...
Article
The characteristics of the human body such as the fingerprint, face, palm and iris are measured, recorded and identified by performing comparisons using biometric devices. Even though it has not yet seen widespread acceptance, retinal identification based on the vasculature of the retina provides the most secure and accurate authentication means among biometric systems. Using retinal images taken from individuals, retinal identification is employed in environments such as nuclear research centers and facilities and weapon factories, where extremely high security measures are needed. The superiority of this method stems from the fact that the retina is unique to every human being and does not change during a person's life. Conversely, other identification approaches such as fingerprint, face, palm and iris recognition are all vulnerable in that those characteristics can be corrupted via plastic surgery and other changes. In this study we propose an alternative personal identification system based on the retinal vascular network in retinal images, which tolerates scale, rotation and translation in the comparison. In order to accurately identify a person, our new approach first segments the vessel structure and then employs similarity measurement with these tolerances. The developed system, tested on about four hundred images, achieves over 95% success, which is quite promising.
... In this work we develop an online signature verification system based on two classifiers and a fuzzy logic inference decision module. Research on online signature verification systems has been carried out over the years [42][43][44][45][46][47]. In [45], the pressure signal is used as the main feature of the signature pattern and neuro-templates were used as classifiers for the online signature verification system. ...
Article
Compared to physiologically based biometric systems such as fingerprint, face, palm-vein and retina, behavioral biometric systems such as signature, voice, gait, etc. are less popular, and much of the research in these areas is still in its infancy. One of the reasons is the inconsistency of human behavior, which requires more robust algorithms. In this paper, an online signature verification system is proposed based on fuzzy logic inference. To ensure higher accuracy, the signature verification system is designed to include the fusion of multiple classifiers, namely, the back-propagation neural network algorithm and the Pearson correlation technique. A fuzzy logic inference engine is also designed to fuse two global features: the time taken to sign and the length of the signature. The fuzzy logic inference engine is used to overcome the boundary limitations of fixed thresholds, to cope with the uncertainty of thresholds across users, and to produce a more human-like output. The system has been developed with a robust validation module based on Pearson's correlation algorithm in which more consistent sets of signatures are enrolled. In this way, more consistent sets of training patterns are used for training. The results show that the multi-classifier fusion technique improves the false rejection rate and false acceptance rate of the system compared to the individual classifiers, and the use of the fuzzy logic inference module for the final decision helps to further improve the system performance.
Chapter
Enterprise Resource Planning (ERP) systems have been leveraged by enterprises for the last few decades to improve and streamline their internal processes linked to various resources such as customers, suppliers, human capital, machinery, etc. ERP strengthens and enhances overall business performance, helping firms survive in a volatile business environment with strong global challenges. ERP provides an organization with a competitive edge over its competitors by integrating all the various wings of the organization with quick response times and low operating costs. In the era of the fourth industrial revolution, where enterprises are driven by current technology trends like Cognitive Computing, the Internet of Things and Cloud Computing, Cloud ERP systems are gaining popularity among enterprises due to their low implementation time and cost. But the implementation of Cloud ERP systems involves various complexities and may end in total failure, causing a loss of effort and investment. The authors have explored and identified the critical factors for the implementation of Cloud ERP solutions for any enterprise. The Fishbone (Ishikawa) analysis technique is used to identify the critical factors to be addressed for the successful implementation of Cloud ERP solutions in any organization. The outcomes show that certain factors, like ensuring system and data security, managing customizations to unique internal processes, and agility in an ever-changing business environment, play a significant role in the successful implementation of Cloud ERP systems.
Chapter
Biometric identification devices depend on characteristics of the human body such as the face, fingerprint, palm and eye. Among all these features, vascular-structure-based retinal biometry provides the most secure person identification system. In this paper, we propose a biometric authentication framework based on some existing and unique features present in the human retinal vascular structure. The approach begins with vessel segmentation from colored fundus images, followed by the selection of unique features such as the center of the optic disc (OD), the macula, the distance between the OD and the macula, and bifurcation points and their angles. A 96-byte digital template is then prepared for each image by concatenating the selected features; finally, the templates are compared with each other to measure dissimilarity. The study shows around 92% accuracy in template preparation and matching on all the images of the DRIVE database.
Conference Paper
Biometric user verification or authentication is a pattern recognition problem that can be stated as a basic hypothesis test: X is from client C (\(H_0\)) vs. X is not from client C (\(H_1\)), where X is the biometric input sample (face, fingerprint, etc.). When probabilistic classifiers are used (e.g., Hidden Markov Models), the decision is typically performed by means of the likelihood ratio: \({P(X/H_0)}/{P(X/H_1)}\). However, as far as we know, this ratio is not usually computed when distance-based classifiers (e.g., Dynamic Time Warping) are used. Following that idea, we propose here to base the decision not only on the score ("score" being the classifier output) supposing X is from the client (\(H_0\)), but also on the score supposing X is not from the client (\(H_1\)), by means of the ratio between both scores: the score ratio. A first approach to this proposal can be seen in this work, showing that using the score ratio can be an interesting technique to improve distance-based biometric systems. This research has focused on the biometric signature, where several state-of-the-art systems based on distance can be found. Here, the score ratio proposal is tested in three of them, achieving great improvements in the majority of the tests performed. The best verification results have been achieved with the use of the score ratio, improving the best results without it by, on average, 24%.
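For distance-based classifiers such as Dynamic Time Warping, scores are dissimilarities (lower is better), so the score-ratio decision inverts relative to the similarity case. A hedged sketch follows; the placeholder distances and threshold are illustrative, not values from the paper.

```python
def distance_score_ratio(d_h0, d_h1, threshold):
    """Score ratio for distance scores (lower is better): accept when
    d(H0)/d(H1) is SMALL, i.e. X is much closer to the claimed client's
    model than to the non-client reference. The threshold is illustrative."""
    ratio = d_h0 / d_h1
    return ratio <= threshold, ratio

# Hypothetical DTW-like distances: 2.0 to the claimed client's model,
# 10.0 to the non-client reference; threshold 0.5 is purely illustrative.
accept, r = distance_score_ratio(2.0, 10.0, threshold=0.5)
```

The same intuition as in the probabilistic case applies: a genuine signature should be far closer to the claimed client's templates than to any non-client reference, which a raw distance threshold alone cannot express.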