Figure 8 - uploaded by Trevor J. Bihl
HSI facial recognition graphical user interface (with unity weighting).

Source publication
Article
Full-text available
A qualia exploitation of sensor technology (QUEST) motivated architecture using algorithm fusion and adaptive feedback loops for face recognition for hyperspectral imagery (HSI) is presented. QUEST seeks to develop a general purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. Qualia-...

Context in source publication

Context 1
... GUI tool, pictured in Figure 8, is a direct parallel to the architecture presented in Figure 7. A user can select the active agents, enable feedback loops, and choose either a score or a rank fusion approach while simultaneously analyzing results. The GUI displays the probe to be matched, with the best current match directly opposite. Below these displays, the top ten matches appear as thumbnails along with their relative rankings and scores. The results of each algorithm can be viewed by selecting the algorithm of interest in the “Results to Display” drop-down menu. If feedback loops are employed, a user can select which result set to view, along with the dimensionality of the gallery, in the “Gallery Set Results to View” menu. The pictorial results can be viewed as either grayscale or color images. A box plot is displayed for each probe under consideration to provide continuous score-distribution feedback when viewing results for each face and method. Additionally, gallery matches for each probe are scrollable, enabling visual evaluation of results across the entire score distribution. To review the quantitative results, the user can choose from cumulative match score plots, box plots, or histogram depictions of the relative scores and statistics.

For processing purposes, Matlab’s multiple-processor pooling was employed on a dual quad-core computer with 16 GB of RAM. The processing requirements of the hyperspectral data, along with the chosen methods, benefit from the use of parallel processing. For computational ease, an additional utility tool allows the user to view the saved results of any prior run by simply loading a results file; this file records the algorithms used, the type of feedback loops used, and the weighting schemes, and permits a user to view all results and face matches. The status bar notifies the user whether the selected computer can support running the complete suite of software tools.

5.1. Algorithm and Fusion Performance. During the initial testing of the CMU data, many of the same algorithms were used as in previous HSI research [12, 17, 18, 21]. The results confirmed some of the challenges present in the CMU data, the key difference being the quality gap between the CMU data and both the grayscale AT&T data [37] and the more recent CAL HSI data [18] obtained with more modern equipment. Although the published performance levels of these algorithms were not replicated, the value of the various techniques is not diminished. A comparison of the previously published performance versus that obtained through our initial testing is shown in Figure 9 to establish a preliminary performance threshold.

Data processing starts with a common but now automated preprocessing step, followed by the extraction of basic face features, and then a matching step in which face features and characteristics are compared for subject matching. Average computation time for the preprocessing of each face is 14 seconds. Face-matching algorithms take an additional average of 13 seconds to process each face against the gallery of 36 subjects for an algorithm suite consisting of six algorithms, including SIFT, eigenface, various geometric comparisons, and NDSI. Processing time can vary depending on the number of algorithms or agents activated by the user.
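The score- and rank-fusion options exposed by the GUI can be illustrated with a short sketch. The original tool is implemented in Matlab; the Python snippet below is only an illustrative stand-in, and the array shapes, function names, and min-max normalization are assumptions rather than details taken from the paper.

```python
import numpy as np

def fuse_scores(score_matrix, mode="score", weights=None):
    """Fuse per-algorithm similarity scores for one probe.

    score_matrix : (n_algorithms, n_gallery) array, higher = better match
                   (an assumption of this sketch).
    mode         : "score" sums (optionally weighted) normalized scores;
                   "rank" sums per-algorithm ranks (Borda-style).
    """
    scores = np.asarray(score_matrix, dtype=float)
    n_alg, n_gallery = scores.shape
    w = np.ones(n_alg) if weights is None else np.asarray(weights, dtype=float)

    if mode == "score":
        # Min-max normalize each algorithm's scores so they are comparable,
        # then take a weighted sum across algorithms.
        lo = scores.min(axis=1, keepdims=True)
        rng = scores.max(axis=1, keepdims=True) - lo
        rng[rng == 0] = 1.0
        fused = w @ ((scores - lo) / rng)
    else:
        # Rank fusion: rank 0 = best match per algorithm; sum of ranks is
        # negated so that a higher fused value still means a better match.
        ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
        fused = -(w @ ranks)

    best = int(np.argmax(fused))         # top gallery match shown opposite the probe
    top10 = np.argsort(-fused)[:10]      # thumbnail list shown below the displays
    return fused, best, top10
```

With unity weighting (all weights equal to one, as in Figure 8), the two modes differ only in whether normalized scores or ordinal ranks are summed.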
Findings from this initial round of testing reinforce the need for a fusion framework that combines complementary aspects of these algorithms to enhance performance regardless of data quality or environmental setting. Given the processing time of some algorithms, a method for effective data reduction and processing should also be considered to reduce overall computation time. The next section briefly describes the results of integrating the separate algorithms into a hierarchy for a robust face recognition system.

5.2. QUEST Hierarchy Results and Findings. A combination of score and rank fusion strategies was tested, with the most effective being a weighted score fusion strategy in which the overall matching score is a combination of weighted individual matching scores. Figure 10 illustrates a cumulative match score result using three eigenface-based methods (“hair,” “face,” and “skin”) and unity weighting; the right-hand figure illustrates the changes to the comparison space from dropping the two lowest performing faces ...
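The weighted score fusion and the cumulative match score (CMS) reporting described above can be summarized in a minimal sketch, again in illustrative Python rather than the paper's Matlab code; the array layouts and identifiers below are assumptions.

```python
import numpy as np

def weighted_score_fusion(per_algorithm_scores, weights):
    """Overall matching score = weighted sum of individual matching scores.

    per_algorithm_scores : (n_algorithms, n_probes, n_gallery) array.
    weights              : (n_algorithms,) array; all ones gives unity weighting.
    """
    s = np.asarray(per_algorithm_scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w, s, axes=1)          # (n_probes, n_gallery)

def cumulative_match_scores(fused, true_ids):
    """CMS curve: entry k-1 is the fraction of probes whose true gallery
    subject appears among the top-k fused matches."""
    fused = np.asarray(fused, dtype=float)
    true_ids = np.asarray(true_ids)
    n_probes, n_gallery = fused.shape
    order = np.argsort(-fused, axis=1)                      # best match first
    ranks = np.array([np.where(order[i] == true_ids[i])[0][0] + 1
                      for i in range(n_probes)])            # 1-based retrieval rank
    return np.array([(ranks <= k).mean() for k in range(1, n_gallery + 1)])
```

In this sketch, dropping the two lowest performing faces as in the right-hand plot of Figure 10 would correspond to removing the associated entries from the fused score matrix before the curve is recomputed.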

Similar publications

Conference Paper
Full-text available
A face recognition methodology employing an efficient fusion hierarchy for hyperspectral imagery (HSI) is presented. A Matlab-based graphical user interface (GUI) is developed to aid processing, track performance and to display results. The incorporation of adaptive feedback loops enhances performance through the reduction of candidate subjects in t...
Conference Paper
Full-text available
Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have been becoming a practical sensor technology across a growing geospatial market, complementary to the traditional vertic...

Citations

... Their method provided high recognition accuracy under pose variations on a dataset which contains 1400 hyperspectral images from 200 people. However, the method does not achieve the same promising results on the public hyperspectral datasets used in [15]. ...
Conference Paper
Full-text available
Hyperspectral imaging systems collect and process information from specific wavelengths across the electromagnetic spectrum. The fusion of multi-spectral bands in the visible spectrum has been exploited to improve face recognition performance over conventional broad band face images. In this paper, we propose a new Convolutional Neural Network (CNN) framework which adopts a structural sparsity learning technique to select the optimal spectral bands to obtain the best face recognition performance over all of the spectral bands. Specifically, in this method, all the bands are fed to a CNN and the convolutional filters in the first layer of the CNN are then regularized by employing a group Lasso algorithm to zero out the redundant bands during the training of the network. Contrary to other methods which usually select the bands manually or in a greedy fashion, our method selects the optimal spectral bands automatically to achieve the best face recognition performance over all the spectral bands. Moreover, experimental results demonstrate that our method outperforms state of the art band selection methods for face recognition on several publicly-available hyperspectral face image datasets.
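As a rough illustration of the band-selection mechanism described in this abstract (group Lasso regularization of the first convolutional layer so that whole spectral bands are zeroed out), the sketch below computes only the group penalty and the surviving bands; it is not the authors' CNN, and the weight shapes, regularization strength, and threshold are assumptions.

```python
import numpy as np

def group_lasso_band_penalty(conv1_weights, lam=1e-3):
    """Group Lasso penalty over first-layer filters, grouped by input band.

    conv1_weights : (n_filters, n_bands, k, k) array of first-layer weights.
    Returns the penalty value and the per-band group norms.
    """
    w = np.asarray(conv1_weights, dtype=float)
    # One group per spectral band: L2 norm of every weight touching that band.
    band_norms = np.sqrt((w ** 2).sum(axis=(0, 2, 3)))
    return lam * band_norms.sum(), band_norms

def selected_bands(band_norms, tol=1e-6):
    """Bands whose group norm was not driven to (near) zero during training."""
    return np.flatnonzero(np.asarray(band_norms) > tol)
```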
... Many of these protocols have provided promising results on databases captured under controlled conditions. However, these methods often show a significant performance drop in the presence of variations in face orientation [2,6]. ...
... Their method provided high recognition accuracy under pose variations on a dataset which contains 1400 hyperspectral images from 200 people. However, the method does not achieve the same promising results on the public hyperspectral datasets used in [6]. ...
Preprint
Full-text available
Hyperspectral imaging systems collect and process information from specific wavelengths across the electromagnetic spectrum. The fusion of multi-spectral bands in the visible spectrum has been exploited to improve face recognition performance over all the conventional broad band face images. In this book chapter, we propose a new Convolutional Neural Network (CNN) framework which adopts a structural sparsity learning technique to select the optimal spectral bands to obtain the best face recognition performance over all of the spectral bands. Specifically, in this method, images from all bands are fed to a CNN, and the convolutional filters in the first layer of the CNN are then regularized by employing a group Lasso algorithm to zero out the redundant bands during the training of the network. Contrary to other methods which usually select the useful bands manually or in a greedy fashion, our method selects the optimal spectral bands automatically to achieve the best face recognition performance over all spectral bands. Moreover, experimental results demonstrate that our method outperforms state of the art band selection methods for face recognition on several publicly-available hyperspectral face image datasets.
... They collected a 3D face database using a stereo camera system for performance evaluation. Skin reflectance models, based on non-thermal hyperspectral imagery, have been used to develop skin/face detection and classification algorithms [15,16] which can be used for face liveness detection. ...
Article
Full-text available
Spoofing attacks on biometric systems are one of the major impediments to their use for secure unattended applications. This paper explores features for face liveness detection based on tracking the gaze of the user. In the proposed approach, a visual stimulus is placed on the display screen, at apparently random locations, which the user is required to follow while their gaze is measured. This visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific positions on the screen. Features extracted from sets of collinear and colocated points are used to estimate the liveness of the user. Data are collected from genuine users tracking the stimulus with natural head/eye movements and impostors holding a photograph, looking through a 2D mask or replaying the video of a genuine user. The choice of stimulus and features are based on the assumption that natural head/eye coordination for directing gaze results in a greater accuracy and thus can be used to effectively differentiate between genuine and spoofing attempts. Tests are performed to assess the effectiveness of the system with these features in isolation as well as in combination with each other using score fusion techniques. The results from the experiments indicate the effectiveness of the proposed gaze-based features in detecting such presentation attacks.
... QuEST concept implementations have been instantiated in various applications, including Breast Cancer detection [9], cyber [10], and facial recognition [11]. In these problems, algorithms were employed to consider features in a general-to-specific sense, e.g. ...
... shape, texture, spatial, spectral, and interest points, in a fusion hierarchy. Each algorithm was first considered as a QuEST agent; agents were connected through various links with such connections used to extract context [11]. Agent internal representation was improved through adaptive feedback with methods termed "adaptive gallery" and "multi-look" [11]. ...
... Each algorithm was first considered as a QuEST agent; agents were connected through various links with such connections used to extract context [11]. Agent internal representation was improved through adaptive feedback with methods termed "adaptive gallery" and "multi-look" [11]. The adaptive gallery architecture changed library dimensionality by removing low-scoring matches; multi-look considered alternating library images if and when new information became available. ...
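A minimal sketch of the adaptive-gallery feedback described in these excerpts, pruning the lowest-scoring gallery subjects between passes so that subsequent fusion runs operate on a reduced library; this is an illustrative Python fragment under assumed data shapes, not the cited implementation.

```python
import numpy as np

def adaptive_gallery_prune(fused_scores, gallery_ids, n_drop=2):
    """Remove the n_drop lowest-scoring gallery subjects for this probe,
    reducing gallery dimensionality for the next feedback pass."""
    fused_scores = np.asarray(fused_scores, dtype=float)
    if n_drop <= 0:
        keep = np.arange(len(fused_scores))
    else:
        keep = np.argsort(-fused_scores)[:-n_drop]   # drop the lowest scores
    keep = np.sort(keep)                             # preserve gallery order
    return [gallery_ids[i] for i in keep], fused_scores[keep]
```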
Poster
Full-text available
Qualia-based Exploitation of Sensing Technology (QuEST) is a dual process framework that leverages what is known about human neurophysiological and neuropsychological processes to create an artificial cognitive exoskeleton functioning similarly to the human mind. In this paper, we present a quick QuEST overview and a visionary approach using QuEST methods that can improve cognitive V2V network resistance to hacking. QuEST tenets and designs have been used successfully in cyber security, facial recognition, and cancer detection; thus V2V information security in the open internet context can be enhanced via QuEST. Of note, QuEST’s focus is on intelligence amplification (IA) versus artificial intelligence (AI) and developing a machine architecture which closes the loop between human and machine.
... However, the dataset is not publicly available; therefore, it has not been possible to validate their research results. In fact, other researchers have been unable to replicate these results on their own databases [18]. Therefore, whether the spectral reflectance of the human face is a viable biometric remains an open question. ...
Article
Full-text available
Over a decade ago, Pan et al. [IEEE TPAMI 25, 1552 (2003)] performed face recognition using only the spectral reflectance of the face at six points and reported a recognition rate of around 95%. Since their database is private, no one has been able to replicate these results. Moreover, due to the unavailability of public datasets, there has been no detailed study in the literature on the viability of facial spectral reflectance for person identification. In this study, we introduce a new public database of facial spectral reflectance profiles measured with a high precision spectrometer. For each of the 40 subjects, spectral reflectance was measured at the same six points as Pan et al. [IEEE TPAMI 25, 1552 (2003)] in multiple sessions and with time lapse. Furthermore, we sampled the facial spectral reflectance from two public hyperspectral face image datasets and analyzed the data using state of the art face classification techniques. The best performing classifier achieved a maximum rank-1 identification rate of 53.8%. We conclude that facial spectral reflectance alone is not a reliable biometric for unconstrained face recognition.
... They report high recognition rate under pose variations on a proprietary database comprising 1400 hyperspectral images of 200 subjects. However, their results are not repeatable on public hyperspectral face databases [3]. The same database was used by Pan et al. [4] to incorporate both the spatial and the spectral information. ...
... However, although the spectral range of CMU-HSFD is 450-1090 nm and includes the NIR range, the Spectral Signature Matching still does not give high accuracy on this database. A similar recognition rate for the Spectral Signature Matching [2] on the CMU-HSFD is reported in Ryer's PhD thesis [3]. ...
Article
Full-text available
Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges including a low signal-to-noise ratio, inter-band misalignment and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad-hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using spatiospectral covariance for band fusion and PLS regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard datasets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and NIR spectrum.
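For context on the PLS classification step mentioned in this abstract, the sketch below shows a generic PLS-regression classifier (one-hot subject targets, predicted identity taken as the largest regression output) using scikit-learn; it is not the authors' spatiospectral band-fusion pipeline, and the identifiers and parameter values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_identify(train_features, train_labels, test_features, n_components=20):
    """Fit PLS regression against one-hot subject labels and identify test
    samples as the class with the largest predicted response."""
    train_labels = np.asarray(train_labels)
    classes = np.unique(train_labels)
    onehot = (train_labels[:, None] == classes[None, :]).astype(float)
    pls = PLSRegression(n_components=n_components)
    pls.fit(np.asarray(train_features, dtype=float), onehot)
    pred = pls.predict(np.asarray(test_features, dtype=float))  # (n_test, n_classes)
    return classes[np.argmax(pred, axis=1)]
```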
... However, the recognition rate dropped when the time lapse between probe and gallery image acquisition increased. Moreover, spectral features showed poor recognition accuracy on public hyperspectral face databases [17]. Robila [14] also used spectral features of different face regions in hyperspectral images of 120 bands (400-900 nm) but compared them using spectral angle measurement. ...
... Although the spectral range of CMU-HSFD is 450-1100 nm and includes the NIR range, spectral features still do not give high accuracy on this database. Ryer reported a similar recognition rate for spectral signature features on the CMU database in his PhD thesis [17]. ...
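The spectral angle measurement mentioned in these excerpts has a simple closed form, theta = arccos(x.y / (||x|| ||y||)); the sketch below is a generic spectral angle mapper, not tied to any particular paper's face regions or preprocessing.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two reflectance spectra; smaller
    angles indicate more similar spectral signatures."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```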
Conference Paper
Full-text available
Compact and discriminative feature extraction from high dimensional hyperspectral image cubes is a challenging task. In this paper we propose a spatio-spectral feature extraction method based on the 3D Discrete Cosine Transform (3D-DCT). The 3D-DCT optimally compacts information in the low frequency coefficients. Therefore, we represent each hyperspectral facial cube by a small number of low frequency DCT coefficients. For the purpose of classification, we propose Partial Least Square (PLS) regression. The proposed algorithm is evaluated on three standard hyperspectral face databases. Experimental results show that the proposed algorithm has outperformed five current state of the art hyperspectral face recognition algorithms by a significant margin.
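The feature-compaction step summarized in this abstract, keeping a small block of low-frequency 3D-DCT coefficients per hyperspectral cube, can be sketched with SciPy; the cube layout and the number of retained coefficients below are assumptions, and the classification stage (PLS) is omitted.

```python
import numpy as np
from scipy.fft import dctn

def dct3_features(hyperspectral_cube, keep=(8, 8, 8)):
    """Type-II 3D DCT of a (rows, cols, bands) cube; keep only the
    low-frequency corner block and flatten it into a feature vector."""
    coeffs = dctn(np.asarray(hyperspectral_cube, dtype=float), norm="ortho")
    kr, kc, kb = keep
    return coeffs[:kr, :kc, :kb].ravel()
```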
Chapter
Hyperspectral imaging systems collect and process information from specific wavelengths across the electromagnetic spectrum. The fusion of multi-spectral bands in the visible spectrum has been exploited to improve face recognition performance over all the conventional broadband face images. In this chapter, we propose a new Convolutional Neural Network (CNN) framework which adopts a structural sparsity learning technique to select the optimal spectral bands to obtain the best face recognition performance over all of the spectral bands. Specifically, in this method, images from all bands are fed to a CNN, and the convolutional filters in the first layer of the CNN are then regularized by employing a group Lasso algorithm to zero out the redundant bands during the training of the network. Contrary to other methods which usually select the useful bands manually or in a greedy fashion, our method selects the optimal spectral bands automatically to achieve the best face recognition performance over all spectral bands. Moreover, experimental results demonstrate that our method outperforms state-of-the-art band selection methods for face recognition on several publicly available hyperspectral face image datasets.
Article
Full-text available
Hyperspectral imaging technology with sufficiently discriminative spectral and spatial information brings new opportunities for robust facial image recognition. However, hyperspectral imaging poses several challenges including a low signal-to-noise ratio (SNR), intra-person misalignment of wavelength bands, and a high data dimensionality. Many studies have proven that both global and local facial features play an important role in face recognition. This research proposed a novel local feature extraction algorithm for hyperspectral facial images using local patch based low-rank tensor decomposition that also preserves the neighborhood relationship and spectral dimension information. Additionally, global contour features were extracted using the polar discrete fast Fourier transform (PFFT) algorithm, which addresses many challenges relevant to human face recognition such as illumination, expression, asymmetrical (orientation), and aging changes. Furthermore, an ensemble classifier was developed by combining the obtained local and global features. The proposed method was evaluated by using the Poly-U Database and was compared with other existing hyperspectral face recognition algorithms. The illustrative numerical results demonstrate that the proposed algorithm is competitive with the best CRC_RLS and PLS methods.
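As a rough analogue of the patch-wise low-rank idea summarized here, the sketch below forms a low-rank approximation of a local hyperspectral patch by truncating the SVD of its pixel-by-band unfolding; this is a generic stand-in rather than the paper's tensor decomposition, and the rank and patch shape are assumptions.

```python
import numpy as np

def low_rank_patch(patch, rank=3):
    """Low-rank approximation of an (h, w, bands) hyperspectral patch via a
    truncated SVD of its (h*w, bands) unfolding."""
    p = np.asarray(patch, dtype=float)
    h, w, b = p.shape
    u, s, vt = np.linalg.svd(p.reshape(h * w, b), full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]   # keep only the top singular components
    return approx.reshape(h, w, b)
```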