Article

Face recognition using Principal Component Analysis

Authors: Kaur and Himanshi

Abstract

The process of face recognition involves examining the facial features in an image, recognizing those features and matching them to one of the many faces in a database. There are many algorithms capable of performing face recognition, such as Principal Component Analysis, the Discrete Cosine Transform, 3D recognition methods, the Gabor wavelets method, etc. This work focuses on the Principal Component Analysis (PCA) method for face recognition in an efficient manner. There are numerous issues to take into account when choosing a face recognition method, the main ones being accuracy, time constraints, processing speed and availability. With these in mind, the PCA method of face recognition is selected because it is the simplest and easiest approach to implement and has an extremely fast computation time. PCA (Principal Component Analysis) is a process that extracts the most relevant information contained in a face and then tries to build a computational model that best describes it.
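The abstract does not include an implementation; as a rough sketch of the PCA step it describes, the NumPy fragment below builds an eigenface space from a stack of flattened training images and projects a face onto it. The array shapes, the number of components and the helper names (build_eigenfaces, project) are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

def build_eigenfaces(train_images, num_components=20):
    """Compute a PCA face space (eigenfaces) from vectorized training images.

    train_images: array of shape (num_images, num_pixels), one flattened face per row.
    Returns the mean face and the top `num_components` eigenfaces as rows.
    """
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face                  # subtract the average face

    # Work with the small (num_images x num_images) matrix instead of the huge
    # pixel-by-pixel covariance; its eigenvectors map back to pixel space below.
    small = centered @ centered.T / len(centered)
    eigvals, eigvecs = np.linalg.eigh(small)             # eigenvalues in ascending order

    top = np.argsort(eigvals)[::-1][:num_components]     # indices of the largest eigenvalues
    eigenfaces = eigvecs[:, top].T @ centered             # back to pixel space, one per row
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Represent a flattened face by its weight vector in the eigenface space."""
    return eigenfaces @ (face - mean_face)
```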


... Some feature extraction techniques are widely used in machine learning and are worth understanding in detail. These include principal component analysis (PCA) [2], HOG [1], linear discriminant analysis (LDA) [3], and transfer-learning feature extractors such as VGG16 [4]. ...
... In Equation (1), x represents the width of the image in pixels, y represents the height in pixels, and R_img is the resized image. Before the feature extraction process, all images are normalized with the greyscale process of Equation (2). Equation (2) shows the greyscale equation for the normalization process. ...
... Before the feature extraction process, all images are normalized with the greyscale process of Equation (2). Equation (2) shows the greyscale equation for the normalization process. RGB is an additive color model in computer vision. ...
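The excerpt refers to a resize step (Equation (1)) and a greyscale normalization step (Equation (2)) without reproducing the equations themselves. The sketch below shows one common way such preprocessing is done with Pillow and NumPy; the target size and the BT.601 luminance weights (0.299, 0.587, 0.114) are assumptions, since the cited equations are not shown here.

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(64, 64)):
    """Resize an RGB image to a fixed (width, height) and convert it to greyscale.

    The luminance weights below are the usual BT.601 convention, assumed here
    because the excerpt does not reproduce Equation (2).
    """
    img = Image.open(path).convert("RGB").resize(size)     # fixed-size image, as in Equation (1)
    rgb = np.asarray(img, dtype=np.float64)
    grey = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return grey / 255.0                                     # normalize to [0, 1]
```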
Article
Full-text available
Masked face recognition (MFR) is an interesting topic in which researchers have tried to find a better solution to improve and enhance performance. Recently, COVID-19 caused most recognition systems to fail to recognize facial images, since current face recognition cannot accurately capture or detect masked face images. This paper introduces the proposed method, known as histogram-based recurrent neural network (HRNN) MFR, to solve the undetected masked face problem. The proposed method includes the feature descriptor of histograms of oriented gradients (HOG) as the feature extraction process and a recurrent neural network (RNN) as the deep learning process. We have proven that the combination of both approaches works well and achieves a high true acceptance rate (TAR) of 99 percent. In addition, the proposed method is designed to overcome the underfitting problem and reduce computational burdens with large-scale dataset training. The experiments were conducted on two benchmark datasets, RMFD (Real-World Masked Face Dataset) and the Labeled Faces in the Wild Simulated Masked Face Dataset (LFW-SMFD), to validate the viability of the proposed HRNN method.
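As a hedged illustration of the HOG feature-descriptor stage mentioned in this abstract (not the authors' HRNN pipeline itself), scikit-image's hog function turns a greyscale face crop into a fixed-length gradient-orientation histogram; the cell, block, and bin settings below are common defaults rather than values from the paper.

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(face_grey):
    """Return a HOG feature vector for a 2-D greyscale face image."""
    return hog(face_grey,
               orientations=9,            # gradient-orientation bins per cell
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")       # standard block normalization

# Example: a random 64x64 "face" yields a fixed-length descriptor.
features = hog_descriptor(np.random.rand(64, 64))
print(features.shape)
```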
... Face recognition is one area that is always interesting to study. At present, face recognition has been implemented in many areas such as digital authentication systems, health systems, licensing systems, and access control systems [1], [2]. Currently, research on recognizing faces has developed in many directions that are more complex and challenging. ...
... After the face has been detected, the next process is to recognize the face. Facial recognition can be done by various methods such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), local binary patterns, histograms of oriented gradients (HOG) and others [1], [2]. ...
... Kaur and Himanshi [1] also proposed the PCA method with an eigenface approach in their research. Some of the steps taken in the proposed method are reading input images, conducting training, developing datasets, evaluating PCA features, evaluating errors, evaluating Euclidean distance values, searching for minimum values, and recognizing images based on the minimum values. ...
... The smallest Euclidean distance to g is considered the recognized face [12]. Principal component analysis (PCA) [14], [15] is another facial recognition model and works by maximizing the variance captured from the data. PCA extracts unique features from a collection of images and projects them onto a given face space. ...
... PCA extracts unique features from a collection of images and projects them onto a given face space. For frontal face recognition, PCA has the advantage of rapid computation [14]. Fisher's linear discriminant analysis (LDA) [16], [17] is another model that shares a similar prospect with PCA for facial recognition. ...
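A minimal sketch of the matching step these excerpts describe: project a probe face into the PCA face space and accept the gallery identity with the smallest Euclidean distance. It assumes the gallery weight vectors have already been computed (for instance with the earlier eigenface sketch); the optional rejection threshold for unknown faces is an added assumption.

```python
import numpy as np

def recognize(probe_weights, gallery_weights, labels, threshold=None):
    """Return the label of the gallery face whose PCA weight vector is closest
    (in Euclidean distance) to the probe, or None if the best distance exceeds
    `threshold`.

    gallery_weights: array (num_gallery, num_components); labels: list of identities.
    """
    distances = np.linalg.norm(gallery_weights - probe_weights, axis=1)
    best = int(np.argmin(distances))                     # minimum-distance match
    if threshold is not None and distances[best] > threshold:
        return None                                      # treat as an unknown face
    return labels[best]
```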
Article
Full-text available
Deep learning models have been at the forefront of facial recognition because they deliver improved classification accuracy over traditional ones. Regardless, deep learning models require an extensive dataset for training. To significantly cut down on training time and dataset volume, pretrained models have been used, although they are still required to undergo the usual training process for custom facial recognition tasks. This research focuses on an improved facial recognition system that lacks the training and retraining requirements. The system uses an existing deep learning feature extraction model. First, a user stands before a camera-enabled system. After that, the user supplies a unique identification number to fetch a corresponding face image from the database. This process generates two face feature vectors: one from the camera and one retrieved from the database. The cosine distance function determines the similarity value of these vectors. When the cosine distance value falls below a set threshold, the face is recognized and access is granted. If the cosine distance of the two vectors gives a value above this threshold, access is denied. The proposed model performs satisfactorily on publicly available datasets.
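The verification rule summarized in this abstract, comparing the camera feature vector against the database vector with a cosine-distance threshold, reduces to a few lines of NumPy; the threshold value used below is a placeholder, not the paper's.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two feature vectors (0 means identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def verify(camera_vec, database_vec, threshold=0.4):   # threshold is an illustrative value
    """Grant access only when the two embeddings are close enough."""
    return cosine_distance(camera_vec, database_vec) < threshold
```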
... Finally, the face is identified by comparing the obtained feature vector against a pre-existing database which contains the data of all subjects [1]. Even though face recognition technology was conceived about 30 years ago, it still needs improvement due to the complexity of facial features, up to the use of neural networks to increase the accuracy level [2]-[4]. ...
Article
Full-text available
This research presents a face recognition system based on different classifiers that deal with various face positions. The proposed system involves the extraction of features through the VGG-Face-16 deep neural network, which extracts only the essential features of the input images, leading to an improved recognition step and enhanced algorithm efficiency, while the recognition uses the radial basis function (RBF) kernel in a support vector machine (SVM) classifier, and the performance of the system is evaluated. The system is also designed and implemented using other classifiers: K-nearest neighbour (KNN), logistic regression (LR), gradient boosting (XGBoost), decision tree (DT) and Naive Bayes (NB). The proposed algorithm was tested with four face databases: AT&T, PINs Face, Labeled Faces in the Wild (LFW) and a real database. Each database was divided into two groups: one contains a percentage of images used for training and the second contains the remaining images, which were used for testing. The results show that classification by the RBF kernel in SVM has the highest recognition rate when using small, medium and large databases; it was 100% on the AT&T and real databases, while its efficiency appears to be lower on large databases, reaching 96% on the PINs database and 60.1% on the LFW database.
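As a sketch of the classification stage described here (deep feature vectors fed to an RBF-kernel SVM), scikit-learn's SVC can be trained directly on precomputed features. The random placeholder features, the 512-dimensional size, and the hyperparameter values are assumptions; the VGG-Face extraction itself is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder data: rows are deep feature vectors, y holds subject identities.
X = np.random.rand(200, 512)
y = np.repeat(np.arange(20), 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # radial-basis-function SVM
clf.fit(X_train, y_train)
print("recognition rate:", clf.score(X_test, y_test))
```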
... The following steps summarize the PCA algorithm according to (Kaur & Himanshi, 2015): ...
Article
Full-text available
Fruit classification is in demand in fields such as industrial agriculture, where automatic fruit classification from digital images plays a vital role. The classification encounters several challenges because fruit images are captured under different viewing angles, rotations, and illumination poses. In this paper a framework for the recognition and classification of fruits from their images is proposed based on texture features. The proposed system relies on three phases. First, pre-processing: images are resized, filtered, color-converted, and thresholded in order to create a fruit mask used to segment the fruit's region of interest. This is followed by two methods for texture feature extraction: the first utilizes Local Binary Patterns (LBP), while the second uses Principal Component Analysis (PCA) to generate a feature vector for each fruit image. Classification is the last phase; two supervised machine learning algorithms, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM), are utilized to identify and recognize the fruit image classes. Both methods are tested using 1200 fruit images from 12 classes acquired from the Fruits-360 database. The results show that combining LBP with K-NN and SVM yields the best accuracy, up to 100% and 89.44% respectively, while the accuracy of applying PCA with K-NN and SVM reached 86.38% and 85.83% respectively.
... Face recognition can be performed based on eigenvectors computed from face images. To increase the efficiency and reduce the time taken in recognition, Principal Component Analysis (Kaur & Himanshi, 2015; Zafaruddin & Fadewar, 2019) is applied to the eigenvectors and all the irrelevant information is then truncated. A slightly different version of kernel-PCA-based dimensionality reduction on eigenfaces was presented by Kim et al. (2002). ...
Article
Full-text available
Face recognition is an emerging field of research. With the rise of deep learning, face recognition has become efficient and precise, creating new milestones. The performance, accuracy, and computational time of existing schemes can be enhanced by devising a new scheme. In this context, a multiclass classification framework for face recognition using residual network (ResNet) and principal component analysis (PCA) schemes of deep learning with the Dlib library is proposed in this paper. The proposed framework produces a face recognition accuracy of 99.6% and a reduction of computational time of 68.03% using principal component analysis.
... It is clear that face recognition systems are used in many applications. Several methods are used in face recognition, such as Principal Component Analysis, Linear Discriminant Analysis, Independent Component Analysis, Local Binary Patterns, etc. [23]. Among these methods, the Independent Component Analysis (ICA) algorithm achieves a recognition result of 86.7%, which is considered good compared with the Principal Component Analysis (PCA) algorithm on the same sample, where the result is 76.7% [24]. ...
Article
Full-text available
Security in every sector all over the world is a crucial and highly demanded issue for protection against fake users, especially in wireless communication systems. Current wireless communication systems commonly transmit only the users' information; consequently, it is difficult to identify the actual sender and attain the desired security. Face recognition is one of the most popular biometric authentication approaches, which can detect intruders in restricted or high-security areas of the wireless communication sector and help in minimizing fake users. In this paper, we propose a system in which senders transmit message information together with their image to confirm the original sender. A face-recognition algorithm is implemented at the receiver end. When a sender's transmitted information enters the receiver section, the image of the sender is separated from the original message signal and sent to the face-recognition algorithm to be analysed and compared with an existing database of trusted users. An alarm goes off if the user is not recognized.
... Through eigendecomposition, the eigenvectors and eigenvalues of the covariance matrix have been proven to generate the e_i's and λ_i's, where W is a matrix made up of the column vectors w_i stacked one next to the other [19], which gives the covariance matrix as, ...
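The excerpt breaks off before the formula it introduces. Under the usual eigenface notation, with the w_i assumed to be mean-subtracted face vectors (an assumption, since the equation itself is truncated in the excerpt), the relation it appears to be building toward is:

```latex
% w_i: mean-subtracted face vectors, W = [w_1, \dots, w_M] their column-wise stacking
C = \frac{1}{M}\sum_{i=1}^{M} w_i\, w_i^{\mathsf{T}} = \frac{1}{M}\, W W^{\mathsf{T}},
\qquad C\, e_i = \lambda_i\, e_i .
```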
Chapter
Full-text available
The main goal of object detection is to identify and locate one or more effective targets in still or video data. It covers a wide range of techniques, including image processing, pattern recognition, and machine learning. The scope of this study is to detect small vehicles in an uncontrolled environment from aerial photographs using effective pre-processing and deep learning algorithms. The model architecture is divided into two phases: training and detection. Cropping and extraction of the training samples, feature representation, and classification are all part of the training step. The detection phase comprises extraction of the regions of interest, feature extraction, and classification. The tests are carried out using the Vehicle Detection in Aerial Imagery (VEDAI) dataset. In this paper, we explore the feasibility of dimensionality reduction as pre-processing through Principal Component Analysis (PCA) for effective detection of vehicles in aerial imagery; it is used to remove unwanted features and reduce the common misclassifications caused by the ambiguity of small objects. A comparative study between deep learning models such as ResNet50 and MobileNetv1, based on their proficiency in detecting small objects, coupled with PCA pre-processing, provides the observations. ResNet50 gave a classification accuracy of 85%, whereas MobileNetv1 gave a classification accuracy of 76.25%. The experimental results show that when PCA pre-processing is coupled with architectures comprising skip connections, like ResNet50, the misclassification rate of vehicles in aerial imagery is brought down drastically, providing a detection rate comparable to existing benchmarks. Keywords: aerial imagery, vehicle detection, VEDAI, dimensionality reduction, principal component analysis, deep learning, residual networks, ResNet, MobileNet.
... PCA is a feature extraction technique in which dimensionality is reduced by considering the variance of each attribute. The components that do not contribute much are removed in the reduction [23]. Table II shows our selected features, which gave high accuracy with less computation time. ...
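A small scikit-learn sketch of the reduction this excerpt describes, keeping only the components that carry most of the variance; the 95% retained-variance target and the placeholder data are illustrative choices, not values from the cited work.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(300, 100)            # placeholder feature matrix (samples x attributes)

pca = PCA(n_components=0.95)            # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)

print("components kept:", pca.n_components_)
print("variance explained by each:", np.round(pca.explained_variance_ratio_, 3))
```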
Conference Paper
Full-text available
Sign language bridges the communication gap between special and ordinary people and allows them to connect with society. Many studies have proposed image-based or sensor-based models for sign language, whereas this study develops a sign language recognition (SLR) system for Italian sign language using only Myo armband electromyography (EMG) sensors. The sensor keeps the subject comfortable during its use. In the proposed methodology, raw EMG from the Myo armband was denoised using an infinite impulse response (IIR) filter. A combination of 11 features was extracted from various domains after preprocessing of the EMG signals. Subsequently, several prominent classifiers were trained and tested for an accurate system design. The system achieved the highest accuracy of 93.5% using the Linear Discriminant technique. The system was validated using 10-fold cross-validation.
... The kernel principal component analysis algorithm, or Kernel PCA, is an extension of the basic PCA method. One of the main drawbacks of the original algorithm is that it uses a linear decomposition and has difficulty transforming images that have complex nonlinear structures [10]. Its working principle is to increase the dimensionality of the original image in order to extract, in the higher-dimensional space, vectors that describe the original image more completely. ...
Article
Full-text available
The purpose of the paper is to study holistic image transformation methods for authenticating a person's identity from a thermographic image of the face. Within this study, datasets of face images in the far-infrared range (LWIR) have been collected. The novelty of the study consists in the features of the image dataset, which was collected in real conditions that affect the quality of authentication, such as facial expressions, wearing glasses or a medical mask, applying makeup/cosmetics, different illumination and temperature conditions of the environment, and head turns. The methods under study are based on the construction and selection of image features while reducing the dimension and converting the image into another form of representation. These methods are used to solve the problem of distinguishing features in images and authenticating a person's identity from a 2D image of the face, and they allow other computer vision problems to be solved. This paper discusses classical methods of integral image transformation: principal component analysis, kernel principal component analysis, linear discriminant analysis, independent component analysis, truncated singular value decomposition, and the discrete cosine transform. As a measure of the proximity of images, the Euclidean distance between the vectors of image features is used. Testing of the methods was performed on a set of thermograms consisting of 632 thousand images of the faces of 158 people. The F-measure was used as a metric to assess the quality of comparison of the selected methods. As a result of the experiment, the independent component analysis method showed the highest value of the F-measure metric, 0.72. The results of the study can find applications in access control systems to increase the fault tolerance of person authentication. The use of the considered methods is effective in tasks of processing thermographic images for authenticating a person by secondary signs, by the pattern of veins and vessels on the face, and in cases of changes in facial expression and appearance caused by makeup and wearable accessories.
... The method uses the projecting phenomenon of the detected face with a known individual eigenfaces (Jalled, 2017). A set of eigenfaces which are the transformed faces containing the characteristics information are generated (Paul and Al Sumam, 2012;Meher and Meban, 2014;Kaur and Himanshi, 2012). These sets correspond to the input components for the initial training of the algorithm (Wagh et al., 2015). ...
... In [6], the developers proposed the detection of face features from images using databases. The researchers in [7] presented a face recognition system based on Principal Component Analysis (PCA), which gave accurate results, easy implementation, and fast computation time. The authors in [8] developed a human face recognition system using the eigenface approach. ...
Conference Paper
Full-text available
Face detection applications can currently use machine learning and mathematical models to better identify human faces in an image or video. In several types of applications, such as face recognition, biometric processing is used to validate a person's identity from facial information, and different kinds of such systems make use of face detection. This paper presents a face detection and recognition system in which the Viola-Jones process is combined with Haar cascades and the Principal Component Analysis (PCA) technique, proposed to achieve a better accuracy level in the detection of facial regions and the recognition of faces. The findings show that the proposed system performs better compared with other conventional face detection and recognition systems. The investigation has been carried out in a MATLAB environment on real images from the Faces94, Faces95, Faces96, and grimace databases.
... The calculations decrease greatly with this analysis, from the order of the number of pixels in the images to the order of the number of images in the training set [16,17]. ...
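The excerpt alludes to the classic eigenface shortcut: instead of diagonalizing the huge pixel-by-pixel covariance matrix, one diagonalizes the much smaller image-by-image matrix and maps its eigenvectors back into pixel space. The NumPy fragment below demonstrates that equivalence on toy-sized data; the image count and resolution are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: 40 training images of 32x32 pixels.
M, N = 40, 32 * 32
A = np.random.rand(M, N)
A -= A.mean(axis=0)                      # rows are mean-subtracted face vectors

big = A.T @ A                            # N x N (1024 x 1024) pixel-level matrix
small = A @ A.T                          # M x M (40 x 40) image-level matrix
print(big.shape, "vs", small.shape)

# An eigenvector u of the small matrix maps to an eigenvector A.T @ u of the big
# one with the same eigenvalue, so only an M x M eigenproblem has to be solved.
vals, U = np.linalg.eigh(small)
u, lam = U[:, -1], vals[-1]              # leading eigenpair of the small matrix
v = A.T @ u
print(np.allclose(big @ v, lam * v))     # True: v is an eigenvector of the big matrix
```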
Article
Full-text available
Face recognition is a well-known image analysis application in the fields of pattern recognition and computer vision. It utilizes the uniqueness of human facial characteristics for personnel identification. The paper at hand presents a facial recognition system that uses facial features and a Support Vector Machine (SVM) to achieve accurate recognition. The image pre-processing uses histogram equalization together with a median filter. A combination of wavelet transforms and Histograms of Oriented Gradients (HOG) extracts the feature vector, which produces reliable performance. Dimensionality reduction is performed by applying Principal Component Analysis (PCA). Finally, by applying the Support Vector Machine (SVM), face classification is performed. Experimental development was carried out on the Yale database of 165 images from 15 individuals in a MATLAB environment. Testing showed an accuracy of 98.64% with acceptable speed and confirmed the accuracy and robustness of the proposed system.
... Principal Component Analysis is a common technique used for face identification and pattern recognition to show the distinct similarities and differences, as discussed by Akrouf et al. (2009). Kaur and Himanshi (2015) have discussed that PCA is also used to extract the most relevant features of the face. Akariman et al. (2015) and Wu and Lu (2016) have demonstrated the Local Binary Pattern algorithm. ...
Article
Full-text available
Face recognition systems face many challenges due to various environmental factors, background variations, poor camera quality, different illumination, and others. Since twins can be involved in criminal activities, twin identification becomes an essential task. The proposed system is focused on identifying identical twins in still images. A fusion-based approach has been implemented in the proposed system. It combines the features extracted using Principal Component Analysis (PCA), Histograms of Oriented Gradients (HOG), Local Binary Patterns (LBP), Gabor filters, and the distances between facial components. Three types of fusion, decision-level fusion, feature-level fusion and score-level fusion, are used in the proposed approach. Based on the scores generated by these fusions, the twin is identified. In the proposed system, Particle Swarm Optimization is used for the best feature selection and an SVM classifier is used for training and testing the images. The proposed system provides better results when compared with other twin detection techniques.
... We based our system on face recognition and detection techniques. There are many techniques for face recognition and detection, for example, local binary patterns (LBP) [8,9], principal component analysis (PCA) [10,11], a combination of PCA, wavelet, and support vector machines (SVM) [12], local binary pattern histogram (LBPH) [13], independent component analysis (ICA) [14,15], eigenfaces [16], and linear discriminant analysis (LDA) [17,18], SVM [19,20], combining fast discrete curvelet transform (FDCvT) and invariant moments with SVM and deep learning technology [21,22]. Dharpure et al. [23] proposed a system that utilized counting objects techniques for a fast template matching process based on the normalized cross-correlation (NCC) algorithm. ...
Article
Full-text available
A smart student attendance system (SSAS) is presented in this paper. The system is divided into two phases: hardware and software. The hardware phase is implemented based on an Arduino camera, while the software phase is achieved by using image processing with face recognition based on the cross-correlation technique. In comparison with traditional attendance systems, roll call and sign-in sheets, the proposed system is faster and more reliable (because no action is needed by a human being, who by nature makes mistakes). At the same time, it is cheaper when compared with other automatic attendance systems. The proposed system provides a faster, cheaper and more accessible automatic smart student attendance system that monitors and generates attendance reports automatically.
... After entering a random image as input in the face recognition system, it will explore the database and identify the person as output. Usually, a face identification system contains four components [2] as shown in Fig. 1: detection, alignment, feature extraction, and matching, the refining steps are localization and normalization (face detection and alignment) before face identification (extraction and matching) is done [3]. Facial image identification separates the facial region from the background. ...
Conference Paper
Full-text available
One of the complicated and exciting problems in computer vision and pattern recognition is identification using face biometrics. Facial recognition is one such application of biometrics, used in video inspection, biometric authentication, surveillance, and so on. Many techniques for detecting facial biometrics have been studied in the past three years. However, considerations such as shifting lighting, landscape, the nose being farther from the camera, the background being farther from the camera creating blurring, and the noise present render previous approaches inadequate. To address these problems, numerous works with sufficient clarification on this research subject have been introduced in this paper. This paper analyzes the multiple methods researchers use in their various studies to solve different types of problems faced during facial recognition. A new technique is implemented to reduce the feature space to an abstract component subset. Principal Component Analysis (PCA) is used to analyze the features, and the Speeded-Up Robust Features (SURF) technique is used so that eigenfaces, identification, and matching are performed respectively. Thus, we obtain improved accuracy and an almost similar recognition rate in the acquired research results based on the facial image dataset, which has been taken from the ORL database.
... The main advantage of PCA is that it greatly reduces the dimension of the data and allows a test picture to be recognized. The trained images are not saved as image files but as their measurements, which are used to project any trained image onto the set of individual faces [16]. The appropriate data must be collected and processed effectively to obtain the measurements from each picture. ...
Article
In recent years, machine learning has grown rapidly as a research area; however, people express their emotions in individual ways and, under the influence of lighting, background, and other factors, facial expression recognition still faces several problems. This paper focuses mainly on visually challenged persons, who struggle to identify the emotions of the people in front of them due to the lack of non-visual information in their nearby environment. The proposed system collects facial features to check the influence of factors such as body movements or voice signals on the results of emotion recognition, and uses video sequences in addition to still images in the emotion recognition system. The facial features are extracted using a Haar classifier, which detects the face and then obtains the characteristic values. Six specific emotion categories are obtained through neural network classifier training. The proposed experiment achieves reasonable accuracy, depending on the test set and test emotions.
... It is a collection of eigenface vectors. In addition, the covariance matrix is calculated from several parts of a collection of training face images [14]. The ability to extract this feature can be used to recognize facial images. ...
Article
Full-text available
Facial recognition is one of the most successful applications of image analysis and understanding. This paper presents a Principal Component Analysis (PCA) and eigenface method for facial feature extraction. Several performance metrics, i.e. accuracy, precision, and recall, are taken into account as the baseline of the experiment. Furthermore, two public data sets, namely SoF (Specs on Faces) and MIT CBCL Facerec, are incorporated in the experiment. Based on our experimental results, PCA performed well in terms of the accuracy, precision, and recall metrics, with values of 0.598, 0.63, and 0.598, respectively.
... However, face recognition suffers from errors due to changes in illumination conditions, resolution and posing angles [3]. Various face recognition methods such as Linear Discriminant Analysis [4,5], Artificial Neural Networks [6], Eigenfaces [6][7][8], Independent Component Analysis [9], Principal Component Analysis (PCA) [10,11] and Fisherfaces [8] have been developed. ...
Conference Paper
Full-text available
Video surveillance systems continue to grow in importance and use. They monitor the behavior and activities of people using electronic equipment. Consequently, video surveillance has emerged as a main component in ensuring public security at airports, hospitals, banks, government agencies, casinos and also educational institutions, and it therefore has great potential for enhancing security in educational institutions. However, real-time detection and recognition of a human face from video sequences is a difficult task due to background variations, changes in facial expression and illumination intensity. The ability to automatically recognize faces in surveillance video is highly important in detecting an intruder or suspicious person. Face detection and recognition are the two main stages of the surveillance process. Facial recognition has gained a lot of significance in commercial, finance and security applications. Various face recognition techniques have been developed to improve the accurate recognition of the face in an image. However, the existing techniques suffer from variations in illumination intensity, facial angles, low resolution, improper focus and light variations. This paper provides a survey of face detection and recognition techniques. The survey presents a comparative analysis of recent face detection and recognition techniques along with their merits and also discusses their applicability in the education sector. This information is very important in choosing which techniques would best be applied in educational institutions, taking into consideration the financial and technological constraints they operate under.
... Facial recognition techniques also include Principal Component Analysis (PCA), the Discrete Cosine Transform (DCT), the Gabor wavelets method, etc. Kaur and Himanshi in [24] have utilized the PCA method for facial recognition as it provides better recognition rates and lower computational complexity. By reducing the dimensionality of the data, a large amount of computation time can be saved, which is critical when searching for faces in vast image datasets. ...
Article
Full-text available
Video recorders record the output of each security camera. After an incident, the video footage can be used as evidence by locating a suspect or criminal. A manual scan of the video footage requires a considerable amount of manpower and time, a luxury which cannot be afforded when tracking down a Person Of Interest (POI). An automated system is proposed in this research which aims at finding the desired POI in the available volume of video data quickly and accurately. It is designed to go through all the available videos and detect the POI using facial recognition. Thereafter it creates a video montage of all the desired frames and incorporates time and location information to produce a path map followed by the POI. The proposed system reduces the human burden, human error and the time taken when searching for the POI manually. Validation has also been performed on various video data collected by ourselves. The results show that the proposed system is able to correctly identify a POI with an accuracy of 86 percent for video data captured in a constrained environment. Videos captured by a cell phone in an unconstrained environment result in an accuracy of around 80 percent. Real video tested on our university campus revealed that the proposed system is capable of generating tracking information for a POI effectively.
... In face recognition, facial feature extraction is the key to recognizing examples accurately and creating more effective systems. There are a lot of techniques for extracting facial features, for example, Principal Component Analysis (PCA) [1], edge contour feature analysis [2], Elastic Bunch Graph Matching [3], etc. Many classification and recognition techniques, such as KNN [4], neural network classification [5] and Support Vector Machine (SVM) [6], have also been proposed. ...
Article
In this paper, we propose a new face recognition method combining the Vector Quantization (VQ) method and a Support Vector Machine (SVM) classifier. The VQ method is used as a feature extractor and the SVM classifier for feature classification. By applying low-pass filtering and VQ processing to a facial image, a histogram including effective facial features is generated, which is called a VQ histogram. After dividing the VQ histograms into a training set and a testing set, classifiers are trained with the training examples (training histograms) using the Gradient Descent Method (GDM). The testing examples (testing histograms) can then be tested with the optimal classifiers for face recognition. We use the publicly available ORL face database, which consists of 400 images of 40 individuals, for the evaluation of recognition accuracy. Experimental results show that the filter size affects the recognition accuracy. The recognition rate increases with an increase in the ratio of training examples to testing examples, and a maximum recognition rate of 98.0% is obtained.
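A rough sketch of the VQ-histogram idea this abstract summarizes: learn a small codebook over image blocks with k-means and describe a face by the histogram of codeword indices. The block size, codebook size, and the use of scikit-learn's KMeans are assumptions; the paper's low-pass filtering and GDM-trained classifier are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def image_blocks(img, block=4):
    """Split a 2-D greyscale image into non-overlapping block x block patches (as rows)."""
    h, w = img.shape
    img = img[: h - h % block, : w - w % block]
    patches = img.reshape(img.shape[0] // block, block, img.shape[1] // block, block)
    return patches.transpose(0, 2, 1, 3).reshape(-1, block * block)

def vq_histogram(img, codebook):
    """VQ histogram: relative frequency of each codeword over the image's blocks."""
    codes = codebook.predict(image_blocks(img))
    hist = np.bincount(codes, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Fit a codebook on blocks pooled from (placeholder) training faces.
train_faces = [np.random.rand(64, 64) for _ in range(10)]
codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack([image_blocks(f) for f in train_faces]))

print(vq_histogram(train_faces[0], codebook).shape)   # one 32-bin histogram per face
```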
... The recognition accuracy rate achieved was 86%, which is higher than that of other existing methods. In [49], a method for facial recognition based on PCA was proposed. The main aim of that study was to take into account considerations such as accuracy, strict time limitations, high processing speed, and availability. ...
Article
Full-text available
A fast automated biometric solution has been proposed to satisfy the future border control needs of airports resulting from the rapid growth in the number of passengers worldwide. Automated border control (ABC) systems handle the problems caused by this growth, such as congestion at electronic gates (e-gates) or delays in the planned arrival schedules. Different modalities, such as face, fingerprint, or iris recognition, will be used in most of the ABC systems located at airports in the European/Schengen areas. Because facial recognition is the modality that travelers consider most acceptable, it was decided to include this modality in all second generation passports. Face recognition systems, installed in small kiosks inside the e-gates, require high quality facial images to allow high performance and efficiency. Accurate face recognition algorithms, which should be invariant to non-idealities, such as changes in pose and expression, occlusions, and changes in lighting, are also required for these systems. In this paper, a review of the most important face recognition algorithms described in the literature that are invariant to these non-idealities and that can be used in ABC e-gates is presented. A comparative analysis of the most common ABC e-gates located at the different airports is provided. In addition, the results of an experimental evaluation of a face recognition system when halogen, white LEDs, near infra-red, or fluorescence illumination was used, which was conducted in order to determine which type of illumination is optimal for use in ABC e-gates, are presented. To conclude, improvements that could be implemented in the near future in ABC face recognition systems are described.
Chapter
Face recognition technology is a technique that recognizes and authenticates individuals based on their unique facial features. It has various applications, including security, access control, identity verification, and social media tagging. These systems use algorithms to analyze facial traits and create a facial template for comparison with a database of known faces. Advances in machine learning and computer vision have improved the accuracy of face recognition technology. However, concerns about the privacy and security implications of biometric data collection and storage have arisen. This paper provides an overview of the history, techniques, algorithms, applications, performance evaluation metrics, challenges, and future directions of face recognition technology.
Chapter
In this paper, the geometry of the decision border between affine sub-spaces is investigated. Affine sub-spaces are used as prototypes in machine learning approaches such as "Tangent Learning Vector Quantization" and "Tangent Distance Kernel for Support Vector Machines" for the classification of data. These models assume that there are class-invariant manifolds that can be locally approximated by an affine space of similar dimensions. However, in practice this assumption may not always be true, because the affine spaces compete to provide a suitable local metric that leads to proper decision boundaries for an optimal separation and classification in the feature space. Therefore, considering affine spaces together with the corresponding decision borders is necessary when drawing conclusions about the geometry of the classification problem. An understanding of the type of decision border between two affine sub-spaces can be used to modify related learning methods, prevent undesirable scenarios, and gain insights about the geometry of the data set. We will show that the decision borders, which are basically quadratic surfaces, can be affine spaces, hyper-cones, or hyperbolic paraboloids embedded in the feature space. Each type of border suggests a relative formation of data points. We will also show when a linear decision border occurs.
Article
Masked face recognition has sparked interest among researchers seeking a better algorithm to improve the performance of face recognition applications, especially during the recent COVID-19 pandemic. This paper introduces a proposed masked face recognition method known as the Principal Random Forest Convolutional Neural Network (PRFCNN). This method utilizes the strengths of Principal Component Analysis (PCA) in combination with the Random Forest algorithm in a Convolutional Neural Network to pre-train the masked face features. PRFCNN is designed to assist in extracting more salient features and to prevent overfitting problems. Experiments are conducted on two benchmark datasets, RMFD (Real-World Masked Face Dataset) and the LFW Simulated Masked Face Dataset, using various parameter settings. The experimental result, with a minimum recognition rate of 90% accuracy, promises the effectiveness of the proposed PRFCNN over other state-of-the-art methods.
Article
Full-text available
The Zambia Association of Public Universities and Colleges (ZAPUC) hosted an international conference from 29th April to 3rd May 2018 at the Avani Hotel, located on the banks of the Victoria Falls in Zambia's tourist capital, Livingstone. The conference brought together educationalists, researchers, policy makers, government officials and industry executives to reflect on the role of universities and colleges in fostering sustainable national development. The conference gave the participants the opportunity to gauge how higher education could be harnessed into being a key contributor to the realization of the sustainable development goals within Agenda 2030 and beyond. The main objective of the conference was to provide a platform and stimulate discussion on the role of higher education in sustainable national development, with particular reference to the Sub-Saharan Africa region. It was expected that the conference would bring out issues that our respective governments and higher institutions of learning need to consider for the repositioning and transformation of higher education to effectively contribute to sustainable national development. The theme of the conference was "Repositioning the role of Universities and Colleges in Sustainable National Development". The outcomes of the conference were: 1. Enhanced sharing of good practices, research results and collaboration initiatives in solving challenges that cross borders through the unlocking and harnessing of new knowledge, as well as building cultural and political understanding, resulting in the modelling of environments that promote dialogue and debate and positively contribute to national development. 2. Forging of mutually beneficial partnerships and collaboration networks among higher education institutions and with industry and government, resulting in the adoption of new initiatives for co-financing of higher education and implementation of projects, thereby complementing the limited government funding.
Article
This paper presents an ensemble face recognition system which makes use of a novel local descriptor called the Dense Local Graph Structure (D-LGS), which is derived from the symmetric LGS and uses an additional graph structure alongside its own local graph structure. This additional local graph structure is generated by finding additional corner pixel points through bilinear interpolation of neighbourhood pixels. These corner pixels lead to the most stable features and to information related to local deformation of the image. In the proposed ensemble system, three classifiers, namely K-nearest neighbour, Chi-square and correlation coefficient, are used. Further, the proposed approach fuses the decisions obtained from the individual classifiers through the OR rule, majority voting and the AND rule. To evaluate the performance of the proposed ensemble system, the experiment is conducted with three face databases: AT&T (formerly The ORL Database of Faces), UFI and the LFW face database. The ensemble face recognition system using the novel dense local graph structure reaches an accuracy of 100% on AT&T, 99.3488% on UFI and 87.3372% on the LFW face database. Further, the templates of D-LGS are optimized using a Genetic Algorithm (GA) to address the 'curse of dimensionality', and the reduced number of templates gives accuracies of 100% on AT&T and 99.2165% on the LFW face database.
Chapter
To precisely re-identify a person is a daunting task due to various conditions such as pose variation, illumination variation, and uncontrolled environments. The methods addressed in related work were insufficient for correctly identifying the targeted person. There has been a lot of exploration in the domains of deep learning, convolutional neural networks (CNNs) and computer vision for extracting features. In this paper, the FaceNet network is used to detect faces and extract facial features, and these features are used for re-identifying the person. The accuracy of FaceNet is compared with the Histogram of Oriented Gradients (HOG) method. The Euclidean distance is used for checking the similarity between faces.
Chapter
In recent years, deep learning has become a very prevalent technology in face recognition. Google came up with a deep convolutional neural network called FaceNet which performs face recognition using only 128 bytes per face. As claimed by Google, FaceNet attained nearly 100-percent accuracy on the widely used Labeled Faces in the Wild (LFW) dataset. But in the case of low-resolution face images it is the other way round. This low-resolution challenge occurs in many existing face recognition algorithms, making satisfactory performance hard to achieve. The goal of this paper is to present the results obtained after evaluating the performance of FaceNet on low-resolution face images compared to high-resolution face images.
Conference Paper
In recent years, there have been very promising applications of biometric systems to improve access control systems and the security of data recording. Of all the biometric systems available, fingerprint verification is the most dominant in commercial applications due to its excellent performance and low cost. In this study, we attempt to implement a fingerprint and face detection and recognition biometric system for managing professors' attendance, thus replacing the current manual system. The proposed system provides faculty face recognition using the Viola-Jones face detection method and Principal Component Analysis (PCA), integrated with fingerprint verification using an Arduino. Test results show improved attendance system accuracy and an automated faculty attendance system.
Conference Paper
Full-text available
Face recognition has a major impact on security measures, which makes it one of the most appealing areas to explore. To perform face recognition, researchers adopt mathematical methods to develop automatic recognition systems. As a face recognition system has to perform over a wide range of databases, dimension reduction techniques become a prime requirement to reduce time and increase accuracy. In this paper, face recognition is performed using Principal Component Analysis followed by Linear Discriminant Analysis based dimension reduction techniques. The sequence of this paper is: preprocessing, dimension reduction of the training database set by PCA, extraction of features for class separability by LDA, and finally testing by nearest-mean classification techniques. The proposed method is tested on the ORL face database. It is found that the recognition rate on this database is 96.35%, showing the efficiency of the proposed method compared with previously adopted face recognition methods.
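The PCA-then-LDA sequence described in this abstract maps naturally onto a scikit-learn pipeline, with a nearest-centroid classifier standing in for the nearest-mean step. The placeholder data, component count and split ratio below are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split

# Placeholder ORL-like data: 40 subjects x 10 flattened images each.
X = np.random.rand(400, 10304)
y = np.repeat(np.arange(40), 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(
    PCA(n_components=50),              # dimension reduction of the training set
    LinearDiscriminantAnalysis(),      # class-separability projection
    NearestCentroid())                 # nearest-mean classification
model.fit(X_train, y_train)
print("recognition rate:", model.score(X_test, y_test))
```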
Conference Paper
Full-text available
Principal Component Analysis (PCA) is one of the most widely used subspace projection techniques for face recognition. In subspace methods like PCA, feature selection is fundamental to obtaining better face recognition. However, the problem of finding a subset of features from a high-dimensional feature set is NP-hard. Therefore, to solve the feature selection problem, heuristic methods such as evolutionary algorithms are gaining importance. In many face recognition applications, due to the small sample size (SSS) problem, it is difficult to construct a single strong classifier. Recently, ensemble learning in face recognition has been gaining significance due to its ability to overcome the SSS problem. In this paper, the NP-hard problem of finding the best subset of the extracted PCA features for face recognition is solved by using the differential evolution (DE) algorithm and is referred to as FS-DE. The feature subset is obtained by maximizing the class separation in the training data. We also present an ensemble-based approach for face recognition (En-FR), where different subsets of PCA features are obtained by maximizing the distance between subsets of classes of the training data instead of the whole set of classes. The subsets of the classes are obtained by bagging and overlap each other. Each selected subset of the PCA features is used for face recognition and all the outputs are combined by simple majority voting. The proposed algorithms, FS-DE and En-FR, are evaluated on four well-known face databases and the performance is compared with the PCA and Fisher's LDA algorithms.
Article
Full-text available
Automatic facial expression analysis is an interesting and challenging problem, and impacts important applications in many areas such as human–computer interaction and data-driven animation. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. In this paper, we empirically evaluate facial representation based on statistical local features, Local Binary Patterns, for person-independent facial expression recognition. Different machine learning methods are systematically examined on several databases. Extensive experiments illustrate that LBP features are effective and efficient for facial expression recognition. We further formulate Boosted-LBP to extract the most discriminant LBP features, and the best recognition performance is obtained by using Support Vector Machine classifiers with Boosted-LBP features. Moreover, we investigate LBP features for low-resolution facial expression recognition, which is a critical problem but seldom addressed in the existing work. We observe in our experiments that LBP features perform stably and robustly over a useful range of low resolutions of face images, and yield promising performance in compressed low-resolution video sequences captured in real-world environments.
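As an illustration of the basic LBP representation this abstract evaluates (not the Boosted-LBP variant), scikit-image's local_binary_pattern produces a per-pixel code map whose histogram serves as the feature vector; the neighbour count, radius, and use of a single global histogram are simplifying assumptions, since LBP face descriptors are usually built from concatenated region histograms.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_grey, P=8, R=1):
    """Uniform-LBP histogram of a 2-D greyscale face image."""
    codes = local_binary_pattern(face_grey, P, R, method="uniform")
    n_bins = P + 2                                  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Example on a random 8-bit "face" image.
face = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(lbp_histogram(face))
```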
Conference Paper
Full-text available
Principal component analysis (PCA) and Linear Discriminant Analysis (LDA) techniques are among the most common feature extraction techniques used for the recognition of faces. In this paper, two face recognition systems are developed: one based on PCA followed by a feedforward neural network (FFNN), called PCA-NN, and the other based on LDA followed by an FFNN, called LDA-NN. The two systems consist of two phases: the PCA or LDA preprocessing phase, and the neural network classification phase. The proposed systems show improvement in the recognition rates over the conventional LDA and PCA face recognition systems that use a Euclidean distance based classifier. Additionally, the recognition performance of LDA-NN is higher than that of PCA-NN among the proposed systems.
Conference Paper
Full-text available
Face recognition, the art of matching a given face to a database of faces, is a non-intrusive biometric method that dates back to the 1960s. Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them. The paper presents a new method for face recognition that can cope with different lighting conditions and different distortion levels in facial images. This method relies on a variation of the Principal Component Analysis (PCA) technique. The algorithm extracts the eigenvalues and eigenvectors from the images. It performs the economy-size singular value decomposition to obtain a unitary matrix, which is used for recognition. The images are recognized based on the minimum distance: the system finds the closest match in the database to the incoming image. The system uses the Olivetti face database as the face image database, which contains 10 images of each person in the group. The proposed system takes a picture that is not included in the database and matches it to a picture of the same person within the image database. Experimental results demonstrate that the proposed approach can efficiently recognize human faces. This system satisfactorily deals with the problems encountered when using other face recognition systems. The algorithm achieves 93.7% or higher recognition performance. Successful results were obtained in different situations where images were taken under different lighting conditions. The proposed method reduces the computational load. In comparison with the traditional use of PCA, the proposed method gives better recognition accuracy and discriminatory power.
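A minimal NumPy sketch of the economy-size singular value decomposition mentioned in this abstract and of minimum-distance matching against the training set; the variable names, image sizes and exact matching rule are assumptions rather than the paper's procedure.

```python
import numpy as np

# Rows of `faces` are flattened training images (placeholder data).
faces = np.random.rand(50, 4096)
mean_face = faces.mean(axis=0)
A = faces - mean_face

# Economy-size SVD: U is 50x50 and Vt is 50x4096, far smaller than the full 4096x4096 factors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

train_weights = A @ Vt.T                  # each training face as weights in the face space

def match(probe):
    """Index of the training face closest (in Euclidean distance) to the probe image."""
    w = (probe.ravel() - mean_face) @ Vt.T
    return int(np.argmin(np.linalg.norm(train_weights - w, axis=1)))
```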
Conference Paper
This paper provides an example of face recognition using the SIFT-PCA method and the impact of a graph-based segmentation algorithm on the recognition rate. Principal component analysis (PCA) is a multivariate technique that analyzes face data in which observations are described by several inter-correlated dependent variables. The goal is to extract the important information from the face data and to represent it as a set of new orthogonal variables called principal components. The paper presents a proposed methodology for face recognition based on preprocessing face images using a segmentation algorithm and the SIFT (Scale Invariant Feature Transform) descriptor. The algorithm has been tested on 50 subjects (100 images). The proposed method was first tested on the ESSEX face database and then on our own segmented face database using SIFT-PCA. The experimental results show that segmentation in combination with SIFT-PCA has a positive effect on face recognition and accelerates the PCA recognition technique.
Conference Paper
The human face is one of the most popular characteristics that can be used in a biometric security system to identify or verify a user. The face is an acceptable biometric modality because it can be captured from a distance, even without physical contact with the user being identified; thus the identification or verification does not require the cooperation of the user. Because of these benefits, recognition systems based on the human face are used for a wide variety of applications. However, the crucial task is still to provide reliable recognition accuracy, which is a challenging problem under real-world conditions. Many methods have been proposed, but only a few of them are used in real-world applications. Even the most recent face recognition algorithms still face problems when there is non-ideal imaging, varying illumination, occlusions in the scene or noise from the cameras used. We address these issues within The Next-Generation Hybrid Broadcast Broadband project (HBB-Next) [1]. In this project, we also deal with the development of a face recognition application, as part of a multimodal interface, which will interact with the HBB-TV user. In this paper we provide a comparative study of several conventional face recognition methods (PCA a.k.a. Eigenfaces, RBF) and novel kernel methods (KPCA, GDA and SVM) that are suitable for working properly under these conditions. We evaluate the influence of noise and partial occlusion on face recognition accuracy. We focus on occlusions of the eyes and eyebrows, as these are the most significant features of a face. Face recognition rates achieved by machine learning methods are compared with the accuracy achieved by human perception alone. In addition, we explore these methods for cases where only a few (up to 4) training samples are available.
Conference Paper
Principal component analysis (PCA) and its improved models have found wide application in the pattern recognition field. PCA is a common method applied to dimensionality reduction and feature extraction. Its goal is to choose a set of projection directions to represent the original data with the minimum MSE. In this paper, we propose a Principal Vectors Subspace (PVS) for face recognition. Firstly, we use PCA to extract the vector of each dimension, so we obtain a subspace that includes the principal vectors of each dimension. Then we use a basis of this subspace to represent a test sample and classify it with a Nearest Neighbor classifier. In order to evaluate the performance of our method, we make a comparison of PCA, KPCA and our method on the ORL and AR databases. The experimental results show that our method achieves good performance.
Conference Paper
Face recognition is a biometric analysis tool that has enabled surveillance systems to detect and recognize humans without their cooperation. In this scheme face recognition is done by Principal Component Analysis (PCA). Face images are projected onto a face space that encodes the best variation among known face images. The face space is defined by the eigenfaces, which are eigenvectors of the set of faces and which may not correspond to general facial features such as eyes, nose, or lips. The eigenface approach uses PCA for the recognition of the images. The system works by projecting a pre-extracted face image onto a set of face-space vectors that represent significant variations among known face images. Computers that detect and recognize faces could be applied to a wide variety of practical applications including criminal identification, security systems, identity verification, etc.
Conference Paper
Multi-resolution analysis has been known to be effective for face recognition; however, most approaches only utilize the scale and position information of the different scales of the decomposed image, and only a few approaches utilize directional information. To investigate the potential of shearlet directionality, this paper presents a new method for face description and recognition using the shearlet transform and principal component analysis. Motivated by multi-resolution analysis, face images are processed by the shearlet transform, and directional information is then exploited along with the conventional scaling and translation parameters. Finally, face features are extracted by principal component analysis. Experimental results on the ORL and FERET face databases show that the proposed method achieves high face recognition rates.
Conference Paper
Machine-automated face recognition has gained significant importance due to its scientific challenges and its potential applications. However, most of the systems designed to date can only successfully recognize faces when images are obtained under constrained conditions. The success of face recognition systems relies on a variety of information in images of human faces such as pose, facial expression, occlusion and the presence or absence of structural components. The proposed model targets an approach for the recognition of expression-variant faces, since there are very few face recognition solutions that address this problem and it is a key research area in face recognition. This model proposes an approach to face recognition where the facial expression in the training image and in the testing image diverge and only a single sample image per class is available to the system. The input to the system is a frontal face image with a neutral expression and identical background, where the subject's hair is tied away from the face. The proposed model is based on the Principal Component Analysis approach. This approach has been applied to a set of images in order to extract a set of eigen-images known as eigenfaces, and the weights of this representation are used for recognition. For the classification task, the Euclidean distance metric has been used to find the distance to the weight vectors associated with each of the training images. When tested with eight subjects and six basic expressions, the overall recognition rate was 89% for trained faces.
Article
The problem of face recognition using Laplacian pyramids with different orientations and independent components is addressed in this paper. The edginess-like information is obtained by using Oriented Laplacian of Gaussian (OLOG) methods with four different orientations (0°, 45°, 90°, and 135°); preprocessing is then done using Principal Component Analysis (PCA) before obtaining the independent components. The independent components obtained by ICA algorithms are used as feature vectors for classification. The Euclidean distance (L2) classifier is used for testing the images. The algorithm is tested on two different databases of face images for variations in illumination, facial expressions and facial poses up to a 180° rotation angle.
Conference Paper
This paper describes a novel approach to classifying humans on the basis of their compressed face images. The compression of the face images is performed using the Discrete Wavelet Transform (DWT), while the classification encompasses the use of Principal Component Analysis (PCA). The classification technique utilizes PCA in a somewhat different way: only the first principal component out of 92 is used as the feature vector (since the image size is 112×92), yielding an improved result of 87.39%. The Euclidean distance is used as the distance metric. Finally, our results are compared with our previous research on classifying uncompressed images.
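A hedged sketch of the DWT compression step this abstract describes, using PyWavelets: a single-level 2-D Haar transform whose approximation sub-band quarters the image before any PCA step. The wavelet choice and the discarding of the detail sub-bands are assumptions.

```python
import numpy as np
import pywt

def dwt_compress(face_grey, wavelet="haar"):
    """Single-level 2-D DWT; keep only the approximation sub-band (a quarter-size image)."""
    cA, (cH, cV, cD) = pywt.dwt2(face_grey, wavelet)
    return cA

face = np.random.rand(112, 92)             # ORL-sized image, as in the abstract
compressed = dwt_compress(face)
print(face.shape, "->", compressed.shape)  # (112, 92) -> (56, 46)
```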
Omelina Ľuboš, Oravec Miloš and Pavlovičová Jarmila
  • Banjozef Feder
Classification of compressed human face images
  • Zahid Riaz
  • Arif Giggiti
  • Zulfqar Ali
Face Detection and Recognition (Theory and Practice)
  • Eyal Arubas