Performance Analysis of Face Recognition using Eigenface Approach

2019 International Seminar on Application for Technology of Information and Communication (iSemantic)
Ibnu Utomo Wahyu Mulyono
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
ibnu.utomo.wm@dsn.dinus.ac.id
Eko Hari Rachmawanto
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
eko.hari@dsn.dinus.ac.id
De Rosal Ignatius Moses Setiadi
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
moses@dsn.dinus.ac.id
Amiq Fahmi
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
amiq.fahmi@dsn.dinus.ac.id
Ajib Susanto
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
ajib.susanto@dsn.dinus.ac.id
Muljono
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
muljono@dsn.dinus.ac.id
Abstract— Eigenface is a principal component analysis (PCA) based algorithm for recognizing faces. It reduces dimensionality and finds the best vectors for distributing facial images in the face space. The method has been widely implemented in previous research to recognize human face images, not only under normal conditions: PCA has also been shown to recognize images with varied expressions properly, and even to handle more challenging cases such as faces after plastic surgery, in combination with facial image reconstruction techniques. This research examines the performance of the PCA-eigenface method in recognizing human face images from several databases that each pose their own challenges, such as poor illumination of the facial images, significant variations in expression, and the use of accessories such as glasses. The measured accuracy varies from 100% to 67% across the databases, with an average recognition rate of more than 85%.
Keywords—Eigenface, Euclidean distance, Face recognition, JAFFE, PCA, Yale
I. INTRODUCTION
Face recognition is an area that remains interesting to study. At present, face recognition is implemented in many ways, such as digital authentication systems, health systems, licensing systems, and access control systems [1], [2]. Current research on face recognition has developed in directions that are more complex and challenging. Beyond recognizing identities, facial expressions are also recognized, which is useful for social communication [3], [4]. Other research combines recognition technology with the reconstruction of facial images damaged by random occlusion [5]. Face recognition has even been performed on images of faces that have undergone plastic surgery, as in research [6].
The first process carried out before recognizing a face is face detection. Once the face has been detected, the next process is to recognize it. Face recognition can be done by various methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), local binary patterns, histograms of oriented gradients (HOG), and others [1], [2]. PCA is one of the most popular methods and is still used today. Eigenface is an algorithm in PCA that is used to recognize faces; it reduces dimensionality in the face space [7]. Many studies recognize faces using the eigenface approach, as in research [5]–[10], where the eigenface algorithm has proven able to recognize human faces with high accuracy, even in more challenging cases such as varied facial expressions, faces before and after surgery, and in combination with the reconstruction of damaged facial images. Therefore, in this research, the performance of the eigenface method is tested and analyzed on facial images taken from public databases that have been widely used in previous research.
II. RELATED WORK
Kshirsagar et al. [8] proposed recognizing facial images using the eigenface approach, chosen for its algorithmic simplicity, computational speed, and learning ability. The steps of their method are: acquire the image dataset, compute the eigenfaces corresponding to the highest eigenvalues, compute the corresponding distribution in each individual's face space, and classify the weight pattern to decide whether it is a human face. In this way, face recognition accuracy of more than 85% was achieved.
Kaur and Himanshi [1] also proposed the PCA method with an eigenface approach. The steps of their proposed method are: read the input images, conduct training, develop the dataset, evaluate the PCA features, evaluate the errors, evaluate the Euclidean distance values, search for the minimum value, and recognize the image based on that minimum. The resulting recognition accuracy is around 99%.
Johnson and Savakis [5] proposed a modified eigenface approach for recognizing human faces that combines L1-PCA with a greedy search technique. In their research, the method was used to recognize images damaged by random occlusion and subsequently to reconstruct them. Across all the datasets used, 20% of the data was occluded with noise, and that 20% was then reconstructed based on recognition over the entire dataset.
George et al. [6] also proposed the PCA method for recognizing faces, this time on both original face images and face images that have undergone plastic surgery. The algorithm proved effective here as well, recognizing facial images with an accuracy above 80%.
Based on previous research, it is evident that the eigenface algorithm in the PCA method has many advantages in computational speed and accuracy when recognizing facial images; even more challenging facial images are still recognized with relatively very good accuracy. Therefore, this research also proposes to test the eigenface algorithm on face images from several databases with certain levels of difficulty, including images of faces wearing accessories such as glasses.
III. RESEARCH METHODOLOGY
This section describes the research stages carried out in this work: a face recognition process based on the PCA-eigenface method, followed by a measurement of recognition accuracy. The entire research process is carried out using Matlab 2015a. Fig. 1 describes the detection and face recognition process using the PCA-eigenface method.
Fig. 1. PCA-Eigenface method used
A. Input Image
The input images used in this study are drawn from three public face image databases that are used as standards in face recognition research. This makes it easy to compare results in subsequent research.
B. Preprocessing
At this stage, all images that are not already grayscale are converted to grayscale. One goal is to reduce the computational load, because the bit depth of a grayscale image is only one-third that of an RGB color image, namely 8 bits. In addition, two of the face image databases already use grayscale imagery, so the remaining database needs to be converted to the same image type. RGB colors are converted to grayscale using formula (1):

Y = 0.2989 * R + 0.5870 * G + 0.1140 * B  (1)
C. Creating Image Vector
In this section, every face image, stored as a 2-dimensional matrix, is converted into a 1-dimensional vector. To simplify this process, we use the reshape() function in Matlab, as written in formula (2):

T = reshape(I, r * c, 1)  (2)

where r is the number of rows and c the number of columns of the image I.
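A NumPy equivalent of the vectorization in formula (2) can be sketched as follows; order='F' reproduces Matlab's column-major stacking, and the image dimensions are made up for illustration:

```python
import numpy as np

r, c = 4, 3                              # hypothetical image dimensions (rows, columns)
image = np.arange(r * c).reshape(r, c)   # stand-in for a grayscale face image

# Matlab's reshape(I, r*c, 1) stacks columns; order='F' mimics that ordering.
vector = image.reshape(r * c, 1, order='F')

print(vector.shape)  # (12, 1)
```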
D. Mean Image
Calculate the mean image using formula (3) [1]:

M = (1/nT) Σ Ti, i = 1, ..., nT  (3)

where nT is the number of training image vectors.
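Formula (3) reduces to a column-wise average when the training vectors are stacked as columns of one matrix; a minimal NumPy sketch with made-up dimensions:

```python
import numpy as np

nT = 4                                         # hypothetical number of training images
rc = 6                                         # pixels per image vector
T = np.random.default_rng(0).random((rc, nT))  # columns are training image vectors

# Formula (3): M = (1/nT) * sum_i T_i, i.e. the mean over the columns of T.
M = T.mean(axis=1, keepdims=True)
print(M.shape)  # (6, 1)
```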
E. Subtract Mean
Calculate the S matrix (Si), obtained by subtracting the mean image vector from each image vector, using formula (4) [1]:

Si = Ti − M  (4)
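Formula (4) applied to all training vectors at once is a single broadcast subtraction in NumPy; the data here is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((6, 4))                 # training image vectors as columns
M = T.mean(axis=1, keepdims=True)      # mean image from the previous step

# Formula (4): S_i = T_i - M for every column, via broadcasting.
S = T - M

# After subtraction, each row of S averages to zero across the training set.
```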
F. Calculating covariance matrix
Use formula (5) to calculate the covariance matrix (C) [1]:

C = (1/nT) Σ Si Siᵀ, i = 1, ..., nT  (5)
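The sum of outer products in formula (5) collapses into one matrix product; a NumPy sketch on toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.random((6, 4)) - 0.5           # mean-subtracted image vectors as columns
nT = S.shape[1]

# Formula (5): C = (1/nT) * sum_i S_i * S_i^T, written as a single matrix product.
C = (S @ S.T) / nT

# Note: for real images the rc x rc matrix C is huge, so implementations often
# diagonalize the smaller nT x nT surrogate S^T S instead (the classic
# Turk-Pentland trick); the toy dimensions here make the direct form fine.
```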
G. Calculating eigenvector and eigenvalue
To obtain the eigenvectors and eigenvalues, use the eig() function in Matlab, as in formula (6):

[V, D] = eig(C)  (6)

where V is the eigenvector matrix and D the eigenvalue matrix.
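A NumPy counterpart of formula (6); since C is symmetric, eigh is the natural choice (it returns real eigenvalues in ascending order, with eigenvectors as columns):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.random((6, 4)) - 0.5
C = (S @ S.T) / S.shape[1]             # covariance matrix from the previous step

# Counterpart of Matlab's [V, D] = eig(C) for the symmetric matrix C.
eigvals, V = np.linalg.eigh(C)

# Sanity check of the defining property C v = lambda v for the largest eigenpair.
v, lam = V[:, -1], eigvals[-1]
assert np.allclose(C @ v, lam * v)
```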
H. Extracting PCA feature
Kaiser's rule is used here to decide how many principal components (eigenvectors) to keep: if the corresponding eigenvalue in D is greater than 1, the eigenvector in V is selected for creating the eigenfaces (E). The PCA-eigenface features (F) are then obtained by multiplying E with the S matrix, see formula (7):

F = Eᵀ S  (7)
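Kaiser's rule and the projection in formula (7) can be sketched as below. The reading F = Eᵀ S (projecting the mean-subtracted data onto the retained eigenvectors) is an assumption, since the original equation symbols were lost; the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.random((6, 5)) * 3             # stand-in for mean-subtracted image vectors,
                                       # scaled so some eigenvalues exceed 1
C = (S @ S.T) / S.shape[1]

eigvals, V = np.linalg.eigh(C)

# Kaiser's rule: retain only eigenvectors whose eigenvalue is greater than 1.
keep = eigvals > 1
E = V[:, keep]                         # retained eigenvectors form the eigenfaces

# Assumed form of formula (7): project the data onto the eigenfaces.
F = E.T @ S                            # one feature column per training image
```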
I. Labelling PCA feature
Label each trained PCA image feature in order to conduct the training stage.
J. Calculating Similarity using Euclidean Distance
In the last stage, recognition is performed by calculating the similarity distance between the test image and the training images using the Euclidean distance. The Euclidean distance is chosen because it performs well for similarity calculations in digital image processing, such as in recognition and identification processes [1], [11]–[13]. In this research, the Euclidean distance is computed using the norm() function available in Matlab. The training image with the closest distance to the test image is displayed as the recognized image.
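The nearest-neighbor matching step can be sketched as follows; np.linalg.norm stands in for Matlab's norm(), and the feature vectors and subject labels are made up for illustration:

```python
import numpy as np

def recognize(test_feature, train_features, labels):
    """Return the label of the training feature nearest to the test feature.

    train_features: columns are PCA feature vectors of the training images;
    the distance is the Euclidean norm, mirroring Matlab's norm().
    """
    dists = np.linalg.norm(train_features - test_feature[:, None], axis=0)
    return labels[int(np.argmin(dists))]

# Tiny illustration with hypothetical 2-D features for three labeled subjects.
train = np.array([[0.0, 5.0, 9.0],
                  [0.0, 5.0, 9.0]])
labels = ["subject_A", "subject_B", "subject_C"]
print(recognize(np.array([4.6, 5.2]), train, labels))  # subject_B
```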
IV. PERFORMANCE ANALYSIS
In this study, face recognition performance is tested on three face image databases. The first is the grimace face image database owned by the University of Essex, UK, which can be downloaded from https://cswww.essex.ac.uk/mv/allfaces/grimace.html; the second is the Japanese Female Facial Expression (JAFFE) database, which can be downloaded from http://www.kasrl.org/jaffedb_info.html [14]; and the third is the Yale Face Database, which can be downloaded from http://vision.ucsd.edu/content/yale-face-database [15].
In the first experiment, the test was carried out on the grimace face image database. The notes on the database page explain that there are four face image databases, namely face94, face95, face96, and grimace, of which grimace is the most difficult to recognize because of its major variations in facial expression and its many images with head turn, tilt, and slant. The database contains 18 subjects, both men and women, with 20 images per subject. The images have dimensions of 180 * 200 pixels, in portrait format and with a jpg extension. The 20 images per subject are divided into 18 training images and 2 test images. On this database, face recognition with the proposed method achieves a perfect result of 100% accuracy. Fig. 2 shows a sample of the facial images used in this research, while Fig. 3 shows a sample recognition result on one of the test images.
Fig. 2. Example of face image used from ESSEX database
Fig. 3. One of the results of recognition experiments on ESSEX images
The second experiment was carried out on the JAFFE database, which has 10 female subjects with 20 to 23 photos each. Because the proposed method requires fixed training and test data, each subject's image count is equalized to 18 training images and 2 test images. JAFFE images are already grayscale with 8-bit depth, so no conversion is needed as with the ESSEX database. Each image has dimensions of 256 * 256 pixels at 100 dpi, with an uncompressed tiff extension. JAFFE face images have more varied expressions and slight lighting modifications. Because all subjects are women and some subjects resemble each other, the face recognition process is relatively more difficult; see Fig. 4 for details. Of the 20 test images used, 2 face images were recognized incorrectly, so this experiment achieved an accuracy of only 90%. Notably, the two misrecognized images belong to the same individual but were matched to two different individuals; see Fig. 5. Fig. 6 shows an example of a correctly recognized face image in the JAFFE database.
Fig. 4. Example of face image used from the JAFFE database
(a)
(b)
Fig. 5. The wrong face recognized from the JAFFE database
Fig. 6. Correct face recognition sample results in the JAFFE image database
The last test was carried out on face images in the Yale database. This third experiment had the highest level of difficulty of the three, because the Yale images have more variation and fewer samples. The Yale database contains 15 individuals with only 11 poses each, and each pose differs significantly in facial expression, lighting direction, and the use of accessories such as glasses. Glasses make face recognition difficult because they cover part of the face area, especially when they are large and contrast in color. Because there are only 11 images per individual, 10 are used as training images and 1 as the test image for each individual. To make the test more challenging, the chosen test image is the one in which the subject wears glasses. Fig. 7 shows the face images of one individual in the Yale database.

The images in the Yale database use the gif extension, with a depth of 8 bits and dimensions of 320 * 243 pixels (landscape). The test results show 5 recognition errors out of 15 test images; in other words, the success rate is only around 67%. This is caused by the selected test images containing glasses, which evidently influence the face recognition results significantly. If this method were applied to an attendance machine with face detection, everyone would of course try to position themselves and show a normal expression so that they are easily recognized. In the future, however, face recognition will be applied in more challenging settings, for example to detect the faces of criminals who use various accessories as disguise, so the eigenface algorithm needs further development.
Fig. 7. A sample of the Yale face image database. Poses: center light, happy, left light, no glasses, normal, right light, sad, sleepy, surprise, wink, and glasses (used as the test image).
Fig. 8. A sample of the wrong face recognized from the Yale database
Fig. 9. Correct face recognition sample results in the Yale image database
From the experiments on the three databases above, and in line with the findings of previous studies, it can be concluded that the eigenface method is an established recognition algorithm with high accuracy. This method recognized the grimace face image database, the most difficult of the Essex face image databases, perfectly, even though its face images have less lighting, unequal face positions (shifts in position), and varied expressions. Similarly, on the JAFFE image dataset the accuracy results are still very good. On the Yale database, however, the accuracy obtained is not as good. This is caused by the deliberate selection of training images without glasses and test images with glasses (or vice versa), which is the reason for the reduced accuracy of this method.
V. CONCLUSIONS
This research examined the performance of the eigenface algorithm in recognizing facial images, with three tests based on different public face image databases. Based on all the experiments carried out, it can be concluded that the eigenface algorithm is able to recognize face images properly under normal conditions. This is evidenced by the 100% recognition accuracy in the first experiment, even though the images used had minimal lighting and many variations in expression. In the second experiment, the recognition accuracy was still very good, with only two errors among 20 test images; the errors are likely because variations in lighting intensity were compounded by variations in facial expression.

In the third experiment, the eigenface algorithm did not perform well: the results are somewhat unsatisfactory, with only 67% of images recognized correctly. This is likely because the training data varies greatly for each individual, with extreme lighting directions, and because the test data was deliberately chosen to contain face images with glasses. This kind of face recognition is indeed more challenging, and in the future a more sophisticated recognition process may be needed, particularly for improving security.
REFERENCES
[1] R. Kaur and E. Himanshi, “Face Recognition using Principal
Component Analysis,” in IEEE International Advance Computing
Conference, 2015, pp. 585–589.
[2] S. Singh and S. V. A. V. Prasad, “Techniques and Challenges of Face
Recognition: A Critical Review,” in Procedia Computer Science, 2018,
vol. 143, pp. 536–543.
[3] I. M. Revina and W. R. S. Emmanuel, “A Survey on Human Face Expression Recognition Techniques,” J. King Saud Univ. - Comput. Inf. Sci., in press, Sep. 2018.
[4] A. De, A. Saha, and M. C. Pal, “A Human Facial Expression
Recognition Model Based on Eigen Face Approach,” in Procedia
Computer Science, 2015, vol. 45, pp. 282–289.
[5] M. Johnson and A. Savakis, “Fast L1-eigenfaces for robust face
recognition,” in IEEE Western New York Image and Signal Processing
Workshop, 2014, pp. 1–5.
[6] G. George, R. Boben, B. Radhakrishnan, and L. P. Suresh, “Face
recognition on surgically altered faces using principal component
analysis,” in Proceedings of IEEE International Conference on Circuit,
Power and Computing Technologies, ICCPCT 2017, 2017, pp. 7–12.
[7] W. Saputra, H. Wibawa, and N. Bahtiar, “Pengenalan Wajah
menggunakan Algoritma Eigenface Dan Euclidean Distance,” J.
Informatics Technol., vol. 2, no. 1, pp. 102–110, 2013.
[8] V. P. Kshirsagar, M. R. Baviskar, and M. E. Gaikwad, “Face
recognition using Eigenfaces,” in 2011 3rd International Conference
on Computer Research and Development, 2011, pp. 302–306.
[9] M. Agarwal, H. Agrawal, N. Jain, and M. Kumar, “Face recognition
using principle component analysis, eigenface and neural network,” in
2010 International Conference on Signal Acquisition and Processing,
ICSAP 2010, 2010, pp. 310–314.
[10] E. B. Putranto, P. A. Situmorang, and A. S. Girsang, “Face recognition
using eigenface with naive Bayes,” in Proceedings - 11th 2016
International Conference on Knowledge, Information and Creativity
Support Systems, KICSS 2016, 2016.
[11] N. D. A. Partiningsih, R. R. Fratama, C. A. Sari, D. R. I. M. Setiadi,
and E. H. Rachmawanto, “Handwriting Ownership Recognition using
Contrast Enhancement and LBP Feature Extraction based on KNN,” in
2018 5th International Conference on Information Technology,
Computer, and Electrical Engineering (ICITACEE), 2018, pp. 342–
346.
[12] T. Sutojo, D. R. I. M. Setiadi, P. S. Tirajani, C. A. Sari, and E. H.
Rachmawanto, “CBIR for classification of cow types using GLCM and
color features extraction,” in Proceedings - 2017 2nd International
Conferences on Information Technology, Information Systems and
Electrical Engineering, ICITISEE 2017, 2018.
[13] O. R. Indriani, E. J. Kusuma, C. A. Sari, E. H. Rachmawanto, and D. R. I. M. Setiadi, “Tomatoes classification using K-NN based on GLCM and HSV color space,” in 2017 International Conference on Innovative and Creative Information Technology (ICITech), 2017, pp. 1–6.
[14] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial
expressions with Gabor wavelets,” in Proceedings Third IEEE
International Conference on Automatic Face and Gesture Recognition,
1998, pp. 200–205.
[15] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs.
Fisherfaces: recognition using class specific linear projection,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul.
1997.