Performance Analysis of Face Recognition using
Eigenface Approach
Ibnu Utomo Wahyu Mulyono
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
ibnu.utomo.wm@dsn.dinus.ac.id
Eko Hari Rachmawanto
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
eko.hari@dsn.dinus.ac.id
De Rosal Ignatius Moses Setiadi
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
moses@dsn.dinus.ac.id
Amiq Fahmi
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
amiq.fahmi@dsn.dinus.ac.id
Ajib Susanto
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
ajib.susanto@dsn.dinus.ac.id
Muljono
Department of Informatics Engineering
Dian Nuswantoro University
Semarang, Indonesia
muljono@dsn.dinus.ac.id
Abstract— Eigenface is an algorithm within principal component analysis (PCA) that is used to recognize faces. Eigenface reduces dimensionality and finds the best vectors for distributing facial images in the face space. The method has been widely implemented in previous research to recognize human face images: not only faces under normal conditions, but PCA has also been proven to properly recognize images with various expressions, and even to handle harder challenges such as faces after plastic surgery, in combination with facial image reconstruction techniques. This research examines the performance of the PCA-Eigenface method in recognizing human face images from several databases, each with its own challenges, such as poor illumination of the facial images, significant variations in expression, and the use of accessories such as glasses. The recognition accuracy varies from 100% down to 67% across the databases, with an average recognition rate of more than 85%.
Keywords—Eigenface, Euclidean, Face recognition, JAFFE, PCA, Yale
I. INTRODUCTION
Face recognition is an area that is always interesting to study. At present, face recognition is implemented in many ways, such as digital authentication systems, health systems, licensing systems, and access control systems [1], [2]. Research on face recognition has since developed in directions that are more complex and challenging: not only faces but also facial expressions are recognized, which is useful for social communication [3], [4]. Other research combines recognition with the reconstruction of facial images damaged by random occlusion [5]. Face recognition has even been performed on facial images altered by plastic surgery, as in [6].
The first process carried out before recognizing a face is face detection. After the face has been detected, the next process is to recognize it. Facial recognition can be performed with various methods such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), local binary patterns, histograms of oriented gradients (HOG), and others [1], [2]. PCA is one of the most popular methods and is still used today. Eigenface is an algorithm in PCA that is used to recognize faces; it reduces the dimensionality of the face space [7]. Many studies recognize faces using the eigenface approach, as in [5]–[10], where the eigenface algorithm has been proven able to recognize human faces with high accuracy, even in more challenging cases such as varied facial expressions, faces before and after surgery, and in combination with the reconstruction of damaged facial images. In this research, the performance of the eigenface method is therefore tested and analyzed on facial images taken from public databases that have been widely used in previous research.
II. RELATED WORK
Kshirsagar et al. [8] proposed recognizing facial images using the eigenfaces approach. Eigenface was chosen for its algorithmic simplicity, computational speed, and learning ability. The proposed steps are: first, acquire the image dataset; compute the eigenfaces with the highest eigenvalues; compute the corresponding distribution in the dimensions of each individual's face space; then classify the weight pattern to decide whether it is a human face. In this way, a face recognition accuracy of more than 85% was achieved.
Kaur and Himanshi [1] also proposed the PCA method with an eigenface approach. The steps of the proposed method are: read the input images, conduct training, develop the dataset, evaluate the PCA features, evaluate the errors, evaluate the Euclidean distance values, search for the minimum value, and recognize the image based on that minimum value. The reported recognition accuracy is around 99%.
Johnson and Savakis [5] proposed a modified eigenface approach to recognize human faces. The method combines L1-PCA with greedy search techniques. In their research, the method was used to recognize images damaged by random occlusion and subsequently reconstruct them. Across all datasets used, 20% of the data was occluded with noise, and that 20% was then reconstructed based on recognition over the entire dataset.
George et al. [6] also proposed the PCA method for recognizing faces, this time on both original face images and facial images altered by plastic surgery. The algorithm proved effective, recognizing facial images with an accuracy above 80%.
Based on previous research, it is evident that the eigenface algorithm in the PCA method has many advantages in computational speed and accuracy when recognizing facial images; even more challenging facial images are still recognized with relatively very good accuracy. In this research, the eigenface algorithm is therefore tested on face images from several databases with certain levels of difficulty, including images of faces wearing accessories such as glasses.
2019 International Seminar on Application for Technology of Information and Communication (iSemantic)
III. RESEARCH METHODOLOGY
At this stage, the research process to be carried out is proposed: a face recognition process based on the PCA-eigenface method is performed, and the recognition accuracy is then measured. The entire process is implemented in Matlab 2015a. Fig. 1 describes the detection and face recognition process using the PCA-eigenface method.
Fig. 1. PCA-Eigenface method used
A. Input Image
The input images used in this study are derived from three public face image databases that serve as standards in face recognition research. This makes comparison easy in subsequent research.
B. Preprocessing
At this stage, all images that are not of grayscale type are converted to grayscale. One goal is to reduce the computational burden, because the bit depth of a grayscale image, 8 bits, is only one-third that of an RGB color image. In addition, two of the face image databases already use grayscale imagery, so the remaining database needs to be converted to the same image type. RGB is converted to grayscale using formula (1):
Gray = 0.2989 * R + 0.5870 * G + 0.1140 * B    (1)
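The paper implements this step in Matlab; the following Python/NumPy sketch (the function name rgb_to_gray is ours) applies the same luminance weights as formula (1):

```python
import numpy as np

def rgb_to_gray(rgb):
    # Weighted sum of the R, G, B channels per formula (1);
    # the result is cast back to 8-bit like a grayscale image.
    weights = np.array([0.2989, 0.5870, 0.1140])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
```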
C. Creating Image Vector
In this section, every face image, stored as a 2-dimensional matrix, is converted into a 1-dimensional vector. To simplify this process, we use the reshape() function in Matlab, as written in formula (2):
V = reshape(I, r * c, 1)    (2)
where r is the number of rows and c the number of columns of the image I.
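The vectorization in formula (2) can be sketched in NumPy as follows (a toy image stands in for a real face image):

```python
import numpy as np

r, c = 2, 3                            # toy image dimensions
img = np.arange(r * c).reshape(r, c)   # stand-in for a grayscale face image
v = img.reshape(r * c, 1)              # 1-D column vector, as in formula (2)
```

Note that Matlab's reshape is column-major while NumPy's is row-major; the element ordering differs, but as long as the same ordering is used for every image the method is unaffected.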
D. Mean Image
Calculate the mean image m using formula (3) [1]:
m = (1/nT) * Σ_{i=1}^{nT} V_i    (3)
where nT is the number of training image vectors.
E. Subtract Mean
Calculate the S matrix, i.e. the matrix obtained by subtracting the mean image vector from every image vector, using formula (4) [1]:
S_i = V_i − m    (4)
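Steps D and E together can be sketched in NumPy; here a small random matrix stands in for the training set, with each column a flattened face image:

```python
import numpy as np

# Toy training set: 4 image vectors of length 6, stacked as columns.
rng = np.random.default_rng(0)
T = rng.integers(0, 256, size=(6, 4)).astype(np.float64)

m = T.mean(axis=1, keepdims=True)   # mean image vector, formula (3)
S = T - m                           # mean-subtracted matrix, formula (4)
```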
F. Calculating Covariance Matrix
Use formula (5) to calculate the covariance matrix C [1]:
C = (1/nT) * S * S^T    (5)
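A minimal NumPy sketch of formula (5), continuing the toy dimensions from the previous steps:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 4))
S = S - S.mean(axis=1, keepdims=True)   # mean-subtracted image vectors as columns
nT = S.shape[1]
C = (S @ S.T) / nT                      # covariance matrix, formula (5)
```

In practice r*c is very large, so eigenface implementations commonly diagonalize the much smaller nT x nT surrogate matrix S.T @ S instead and map its eigenvectors back to those of C; the paper does not spell this out, so treat it as background rather than part of the described method.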
G. Calculating Eigenvector and Eigenvalue
To obtain the eigenvectors and eigenvalues, use the eig() function in Matlab, as in formula (6):
[V, D] = eig(C)    (6)
where V is the eigenvector matrix and D the eigenvalue matrix.
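The NumPy equivalent of Matlab's eig for a symmetric covariance matrix is numpy.linalg.eigh, which returns the eigenvalues as an ascending vector rather than a diagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
C = A @ A.T                  # symmetric PSD matrix standing in for the covariance
D, V = np.linalg.eigh(C)     # eigenvalues D (ascending) and eigenvectors V
```

Each column v of V and entry d of D satisfies the defining relation C v = d v, matching formula (6).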
H. Extracting PCA Feature
Kaiser's rule is used here to decide how many principal components (eigenvectors) to keep: if the corresponding eigenvalue in D is greater than 1, the eigenvector in V is chosen to form the eigenface matrix E. The PCA-eigenface feature F is then obtained by multiplying with the S matrix, see formula (7):
F = E^T * S    (7)
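Kaiser's rule and the projection of formula (7) can be sketched as follows (the interpretation of the multiplication as E^T * S is our reading of the text):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((6, 10))
S = S - S.mean(axis=1, keepdims=True)   # mean-subtracted image vectors
C = (S @ S.T) / S.shape[1]
D, V = np.linalg.eigh(C)

E = V[:, D > 1.0]   # Kaiser's rule: keep eigenvectors whose eigenvalue exceeds 1
F = E.T @ S         # PCA-eigenface features of the training set, formula (7)
```

Each column of F is the low-dimensional feature vector of one training image; these are the vectors that are labeled and later compared against the test image.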
I. Labelling PCA Feature
Label every trained PCA image feature in order to conduct the training stage.
J. Calculating Similarity using Euclidean Distance
In the last stage, recognition, the similarity distance between the test image and the training images is calculated using the Euclidean distance. The Euclidean distance is chosen because it has proven to be among the best measures for similarity calculation in digital image processing, such as in recognition and identification processes [1], [11]–[13]. In this research, the Euclidean distance calculation uses the norm() function available in Matlab. The image with the closest distance to the test image is displayed as the recognized image.
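The nearest-neighbor decision described above can be sketched in NumPy as follows (the function name recognize and the data layout, features as columns, are ours):

```python
import numpy as np

def recognize(test_feature, train_features, labels):
    """Return the label of the training image whose PCA feature
    vector is nearest to the test feature (Euclidean distance)."""
    # One distance per training column; equivalent to Matlab's norm()
    # applied to each difference vector.
    dists = np.linalg.norm(train_features - test_feature[:, None], axis=0)
    return labels[int(np.argmin(dists))]
```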
IV. PERFORMANCE ANALYSIS
In this study, performance in recognizing human face images is tested on three face image databases. The first is the grimace face image database owned by the University of Essex, UK, which can be downloaded from https://cswww.essex.ac.uk/mv/allfaces/grimace.html; the second is the Japanese Female Facial Expression (JAFFE) database, which can be downloaded from http://www.kasrl.org/jaffedb_info.html [14]; and the third is the Yale Face Database, which can be downloaded from http://vision.ucsd.edu/content/yale-face-database [15].
In the first experiment, the test was carried out on the grimace facial image database. The notes on the page explain that there are four face image databases, namely face94, face95, face96, and grimace, of which the grimace images are the most difficult to recognize: they have major variations in facial expression and the most variety in head turn, tilt, and slant. The database contains 18 subjects, both men and women, with 20 images per subject at dimensions of 180 x 200 pixels, in portrait format and with a jpg extension. The 20 images per subject are divided into 18 training images and 2 test images. On this database, face recognition on the test images using the proposed method achieves a perfect result of 100% accuracy. Fig. 2 shows a sample of the facial images used in this research, while Fig. 3 shows a recognition result on one of the test images.
Fig. 2. Example of face image used from ESSEX database
Fig. 3. One of the results of recognition experiments on ESSEX images
The second experiment was carried out on the JAFFE database, which has 10 female subjects with 20 to 23 photos each. Because the proposed method requires training and test data, the number of images per subject is equalized to 18 training images and 2 test images. JAFFE images are already grayscale with 8-bit depth, so no conversion is needed as with the ESSEX database. Each image is 256 x 256 pixels at 100 dpi, with a tiff extension and uncompressed. JAFFE face images have more varied expressions and slight lighting variations; because all subjects are women and some subjects resemble each other, the recognition process is relatively more difficult. For more details, see Fig. 4. Of the 20 test images used, there are 2 recognition errors, so this experiment obtains an accuracy of only 90%. Note that both errors involve images of the same individual, wrongly recognized as two different individuals; see Fig. 5. Fig. 6 shows an example of a correctly recognized face image from the JAFFE database.
Fig. 4. Example of face image used from the JAFFE database
(a)
(b)
Fig. 5. Wrongly recognized faces from the JAFFE database
Fig. 6. Correct face recognition sample results in the JAFFE image database
The last test was carried out on the face images of the Yale database. This third experiment has the highest level of difficulty of the three, because the Yale images vary more and the dataset is smaller. The Yale database contains 15 individuals, each with only 11 poses. Each pose differs significantly in facial expression, lighting direction, and the use of accessories such as glasses. Glasses make face recognition difficult because they cover part of the face area, especially when the glasses are large and of contrasting color. Because there are only 11 images per individual, 10 images are used for training and 1 image for testing for each individual. To make the test more challenging, the chosen test image is the face image wearing glasses. Fig. 7 shows the face images of one individual in the Yale database.
The Yale images use the gif extension, with a depth of 8 bits and dimensions of 320 x 243 pixels (landscape). The test results show five recognition errors out of a total of 15 test images; in other words, the success rate is only around 67%. This is due to the selected test images wearing glasses: glasses are proven to influence face recognition results significantly. If this method were applied to an attendance machine with face detection, everyone would of course try to position themselves and show a normal expression so as to be easily detected. In the future, however, face detection will be applied in more challenging settings, for example to detect the faces of criminals who use various accessories as disguises, so the eigenface algorithm needs further development.
Fig. 7. A sample of the Yale face image database, showing the 11 poses of one individual: center light, happy, left light, no glasses, normal, right light, sad, sleepy, surprise, wink, and glasses (used as the test image)
Fig. 8. A sample of the wrong face recognized from the Yale database
Fig. 9. Correct face recognition sample results in the Yale image database
From the experiments on the three databases above, and in line with the hypotheses of previous studies, it can be concluded that the eigenface method is an established recognition algorithm with high accuracy. It recognizes the grimace face image database, the most difficult of the Essex face image databases, perfectly, even though its images have less lighting, unequal face positions (there are shifts in position), and varied expressions. Similarly, on the JAFFE image dataset the accuracy is still very good. On the Yale database, however, the obtained accuracy is not very good. This is caused by the deliberate selection of training and test data: face images without glasses are used for training while images with glasses are used for testing, or vice versa. This is the reason for the reduced accuracy of the method.
V. CONCLUSIONS
This research examines the performance of the eigenface algorithm in recognizing facial images. Three tests were performed, each on a different public facial image database. Based on all the experiments, it can be concluded that the eigenface algorithm recognizes face images properly under normal conditions. This is evidenced by the recognition accuracy of 100% in the first experiment, even though the images used have minimal lighting and many variations in expression. In the second experiment, the recognition accuracy is still very good, with only two errors out of 20 test images; this is likely because variations in lighting intensity are compounded by variations in facial expression. In the third experiment, the eigenface algorithm did not perform well: the results are somewhat unsatisfactory, with only 67% of images recognized correctly. This is likely because the training data varies greatly for each individual, with extreme lighting directions, and because the test data was deliberately chosen to be facial images with glasses. This kind of face recognition is indeed more challenging, and in the future a more sophisticated recognition process may be needed, particularly for improving security.
REFERENCES
[1] R. Kaur and E. Himanshi, “Face Recognition using Principal
Component Analysis,” in IEEE International Advance Computing
Conference, 2015, pp. 585–589.
[2] S. Singh and S. V. A. V. Prasad, “Techniques and Challenges of Face
Recognition: A Critical Review,” in Procedia Computer Science, 2018,
vol. 143, pp. 536–543.
[3] I. M. Revina and W. R. S. Emmanuel, “A Survey on Human Face
Expression Recognition Techniques,” J. King Saud Univ. - Comput.
Inf. Sci., vol. in Press, Sep. 2018.
[4] A. De, A. Saha, and M. C. Pal, “A Human Facial Expression
Recognition Model Based on Eigen Face Approach,” in Procedia
Computer Science, 2015, vol. 45, pp. 282–289.
[5] M. Johnson and A. Savakis, “Fast L1-eigenfaces for robust face
recognition,” in IEEE Western New York Image and Signal Processing
Workshop, 2014, pp. 1–5.
[6] G. George, R. Boben, B. Radhakrishnan, and L. P. Suresh, “Face
recognition on surgically altered faces using principal component
analysis,” in Proceedings of IEEE International Conference on Circuit,
Power and Computing Technologies, ICCPCT 2017, 2017, pp. 7–12.
[7] W. Saputra, H. Wibawa, and N. Bahtiar, “Pengenalan Wajah
menggunakan Algoritma Eigenface Dan Euclidean Distance,” J.
Informatics Technol., vol. 2, no. 1, pp. 102–110, 2013.
[8] V. P. Kshirsagar, M. R. Baviskar, and M. E. Gaikwad, “Face
recognition using Eigenfaces,” in 2011 3rd International Conference
on Computer Research and Development, 2011, pp. 302–306.
[9] M. Agarwal, H. Agrawal, N. Jain, and M. Kumar, “Face recognition
using principle component analysis, eigenface and neural network,” in
2010 International Conference on Signal Acquisition and Processing,
ICSAP 2010, 2010, pp. 310–314.
[10] E. B. Putranto, P. A. Situmorang, and A. S. Girsang, “Face recognition
using eigenface with naive Bayes,” in Proceedings - 11th 2016
International Conference on Knowledge, Information and Creativity
Support Systems, KICSS 2016, 2016.
[11] N. D. A. Partiningsih, R. R. Fratama, C. A. Sari, D. R. I. M. Setiadi,
and E. H. Rachmawanto, “Handwriting Ownership Recognition using
Contrast Enhancement and LBP Feature Extraction based on KNN,” in
2018 5th International Conference on Information Technology,
Computer, and Electrical Engineering (ICITACEE), 2018, pp. 342–
346.
[12] T. Sutojo, D. R. I. M. Setiadi, P. S. Tirajani, C. A. Sari, and E. H.
Rachmawanto, “CBIR for classification of cow types using GLCM and
color features extraction,” in Proceedings - 2017 2nd International
Conferences on Information Technology, Information Systems and
Electrical Engineering, ICITISEE 2017, 2018.
[13] O. R. Indriani, E. J. Kusuma, C. A. Sari, E. H. Rachmawanto, and D. R. I. M. Setiadi, “Tomatoes classification using K-NN based on GLCM and HSV color space,” in 2017 International Conference on Innovative and Creative Information Technology (ICITech), 2017, pp. 1–6.
[14] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial
expressions with Gabor wavelets,” in Proceedings Third IEEE
International Conference on Automatic Face and Gesture Recognition,
1998, pp. 200–205.
[15] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs.
Fisherfaces: recognition using class specific linear projection,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul.
1997.