Scientific Bulletin of the „Petru Maior” University of Tîrgu Mureş
Vol. 11 (XXVIII) no. 2, 2014
ISSN-L 1841-9267 (Print), ISSN 2285-438X (Online), ISSN 2286-3184 (CD-ROM)
PERFORMANCE ANALYSIS OF EIGENFACE
RECOGNITION UNDER VARYING EXTERNAL
CONDITIONS
Szidónia LEFKOVITS1, László LEFKOVITS2
1Universitatea “Petru Maior”
Nicolae Iorga 1, Tîrgu-Mureş, Romania
1szidonia.lefkovits@science.upm.ro
2Universitatea Sapientia
Şos. Sighişoarei 1C, Tîrgu-Mureş/Corunca, Romania
2lefkolaci@ms.sapientia.ro
Abstract
In the field of image processing and computer vision, face recognition is one of the most studied research domains. It has a large variety of applications in different areas, such as security and surveillance systems, identification and authentication.
In this paper we analyze a face recognition system based on the eigenface method [22] under different conditions. The eigenface method is a statistical dimensionality reduction method, which obtains an adequate face space from a given training database. The idea of observing the performance, i.e. the recognition rate, in different situations (such as the presence or absence of important facial features like glasses or a beard) came from the diploma work [20]. The experiments described in this article study the recognition performance of the algorithm while varying the number of feature vectors considered. Besides this, we studied the behavior of such a system when the analyzed individual wears glasses or a beard. Finally, we carried out experiments on noisy images by adding common types of noise, such as salt & pepper, Gaussian or Poisson noise, to every test image.
Key words: face recognition, Eigenfaces, Principal Component Analysis, performance evaluation
1. Introduction
Object recognition is an important research domain because it can be applied in a wide variety of systems. Human face recognition in particular is the most widespread research area, owing to the fact that the face is usually at hand: it can be recorded easily by different types of visible or hidden cameras, without the subject even noticing that he/she is being recorded. Other, more reliable methods used in biometric identification, for example fingerprint recognition or iris recognition, need special devices such as high-resolution cameras.
On the other hand, face recognition is the identification method by which people most commonly recognize each other.
Automated facial recognition is a biometric application that works from a single image or a video sequence, comparing the detected face to a large database of faces of already known individuals.
Several application areas of human face recognition are known, such as biometric identification, access control, video surveillance, passport control, security systems, bank verification at ATMs, image categorization in films and videos, and identification of thieves in outdoor camera footage.
Face recognition systems can be divided into two classes: global aspect based methods, also called appearance based methods, and local aspect based methods, also called part-based or feature based methods.
This article concentrates only on the eigenface method, the most common face recognition approach, which belongs to the global aspect based category. Besides implementing the eigenface method, developed by Turk and Pentland [22], it studies the influence of several external conditions on the recognition performance, such as noise, blurriness and illumination, as well as changes of facial features: wearing glasses or the presence of facial hair.
The paper is organized as follows: the first section gives a short introduction, followed by a summary of the most important related work in the domain. The third section presents the eigenface method, and finally some comparative experimental results and detection performances are presented.
2. Related Work
Many approaches have been developed for face recognition. In this section the most widely used such systems are summed up.
Several types of Artificial Neural Networks have been used for face recognition: single layer adaptive networks, multilayer perceptrons, convolutional networks and probabilistic neural networks for handling partial occlusion or distortion [7].
Elastic Bunch Graph Matching is also used for face recognition. Here a dynamic graph is constructed where the vertices are the features and the edges are the distances between given features [14].
Face representation in 3D is one of the geometrical representation techniques developed as well; such approaches can be based on Hidden Markov Models [2].
Despite the large variety of face recognition systems, the most common approach, with extremely good results, is still the eigenface method. This method grew out of the simplest global aspect based approach, which takes into account the intensity of the pixels. There, the two-dimensional unknown face image is compared to all the other faces from the training database. Comparing faces pixel by pixel works only in limited conditions, under given circumstances. Its major bottleneck is the comparison and classification in a very high-dimensional space. Thus arises the need for dimensionality reduction. One of the most common dimensionality reduction methods is the extraction of principal components.
Kirby and Sirovich [8] exploit the PCA (Principal
Component Analysis) method for face recognition by
using the Karhunen-Loève conditions, in order to
define the geometry of faces. Turk and Pentland [22]
developed a recognition system which tracks the
subjects head and recognizes it by comparing the
characteristics of it to those of known individuals.
The system projects face images onto a feature space
named “face space” that spans the significant features
from known images. The significant projections are
called eigenfaces, because they are eigenvectors of
the face space. Su et al. [21] combines PCA and
Linear discriminant Analysis (LDA) for extracting
multi features and makes the final decision with
radial basis function network.
Recently, different types of PCA-based algorithms
have been developed by researchers of this domain:
weighted modular PCA [9], Kernel PCA [12],
diagonal PCA [23], adaptive PCA [3] and two-dimensional PCA [11].
Instead of PCA, Bartlett et al. [1] uses
Independent Component Analysis, because this
method is a generalization of PCA. Liu et al. [10]
combined Gabor wavelet transform with Fisher linear
discriminant and kernel PCA or Gabor features with
fractional polynomial models or Gabor features ICA
and PCA.
Nicholl et al. [13] created a face recognition
system which automatically selects the coefficient for
DWT and PCA.
Poon et al. [15, 16] analyze the performance of
PCA recognition for different datasets, varying the
image size, alignment, training set, blurriness,
illumination condition and noise. Shermina [19]
presents a method based on multi linear principal
component analysis.
Rady [17] compares different distance classifiers
with the same eigenface method. Dabhade [4]
combines Haar Detection, Gabor feature extraction
and Eigenface recognition in the achieved system.
For more details about other techniques and recent advances in face recognition, consult the survey articles [6, 18].
3. The Eigenface method
The idea of retrieving relevant features from a set of training images can be realized by the extraction of principal components. These features are not necessarily evident, perceptible features such as facial parts, but they characterize the common part of a given set.
If we consider every gray-scale pixel of an n×m image as one coordinate, a feature space of that dimension is obtained. Nowadays the number of pixels in an image is several hundred thousand, even millions. Such dimensions can hardly be handled by any classification algorithm; hence the necessity of dimensionality reduction.
Principal Component Analysis is a statistical method for dimensionality reduction that minimizes the mean square reconstruction error [16, 5]. PCA is able to extract only the relevant information of a given space and to transform every element of it into a considerably lower-dimensional space.
Let us consider L images, each of them having the same dimension n×m:

S_training = { I_1, I_2, ..., I_L }    (1)

Each image is vectorized, so we obtain an N-dimensional column vector, where N = n·m.
PCA is a linear algebra concept based on the eigenvectors and eigenvalues of a matrix. An eigenvector is a vector that is only scaled by a linear transformation: when the matrix acts on it, the direction of the vector does not change, only its magnitude:

A v = λ v    (2)

In equation (2), A is the analyzed matrix, v the eigenvector, and the scalar λ is the eigenvalue corresponding to the eigenvector v. Equation (2) can be rewritten as the characteristic equation of the matrix A:

(A − λI) v = 0    (3)

A nontrivial solution of the characteristic equation exists if and only if

det(A − λI) = 0    (4)
The characteristic equation is an N×N homogeneous linear system, with N equations and N unknowns. The solutions of the characteristic polynomial (4), which has degree N, are the eigenvalues of the matrix A. Because the characteristic polynomial has degree N, we obtain N roots, not necessarily distinct. If all the eigenvalues are distinct, the corresponding eigenvectors are linearly independent and form an N-dimensional basis.
The matrix for which the eigenvalues are computed is the covariance matrix of the input space denoted by S_training in equation (1).
The covariance of two random variables, e.g. two images, is

cov(I_i, I_j) = E[ (I_i − μ_i)(I_j − μ_j) ]    (5)

where E is the expected value. The covariance matrix of the whole set of images S_training can be rewritten in matrix form as

Σ = cov(I) = E[ (I − E[I]) (I − E[I])^T ]    (6)

where I is the training set of images and E(I) is the mean image. The mean image is a column vector of N pixels as well:

E(I) = (1/L) · sum_{i=1..L} I_i    (7)
If the eigenvalues and eigenvectors of the covariance matrix Σ are computed, the "eigenfaces" are obtained. This name comes from the parents of the eigenface method, Turk and Pentland [22]; they call them so because the obtained eigenvectors are similar to faces in appearance.
Denoting by Φ_i = I_i − E(I) the difference of each image from the mean image, the covariance matrix is an N×N matrix which can be simply formulated as

Σ = sum_{i=1..L} Φ_i Φ_i^T = A A^T    (8)

(the 1/L normalization factor is omitted here, since it only rescales the eigenvalues), where A is the N×L matrix formed column-wise of the differences between each image and the mean image; its i-th column is the i-th difference:

A = [ Φ_1 Φ_2 ... Φ_L ]    (9)
The problem is the dimension of the covariance matrix Σ, which is N×N. Fortunately, we can obtain the same nonzero eigenvalues by computing them for another matrix,

Σ' = A^T A    (10)

The dimension of the matrix Σ' is L×L, where L is the number of images. The order of magnitude of N is hundreds of thousands to a million, while the order of L is only a thousand (L << N). Thus, instead of computing the eigenvalues of a huge matrix (8), we compute the eigenvalues of the much smaller matrix (10).
Consider the eigenvectors v_i of Σ' such that

Σ' v_i = μ_i v_i    (11)

that is,

A^T A v_i = μ_i v_i    (12)

Left-multiplying both sides by A we obtain

A A^T A v_i = μ_i A v_i    (13)

Since we denoted Σ = A A^T, equation (13) becomes

Σ (A v_i) = μ_i (A v_i)    (14)

Let us denote

u_i = A v_i = sum_{l=1..L} v_{il} Φ_l    (15)

Then we obtain μ_i and u_i as the eigenvalues and eigenvectors of the covariance matrix Σ:

Σ u_i = μ_i u_i    (16)

Thus, the eigenvalues of Σ' are also eigenvalues of Σ, and if we compute the eigenvectors v_i of Σ' corresponding to those eigenvalues, we obtain the eigenvectors u_i of Σ using equation (15).
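The whole trick of equations (10)-(16) can be sketched in a few lines of NumPy; the dimensions and variable names below are illustrative placeholders on synthetic data, not the authors' implementation:

```python
import numpy as np

# Small-matrix eigenface trick: N pixels per image, L images, L << N.
N, L = 1000, 8
rng = np.random.default_rng(42)
S = rng.random((N, L))                 # vectorized training images as columns

mu = S.mean(axis=1, keepdims=True)     # mean image, eq. (7)
A = S - mu                             # difference matrix A, eq. (9), N x L

# Eigen-decomposition of the small L x L matrix Sigma' = A^T A, eq. (10).
Sigma_p = A.T @ A
eigvals, V = np.linalg.eigh(Sigma_p)   # eigenvalues in ascending order

# Map each v_i to an eigenvector u_i = A v_i of Sigma = A A^T, eq. (15),
# and normalize the resulting eigenfaces.
U = A @ V
U /= np.linalg.norm(U, axis=0)

# Check eq. (16): the u_i are eigenvectors of A A^T with the same eigenvalues.
Sigma = A @ A.T
i = -1                                 # index of the largest eigenvalue
residual = Sigma @ U[:, i] - eigvals[i] * U[:, i]
print(np.allclose(residual, 0))        # True
```

Computing the L×L decomposition is vastly cheaper than decomposing the N×N matrix directly, which is the whole point of equation (10).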
After computing all the L eigenvalues, we have to select the P most representative ones; this value of P is typically less than a hundred. From the largest P eigenvalues the corresponding P eigenvectors are computed, and a P-dimensional span is obtained, which is called by the authors of [22] the "face space".
The training step is the computation of the face space, based on a set of input images (see eq. (1)).
In the test phase the input image has to be projected into the face space. Each of the selected P eigenvectors will have a corresponding weight. These weights are, in fact, the eigenface components of the input image, and are computed by a simple dot product with the difference image Φ_in = I_in − E(I):

w_p = u_p^T Φ_in    (17)

Putting these weights together we form the weight vector

W = [ w_1, w_2, ..., w_P ]    (18)
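The projection of equations (17)-(18) amounts to a single matrix product; here is a minimal sketch in which a random orthonormal basis stands in for the eigenfaces and all names are our own placeholders:

```python
import numpy as np

# Project a test image into the face space (eqs. (17)-(18)).
N, P = 1000, 4
rng = np.random.default_rng(7)
U, _ = np.linalg.qr(rng.random((N, P)))   # stand-in orthonormal eigenfaces
mu = rng.random((N, 1))                   # stand-in mean image
I_in = rng.random((N, 1))                 # test image as a column vector

phi_in = I_in - mu                        # difference from the mean image
W = U.T @ phi_in                          # weight vector: w_p = u_p^T phi_in
print(W.shape)                            # (4, 1)
```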
If the eigenvectors form a basis, the input image can be reconstructed using the following linear combination:

I_rec = sum_{p=1..P} w_p u_p    (19)

In order to determine which person the input image resembles best, an error has to be computed. This error is the mean square error of two weight vectors [22]: W_in is the weight vector of the input image and W_pers is the average weight vector of several images of a given person.

MSE = || W_in − W_pers ||^2    (20)

The minimum of this mean square error determines the most similar person. If the error is greater than a threshold, we can say that the input image shows an unknown person.
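The decision rule of equation (20) is a nearest-centroid search over the per-person average weight vectors; the following sketch uses entirely synthetic weights, and the threshold value is an arbitrary illustration:

```python
import numpy as np

# Nearest-class decision (eq. (20)) on synthetic weight vectors.
rng = np.random.default_rng(3)
P, n_persons = 4, 15
W_pers = rng.random((n_persons, P))       # average weight vector per person
W_in = W_pers[9] + 0.01 * rng.standard_normal(P)  # noisy copy of person 9

mse = np.sum((W_pers - W_in) ** 2, axis=1)  # squared distance to each class
best = int(np.argmin(mse))
threshold = 1.0                             # illustrative rejection threshold
label = best if mse[best] < threshold else None   # None = unknown person
print(best)                                 # 9
```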
In order to find out whether the input image is a face image at all, another distance has to be computed: the distance between the reconstructed image and the difference between the input image and the mean image, Φ_in = I_in − E(I):

MSE_face = || Φ_in − I_rec ||^2    (21)

If this value is less than a threshold, the input image is a face; otherwise it is an unknown object, whose projection into the face space is meaningless.
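The face/non-face test of equation (21) can be illustrated as follows; the eigenfaces are again a random orthonormal stand-in, and the point is only that a vector lying in the face space reconstructs with near-zero error while a generic vector does not:

```python
import numpy as np

# Face vs. non-face test (eq. (21)): distance to the face-space projection.
N, P = 500, 5
rng = np.random.default_rng(11)
U, _ = np.linalg.qr(rng.random((N, P)))   # stand-in orthonormal eigenfaces

def face_distance(phi):
    """Squared distance between phi and its face-space reconstruction."""
    phi_rec = U @ (U.T @ phi)             # reconstruction, eq. (19)
    return float(np.sum((phi - phi_rec) ** 2))

phi_face = U @ rng.random(P)              # lies exactly in the face space
phi_other = rng.random(N)                 # generic vector, mostly outside it

print(face_distance(phi_face) < face_distance(phi_other))  # True
```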
4. Results and Experiments
In our experiments we used the Yale database [24], developed specifically for face recognition and facial expression recognition. This database contains 165 grayscale images of 15 individuals, with 11 different photos of each individual. These 11 poses correspond to facial expressions (normal, happy, sad, sleepy, surprised and wink) and to lighting configurations (center-light, left-light, right-light); besides, each subject appears both with and without glasses. In addition, we generated another condition by adding a beard to each subject.
We have to underline that these images are not aligned and were taken under different illumination conditions. In order to form an acceptable training set we normalized the images (see fig. 1: original image and normalized image). As we can observe, the normalization produces uniform illumination conditions. This procedure somewhat degrades the uniformly lighted "normal" image, but it repairs the left-lighted and right-lighted images to some extent. Further on we will see that different illumination conditions have a great impact on recognition with the eigenface method.
Fig. 1: a) Yale database sample image [24]; b) Normalized sample image; c) Left light; d) Normalized left light
The second step after normalization is obtaining the mean image of the training data set. Because the images are not properly aligned, the mean image shows only a blurred head shape formed of several contours. With regard to covariance and resemblance, this image is the common part of every image, which is why it has to be subtracted from each image in the training set (see fig. 2 a) and b)).
Fig. 2: a) Mean image, denoted by µ; b) Difference image
After obtaining the difference images comes the computation of the covariance matrix, from which we obtain the eigenvalues and eigenvectors.
Fig. 3: First four eigenfaces
In our experiments we studied the influence of the number of eigenvectors used. As shown in fig. 3, the eigenvectors corresponding to the most representative (first four) eigenvalues capture the strong contours of the head; the first eigenvector is the shape of an average head.
Fig. 4: The 47th, 48th, 49th and 50th eigenfaces
As we compute more and more eigenvectors, corresponding to smaller eigenvalues, we can observe that they represent the fine contours. Overall, we have to use several eigenvectors corresponding to strong contours, which give the basis of the shape, but at the same time we have to consider some fine contours as well.
We noticed that as the number of eigenvectors is increased, the recognition accuracy improves, up to a certain point, where it saturates.
In Table 1 we measured the recognition rate using 25, 50, 75, 100 and 112 eigenvectors. The best recognition rate is 99.39% for the considered training data set. Table 1 shows that beyond 100 eigenvectors, increasing the number of significant vectors taken into account is useless.
Table 1: Recognition rate vs. number of most representative eigenvalues

    Number of eigenface features    Recognition rate
    25                              92.72%
    50                              96.36%
    75                              98.18%
    100                             99.39%
    112                             99.39%
In the testing phase each image is compared to the existing classes in the training set; each class corresponds to a certain individual. For each image in the training set we compute its weight vector, the dot product between the difference image and the eigenvectors (equation (17)). After computing the weight vectors for the training set, we average them for each individual. This average weight vector is compared to the weight vector obtained for the tested image (equation (20)). The similarity measure in this case is the mean square error between these two vectors, measured by the Euclidean distance.
The more eigenvectors we consider, the higher the dimensionality of the obtained weight vector (equation (18)). This means that we have more components, so the vectors contain more information and the error is more precise. This statement can also be verified visually by computing the reconstructed image: comparing the images reconstructed from 10, 50, 100 and 112 eigenvectors, we can confirm that the more eigenvectors are used, the more accurate the reconstruction is.
Fig. 5: Reconstructed image 10, 50, 100, 112 eigenvectors
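The observation that more eigenvectors yield a more accurate reconstruction can be reproduced on synthetic data; in the sketch below a random orthonormal basis stands in for the eigenfaces, and the reconstruction error of equation (19) is tracked as P grows:

```python
import numpy as np

# Reconstruction error vs. number of basis vectors P (cf. fig. 5).
N = 200
rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((N, N)))  # full orthonormal basis
phi = rng.standard_normal(N)                      # stand-in difference image

errors = []
for P in (10, 50, 100, 200):
    phi_rec = U[:, :P] @ (U[:, :P].T @ phi)       # eq. (19) with P terms
    errors.append(float(np.sum((phi - phi_rec) ** 2)))

print(errors[0] > errors[1] > errors[2] > errors[3])  # True
```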
Our second experiment studied the recognition rate for persons wearing glasses or a beard, where the training set does not contain images with these disturbing factors. As we observed from the measurements, the recognition rate for people with glasses is 86.66% and for those with a beard 92.85% (Table 2).
Table 2: Effect of different facial features

    Facial feature    Recognition rate
    Glasses           86.66%
    Beard             92.85%
Our final experiment observed the effect of noise on the recognition process. We tested three different types of noise, salt & pepper, Gaussian and Poisson, as well as further illumination changes.
From our experiments we draw the following conclusions: illumination changes have a great impact on the recognition rate, mainly if the illumination is not uniform and the light sources come from different directions. In the Yale database we compared the center-light, left-light and right-light poses (figure 1c).
The Gaussian noise is white noise with mean 0 and variance 1 (figure 6a). The salt & pepper noise consists of white and black pixels with a density of 5% of the total number of pixels (figure 6b). The Poisson noise is computed separately for each pixel, with the Poisson mean equal to the value of the pixel (figure 6c). The recognition rates for these types of images are presented in Table 3.
Table 3: Effect of noise

    Noise type       Recognition rate
    Salt & pepper    98.36%
    Gaussian         98.21%
    Poisson          98.76%

Fig. 6: a) Gaussian; b) Salt & pepper; c) Poisson
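The three noise types can be generated as follows; this is a NumPy sketch on a synthetic grayscale image, and the density and variance values are illustrative choices rather than the paper's exact parameters:

```python
import numpy as np

# Generate the three noise types on a synthetic image in [0, 255].
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)

# Salt & pepper: flip ~5% of the pixels to 0 or 255.
sp = img.copy()
mask = rng.random(img.shape) < 0.05
sp[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

# Additive zero-mean Gaussian noise, clipped back to the valid range.
gauss = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)

# Poisson noise: each pixel drawn with mean equal to its own value.
poisson = np.clip(rng.poisson(img).astype(float), 0, 255)

print(sp.shape, gauss.shape, poisson.shape)
```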
5. Conclusion and future work
Eigenface recognition is one of the most widely used face recognition methods, but it requires certain conditions, such as geometric alignment of the faces and uniform illumination. In this article we analyzed these types of external conditions, putting the accent on varying characteristic facial features such as glasses or a beard. We observed that increasing the number of descriptive features considered increases the recognition rate, while increasing image noise slightly decreases the recognition accuracy. Occlusion and changed facial features have little effect on recognition, while illumination or light sources from different directions have a great effect on the recognition performance. Overall, these observations can be helpful in designing future recognition systems.
Acknowledgement
Financial support of the work of Szidónia
Lefkovits was provided from programs co-financed
by The Sectorial Operational Program of Human
Resources Development, Investing in people! Key
Area of Intervention 1.5 “Doctoral and Postdoctoral
scholarship in support of research”, Project title:
“Integrated system for quality improvement of the
doctoral and post-doctoral research in Romania and
promoting the role of science in society”, Contract:
POSDRU/159/1.5/S/133652.
The work of László Lefkovits in this paper is supported by The Sectorial Operational Programme Human Resources Development POSDRU/159/1.5/S/137516, financed by the European Social Fund and by the Romanian Government.
References
[1] Bartlett, M.S, Movellan, J.R. and Sejnowski T.J.
(2002) Face recognition by independent
component analysis. IEEE Transactions on Neural
Networks, vol. 13, no. 6, pp. 1450-1464.
[2] Castellani, U., M. Cristani, X. Lu, V. Murino
and Jain A.K. (2008), HMM-based geometric
signatures for compact 3D face representation
and matching. IEEE Computer Society
Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), pp. 1-6.
[3] Chen S, Shan T, Lovell BC (2007) Robust face
recognition in rotated eigenspaces. Proceedings
of Image and Vision Computing, New Zealand,
pp. 1-6.
[4] Dabhade, S. A., and Bewoor, M. S. (2012)
Rapid Face Recognition Technology using
Eigenfaces and Gabor Filter. International
Journal of Science and Applied Information
Technology, 1(4).
[5] Hiremath,V. and Mayakar, A. (2009), Face
recognition using Eigenface approach. IDT
workshop on interesting results in computer
science and engineering, Sweden.
[6] Jafri, R. and Arabnia, H. R. (2009) A Survey of
Face Recognition Techniques. Journal of
Information Processing Systems (JIPS), vol.5,
no. 2, pp. 41-68.
[7] Jie, L., M. Ji and D. Crookes (2008), A
probabilistic union approach to robust face
recognition with partial distortion and occlusion.
IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), pp.
993-996.
[8] Kumar, A. P., Das, S. and Kamakoti V. (2004),
Face recognition using weighted modular
principle component analysis, in Neural
Information Processing, vol.3316, Lecture Notes
In Computer Science: Springer
Berlin/Heidelberg, 2004, pp. 362-367.
[9] Kirby, M. and Sirovich L. (1990), Application of
the Karhunen-Loève procedure for the
characterisation of human faces. IEEE
Transaction on Pattern Analysis, vol. 12, pp.
831-835.
[10] Liu, C. (2004) Gabor-based kernel pca with
fractional power polynomial models for face
recognition. IEEE Transaction on Pattern
Analysis and Machine Intelligence, vol. 26, pp.
572-581.
[11] Meng, J. and Zhang, W. (2007), Volume
measure in 2DPCA-based face recognition,
Pattern Recognition Letters, vol. 28, pp. 1203-1208.
[12] Nhat, V. D. M. and Lee, S. (2005) An
Improvement on PCA Algorithm for Face
Recognition, Advances in Neural Networks -
ISNN 2005, Vol.3498, Lecture Notes in
Computer Science. Chongqing: Springer, 2005,
pp.1016-1021.
[13] Nicholl, P. and Amira, A. (2008), DWT/PCA
face recognition using automatic coefficient
selection. 4th IEEE International Symposium on
Electronic Design, Test and Applications,
DELTA 2008, 23-25 Jan., Belfast, pp. 390-393.
[14] Pervaiz, A.Z. (2010), Real time face recognition
system based on EBGM framework. Computer
Modelling and Simulation (UKSim), pp. 262-266.
[15] Poon, B., Amin, M. A. and Yan, H. (2009). PCA
based face recognition and testing criteria.
IEEE International Conference on Machine
Learning and Cybernetics, vol. 5, pp. 2945-
2949.
[16] Poon, B., Amin, M. A. and Yan, H. (2011),
Performance evaluation and comparison of
PCA based human face recognition methods for
distorted images. International Journal of
Machine Learning and Cybernetics, 2(4), 245-
259.
[17] Rady, H. (2011), Face Recognition using
Principle Component Analysis with different
Distance Classifiers. International Journal of
Computer Science and Network Security, vol.11
no.10, pp. 134-144
[18] Sharif, M, Sajjad M. and Javed, M.Y. (2012), A
Survey: Face Recognition Techniques. Research
Journal of Applied Sciences, Engineering and
Technology vol. 4, no. 23, pp. 4979-4990.
[19] Shermina, J., (2011), Face recognition system
using multi linear principal component analysis
and locality preserving projection. IEEE GCC
Conference and Exhibition, 19-22 Feb., Stirling,
UK, pp. 283-286.
[20] Ştefan Răzvan Cristian (2014) Lucrare de
licenţă, Recunoaştere facială: Metoda
Eigenface. Universitatea „Petru Maior” Tîrgu-
Mureş, coordonator ştiinţific Sz. Lefkovits
[Bachelor Thesis, Facial Recognition: The
Eigenface Method].
[21] Su, H., Feng, D.D., Zhao, R. (2002), Face
recognition using multi-feature and radial basis
function Network. Proceeding of the Pan-
Sydney Area Workshop on Visual Information
Processing, Sydney.
[22] Turk, M. and A. Pentland, A. (1991), Eigenfaces
For Recognition, Journal Of Cognitive
Neuroscience, Vol.3, pp.71-86.
[23] Zhang, D.Q.A., Zhou, Z.H. and Chen S.C.
(2006), Diagonal principal component analysis
for face recognition. Pattern Recognition vol.
39, no. 1, pp. 140-142.
[24] Yale Database,
http://www.vision.ucsd.edu/content/yale-face-database
... Wajah merupakan salah satu bagian dari manusia yang memiliki ciri berbeda dari yang lain-nya. Wajah dapat digunakan untuk mengenali seseorang, seperti kebutuhan absensi, pendataan penduduk, dan sistem keamanan, dengan menggunakan sistem pengenalan wajah [2]. Karena memiliki ciri yang berbeda wajah menjadi salah satu sistem keamanan yang sulit ditembus, seperti dari pencahayaan, warna kulit, potongan rambut, kacamata serta posisi wajah yang bebeda yang dalam keadaan menunduk, menoleh atau mengadah [3]. ...
Article
Full-text available
Abstrak−Tingkat keamanan pada hal akses menjadi salah satu prioritas utama setiap orang untuk meningkatkan sistem keamanan yang dirasa perlu adanya peningkatan mengikuti perkembangan teknologi modern. Penelitian ini membahas tentang sebuah sistem keamanan brangkas menggunakan face recognition yang berbasisAndroid. Penelitian bertujuan agar sistem keamanan brangkas memiliki tingkat keamanan yang lebih baik dari sistem sebelumnya. Tahap awal untuk membangun sistem ini, penulis melakukan tahap pengumpulan data secara literatur sebagai dasar teori dan metode pengembangan sistem yang digunakan oleh perancang perangkat lunak sebelumnya ialah metode waterfall, secara umum metode ini terbagi menjadi beberapa tahapan, diantaranya: Analisis, Desain, Kode Program dan Pengujian Unit. Untuk metode yang digunakan pada penelitian sistem ini adalah metode algoritma eigenfaces untuk tahap pendeteksian objek wajah pada proses awal training image. Serta metode algoritma Local Binary Patterns dan Histrogram Equalization pada tahap membaca gambar pengenalan wajah si pengguna secara akurat yang memiliki tingkat keakuratan membaca wajah hingga 95.56%. Hasil data wajah user akan diproses di Wemos D1 dan data akan dikirim dan disimpan dalam database. Hasil data dari data face recognition akan digunakan lagi sebagai data user untuk membuka brangkas. Kesimpulan yang didapat, sistem dapat membaca wajah user secara real time dan dapat bekerja secara baik untuk sistem keamanan brangkas. Kata Kunci: Android, Kamera, Pengenalan Wajah, Waterfall dan Wemos D1. Abstract−The level of security in terms of access is one of the main priorities of everyone to improve the security system that feels the need for improvement following the development of modern technology. This study discusses a security system using Android-based face recognition. The aim of this research is that the safe safety system has a better level of security than the previous system. 
The initial stage to build this system, the authors do the literature data collection stage as a basis for the theory and system development methods used by software designers before is the waterfall method, in general this method is divided into several stages, including: Analysis, Design, Program Code and Unit Testing. For the method used in the research of this system is the eigenfaces algorithm method for the detection of facial objects in the initial process of image training. As well as the Local Binary Patterns algorithm method and Histrogram Equalization at the stage of reading the user's face recognition image accurately which has an accuracy of face reading up to 95.56%. The results of the user's face data will be processed in Wemos D1 and the data will be sent and stored in a database. The results of data from face recognition data will be used again as user data to open the safe. The conclusion, the system can read the user's face in real time and can work well for safe security systems. Sesuai dengan perkembangan ilmu dan teknologi yang ada pada masa kini, yang menjadi sorotan untuk pengembangan sistem yaitu sistem identifikasi yang sekarang ini sedang banyak digunakan di era modern ialah proses mengidentifkasi menggunakan informasi biologis seperti wajah, retina, dan bagian anggota tubuh lainnya. Salah satu identifikasi yang memiliki tingkat keakuratan tinggi yaitu wajah. Wajah setiap orang memiliki keunikan yang berbeda-beda untuk di identifikasi. Identifikasi tersebut dapat digunakan untuk membuka sebuah kunci brangkas yang ber isi barang-barang [1]. Wajah merupakan salah satu bagian dari manusia yang memiliki ciri berbeda dari yang lain-nya. Wajah dapat digunakan untuk mengenali seseorang, seperti kebutuhan absensi, pendataan penduduk, dan sistem keamanan, dengan menggunakan sistem pengenalan wajah [2]. 
Karena memiliki ciri yang berbeda wajah menjadi salah satu sistem keamanan yang sulit ditembus, seperti dari pencahayaan, warna kulit, potongan rambut, kacamata serta posisi wajah yang bebeda yang dalam keadaan menunduk, menoleh atau mengadah [3]. Pada penelitian sebelumnya sudah ada sistem pengenalan wajah menggunakan metode Hidden Markov Models (HMM) yang mencapai tingkat akurasi sebesar 84,28%, dengan database 70 gambar yang terdiri dari 10 individu dengan masing-masing memiliki perbedaan. Dan penelitian ini pun sudah menggunakan eigenfaces juga untuk mendukung posisi dari tampak wajah mulai dari tampak depan, atas, bawah, kanan, kiri, ukuran kecahayan dan latar belakang [4]. Dalam penelitian yang akan dibahas pada penulisan ini yang berjudul "Perancangan Sistem Keamanan Brangkas Menggunakan Pengenalan Wajah Berbasis Android". Dalam tahap peracnangan sistem yang akan dibangun ini penulis menggunakan metode LBPH yang memiliki tingkat akurasi paling besar saat ini yaitu 95,56%. Dengan menggunakan eigenfaces untuk mendukung tampak citra yang lebih luas sehingga pengguna tak perlu harus selalu tegak lurus menghadap kamera [5][6]. Eigenfaces menggunakan analisa komponen utama dari wajah atau foto wajah. Analisa ini dilakukan hanya menggunakan fitur yang sangat penting untuk pengenalan wajah. Eigenfaces adalah satu set vector eigen yang digunakan untuk membaca wajah manusia. Metode ini Dikembangkan oleh (Sirivich dan Kriby. 1987) dan digunakan oleh Matthew Turk dan Alex Pentland dalam klasifikasi wajah [7][8].
... Wajah merupakan salah satu bagian dari manusia yang memiliki ciri berbeda dari yang lain-nya. Wajah dapat digunakan untuk mengenali seseorang, seperti kebutuhan absensi, pendataan penduduk, dan sistem keamanan, dengan menggunakan sistem pengenalan wajah [2]. Karena memiliki ciri yang berbeda wajah menjadi salah satu sistem keamanan yang sulit ditembus, seperti dari pencahayaan, warna kulit, potongan rambut, kacamata serta posisi wajah yang bebeda yang dalam keadaan menunduk, menoleh atau mengadah [3]. ...
Article
Full-text available
The level of security in terms of access is one of the main priorities of everyone to improve the security system that feels the need for improvement following the development of modern technology. This study discusses a security system using Android-based face recognition. The aim of this research is that the safe safety system has a better level of security than the previous system. The initial stage to build this system, the authors do the literature data collection stage as a basis for the theory and system development methods used by software designers before is the waterfall method, in general this method is divided into several stages, including: Analysis, Design, Program Code and Unit Testing. For the method used in the research of this system is the eigenfaces algorithm method for the detection of facial objects in the initial process of image training. As well as the Local Binary Patterns algorithm method and Histrogram Equalization at the stage of reading the user's face recognition image accurately which has an accuracy of face reading up to 95.56%. The results of the user's face data will be processed in Wemos D1 and the data will be sent and stored in a database. The results of data from face recognition data will be used again as user data to open the safe. The conclusion, the system can read the user's face in real time and can work well for safe security systems
... From then on, the same idea was used in different domains of object recognition and human identification. The theoretical aspects and the approach were described in detail in [16]. ...
... The eigenface method is a face recognition approach that uses Principal Component Analysis to find a set of projection vectors such that the projected data retain the most information about the original data [10]; in other words, it is a feature extraction method that has proven effective in digital face recognition [11]. However, the success of face detection is affected by noise factors such as salt & pepper, Gaussian, or Poisson noise added to the test images [12]. ...
Conference Paper
Full-text available
According to data extracted from the Indonesia Central Bureau of Statistics, there were 125,869 theft cases during 2015, consisting of 11,856 cases of crime against property with violence and 114,013 non-violent cases [1]. Theft often occurs in unoccupied homes, and also in homes that have security cameras, since the installed cameras cannot warn homeowners or prevent the crime. This can be anticipated if the homeowner receives information about the condition of the house in real time, wherever he is. This work designs a smart home system integrated with a security method, in particular face detection: the eigenface method is used as the image processing method to recognize home occupants and deter thieves, so the homeowner can find out whether there are intruders in the house wherever he is. An unknown face activates the alarm built into the house and sends a message to the homeowner. Other sensors include water level, gas, flame, vibration, light, and motion; all sensors are managed by a combination of two microcontrollers, a Raspberry Pi 3 as the data server and an Arduino as the centre of the sensor circuit. Homeowners can monitor the house, receive notifications about how safe it is, and control all the sensors and the fence from a smartphone via an Android application.
Article
Full-text available
In this study, the existing face recognition techniques are reviewed along with their pros and cons to form a brief survey. The most common methods include Eigenfaces (eigenfeatures), Hidden Markov Models (HMM), geometric-based approaches, and template matching. The survey analyses how these approaches constitute face representations, which are discussed below. In its second phase, factors affecting recognition rates and processes are also discussed, along with the solutions provided by different authors.
Article
Full-text available
In this work, we use the PCA-based eigenface method to build a face recognition system with a recognition accuracy of more than 97% for the ORL database and 100% for the CMU databases. However, the main goal of this research is to identify the characteristics of eigenface-based face recognition while (1) the number of eigenface features or signatures in the training and test data is varied; (2) the amount of noise in the training and test data is varied; (3) the level of blurriness in the training and test data is varied; (4) the image size in the training and test data is varied; (5) variations in facial expression, pose, and illumination are incorporated in the training and test data; and (6) databases with different characteristics are used, for example with aligned and non-aligned images, or bright and dark images. We have observed that (1) in general, increasing the number of signatures increases the recognition rate, which however saturates after a certain point; (2) increasing the number of samples used in the calculation of the covariance matrix in PCA increases the recognition accuracy for a given number of individuals to identify; (3) increases in noise and blurriness have different effects on the recognition accuracy; (4) reducing the image size has very minimal effect on the recognition accuracy; (5) if fewer individuals are to be recognized, the recognition accuracy increases; (6) alignment of the facial images increases recognition accuracy; and (7) expression and pose have minimal effect on the recognition rate, while illumination has a great impact on the recognition accuracy. Keywords: Face recognition, Principal Component Analysis (PCA), Eigenface, Covariance matrix, Face database
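The noise experiments described above rely on corrupting test images with salt & pepper, Gaussian, and Poisson noise. A minimal NumPy sketch of such corruption, assuming grayscale images scaled to [0, 1], might look as follows; the noise amounts, the sigma, and the Poisson photon scale are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to pure black or white."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1 - amount / 2] = 1.0      # salt
    return noisy

def add_gaussian(img, sigma=0.05, rng=None):
    """Add zero-mean Gaussian noise and clip back to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_poisson(img, scale=255.0, rng=None):
    """Simulate Poisson (shot) noise at a given photon scale."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(rng.poisson(img * scale) / scale, 0.0, 1.0)
```

Unlike the additive Gaussian case, salt & pepper noise destroys individual pixels entirely and Poisson noise is signal-dependent, which is one reason the different noise types affect recognition accuracy differently.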
Conference Paper
Full-text available
In this work, we use the PCA-based method to build a face recognition system with a recognition rate of more than 97% for the ORL and 100% for the CMU databases. However, the main goal of this research is to identify the characteristics of face recognition rates while i) the number of training and test data is varied; ii) the amount of noise in the training and test data is varied; iii) the level of blurriness in the training and test data is varied; iv) the image size in the training and test data is varied; and v) different databases with aligned images are used. We have observed that i) in general, increasing the number of signatures on images increases the recognition rate, which however saturates after a certain point; ii) increasing the number of samples used in the calculation of the covariance matrix increases the recognition accuracy for a given number of individuals to identify; iii) increases in noise and blurriness affect the recognition accuracy; iv) reducing the image size has very minimal effect on the recognition accuracy; v) if fewer individuals are to be recognized, the recognition accuracy increases; and vi) using aligned images increases the recognition accuracy.
Conference Paper
Full-text available
3D face recognition systems improve on current 2D image-based approaches, but in general they must deal with larger amounts of data. A compact representation of 3D faces is therefore often crucial for better manipulation of the data, in the context of 3D face applications such as smart-card identity verification systems. We propose a new compact 3D representation that focuses on the most significant parts of the face. We introduce a generative learning approach by adapting Hidden Markov Models (HMM) to work on 3D meshes. The geometry of the local area around facial fiducial points is modelled by training HMMs, which provide a robust pose-invariant point signature. This description allows matching by comparing the signatures of corresponding points under a maximum-likelihood principle. We show that our descriptor is robust when recognizing expressions and performs faster than current ICP-based 3D face recognition systems while maintaining a satisfactory recognition rate. Preliminary results on a subset of the FRGC 2.0 dataset are reported for subjects under different expressions.
Article
Face recognition has received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. It can be applied to security measures at airports, passport verification, criminal-list verification in police departments, visa processing, verification of electoral identification, and card security at ATMs. Principal Component Analysis (PCA) is among the most common feature extraction techniques used in face recognition. In this paper, a face recognition system for personal identification and verification using Principal Component Analysis with different distance classifiers is proposed. The test results on the ORL face database are interesting from the point of view of recognition success rate and robustness of the face recognition algorithm. Different classifiers were used to match the image of a person to a class (a subject) obtained from the training data: the city-block distance classifier, the Euclidean distance classifier, the squared Euclidean distance classifier, and the squared Chebyshev distance classifier. The Euclidean distance classifier produces a recognition rate higher than the city-block distance classifier, which in turn gives a recognition rate higher than the squared Chebyshev distance classifier. The Euclidean distance classifier also gives a recognition rate similar to that of the squared Euclidean distance classifier.
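The distance classifiers compared in the abstract differ only in the metric used for the nearest-neighbour decision in eigenface weight space. A minimal sketch, assuming the PCA weight vectors are already computed (the function and parameter names are hypothetical):

```python
import numpy as np

def classify(train_weights, labels, test_weight, metric="euclidean"):
    """Nearest-neighbour classification of one PCA weight vector
    under a choice of distance metric."""
    d = train_weights - test_weight
    if metric == "cityblock":
        dist = np.abs(d).sum(axis=1)          # L1 distance
    elif metric == "euclidean":
        dist = np.sqrt((d ** 2).sum(axis=1))  # L2 distance
    elif metric == "sq_euclidean":
        dist = (d ** 2).sum(axis=1)           # L2 squared
    elif metric == "chebyshev":
        dist = np.abs(d).max(axis=1)          # L-infinity distance
    else:
        raise ValueError(f"unknown metric: {metric}")
    return labels[int(np.argmin(dist))]
```

Note that the Euclidean and squared Euclidean metrics always select the same nearest neighbour (squaring is monotonic), which is consistent with the similar recognition rates the abstract reports for the two.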
Article
We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
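The projection-and-compare procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration of the eigenface idea for vectorized grayscale faces, not the original near-real-time system; it uses the standard "small covariance" trick of diagonalizing the n-by-n matrix A·Aᵀ instead of the huge pixel covariance, and the function names are hypothetical.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix of vectorized training faces.
    Returns the mean face, the top-k eigenfaces, and the training weights."""
    mean = faces.mean(axis=0)
    A = faces - mean                       # centered data, shape (n, p)
    # Small-covariance trick: eigenvectors of the n x n matrix A A^T
    # map back to those of the p x p pixel covariance.
    vals, vecs = np.linalg.eigh(A @ A.T)   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]
    eigenfaces = A.T @ vecs[:, order]      # back to pixel space, shape (p, k)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    weights = A @ eigenfaces               # projection of each face, (n, k)
    return mean, eigenfaces, weights

def recognize(face, mean, eigenfaces, weights):
    """Return the index of the closest training face in eigenface space."""
    w = (face - mean) @ eigenfaces
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Recognition then reduces to comparing a handful of weights rather than full images, which is what makes the approach fast.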
Article
Face recognition technology has evolved as an attractive solution for identification and the verification of identity claims. With advances in feature extraction methods and dimensionality reduction techniques for pattern recognition applications, a number of facial recognition systems have been produced with varying degrees of success. In this paper, we present a biometric face recognition approach based on Multilinear Principal Component Analysis (MPCA) and Locality Preserving Projection (LPP), which enhances face recognition performance. The methodology consists of face image preprocessing, dimensionality reduction using MPCA, feature extraction using LPP, and face recognition using the L2 similarity distance measure. The proposed approach is validated on the FERET and AT&T face databases and compared in performance with the existing MPCA and LDA approaches. Experimental results show the effectiveness of the proposed approach for face recognition, with good recognition accuracy.
Article
Two-dimensional principal component analysis (2DPCA) is based on the 2D images rather than 1D vectorized images like PCA, which is a classical feature extraction technique in face recognition. Many 2DPCA-based face recognition approaches pay a lot of attention to the feature extraction, but fail to pay necessary attention to the classification measures. The typical classification measure used in 2DPCA-based face recognition is the sum of the Euclidean distance between two feature vectors in a feature matrix, called distance measure (DM). However, this measure is not compatible with the high-dimensional geometry theory. So a new classification measure compatible with high-dimensional geometry theory and based on matrix volume is developed for 2DPCA-based face recognition. To assess the performance of 2DPCA with the volume measure (VM), experiments were performed on two famous face databases, i.e. Yale and FERET, and the experimental results indicate that the proposed 2DPCA + VM can outperform the typical 2DPCA + DM and PCA in face recognition.
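2DPCA as summarized above builds an image scatter matrix directly from the 2D images and projects each image onto its leading eigenvectors; the distance measure (DM) then sums the Euclidean distances between corresponding feature-matrix columns. A minimal sketch under these definitions (function names are hypothetical; the paper's volume measure itself is not reproduced here):

```python
import numpy as np

def twodpca_fit(images, d):
    """images: (n, h, w) stack of 2D face images.
    Returns the mean image and the (w, d) projection matrix."""
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        C = A - mean
        G += C.T @ C                       # accumulate the image scatter matrix
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)         # eigenvalues in ascending order
    return mean, vecs[:, np.argsort(vals)[::-1][:d]]

def dm_distance(Y1, Y2):
    """Distance measure (DM): sum of Euclidean distances between
    corresponding columns of two feature matrices."""
    return np.linalg.norm(Y1 - Y2, axis=0).sum()
```

A feature matrix is obtained as `Y = (A - mean) @ X`, keeping an h-by-d matrix per image instead of a single flattened vector, which is the key difference from classical PCA.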
Article
In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.