LBPH based improved face recognition at low resolution

Authors:
Aftab Ahmed
School of Information & Software Engineering,
University of Electronic Science & Technology of
China
Chengdu, China
e-mail: aftabahmed@ibacc.edu.pk
Jiandong Guo
School of Information & Software Engineering,
University of Electronic Science & Technology of
China
Chengdu, China
e-mail: jdguo@uestc.edu.cn
Fayaz Ali
School of Information & Software Engineering,
University of Electronic Science & Technology of China
Chengdu, China
e-mail: fayazdharejo40@gmail.com
Farha Deeba
School of Information & Software Engineering,
University of Electronic Science & Technology of China
Chengdu, China
e-mail: farahdeebauestc@hotmail.com
Awais Ahmed
School of Information & Software Engineering,
University of Electronic Science & Technology of China
Chengdu, China
e-mail: engr.awais86@yahoo.com
Abstract-Automatic face recognition has been one of the most challenging problems in computer vision over the past decade. Law enforcement agencies are still unable to identify and recognize persons efficiently through video monitoring cameras: blur, illumination, resolution, and lighting conditions remain major problems in face recognition. Our proposed system operates at a minimum resolution of 35px and identifies the human face at various angles and in side poses, and tracks the face during human motion. We have designed a dataset (LR500) for training and classification. This paper employs the Local Binary Patterns Histogram (LBPH) algorithm to address real-time human face recognition at low resolution.
Keywords-face recognition; LBPH; low resolution; feature extraction
I. INTRODUCTION
Face recognition has become an important topic in computer vision, with applications in security, surveillance, banking, and other fields, but it remains challenging in terms of accuracy and efficiency. Over the years, scholars have developed many kinds of face recognition algorithms, including the Sparse Coding (SC) algorithm [1], the Local Binary Pattern (LBP) algorithm [2], the Histograms of Oriented Gradients (HOG) algorithm [3], the Linear Discriminant Analysis (LDA) algorithm [4], and the Gabor feature algorithm [5]. These algorithms provide accuracy rates between 50% and 76% [6]. Compared with the above algorithms, the LBPH algorithm can recognize not only the frontal face but also the side face, with a 90% accuracy rate [6].
II. WORK FLOW OF FACE RECOGNITION SYSTEM
Figure 1. Face recognition system work flow.
Most face recognition systems include four main parts: an information acquisition module, a feature extraction module, a classification module, and a training/classifier database module [7]. The image information collected by the acquisition module is used as a test sample for analysis. In the feature extraction module, features that can represent a person's identity are extracted and examined. In the classification module, the classifier trained on the database classifies the test samples to determine the identity of individuals.

International Conference on Artificial Intelligence and Big Data
978-1-5386-6987-7/18/$31.00 ©2018 IEEE
A. Face Detection
We use OpenCV, which provides a Haar cascade classifier [8], [12], for face detection. The Haar cascade classifier uses the AdaBoost algorithm to detect multiple facial features. First, it reads the image to be detected and converts it into a grayscale image, then loads the Haar cascade classifier to decide whether the image contains a human face. If so, it locates the facial features and draws a rectangular frame around the detected face; otherwise, it continues to the next picture.
B. Feature Extraction
The LBP operator describes the contrast of a pixel relative to its neighborhood pixels. The original LBP operator is defined on a 3*3 window. Using the center pixel value as the threshold of the window, it compares the threshold with the gray values of the adjacent 8 pixels: if a neighborhood pixel value is larger than or equal to the center pixel value, that pixel position is marked as 1, otherwise as 0 [9]. The thresholding function is defined in equation (1) and illustrated in Figure 2.

s(x) = { 1, x >= 0
       { 0, x < 0                                  (1)
Figure 2. Original LBP Operator.
In this way, the 8 points in the 3*3 neighborhood are compared to generate an 8-bit binary number. Converting it to a decimal number gives the LBP value of the middle pixel of the window, which describes the texture of the region. The current LBPH algorithm uses an improved circular LBP operator, represented by Figure 3 and equation (2).
Figure 3. Circular LBP Operator.

LBP_{P,R} = sum_{p=0}^{P-1} s(G_p - G_c) * 2^p     (2)
Here G_p (p = 0, ..., P-1) are the gray values of the P neighborhood pixels sampled on a circle of radius R around the center pixel C, and G_c is the gray value of C at (x_c, y_c). With this extension, the LBP operator is no longer limited to a fixed radius and neighborhood and can accommodate texture features of different sizes. For each pixel of an image, its LBP value is computed; these values form the LBP feature spectrum. The LBPH algorithm uses the histogram of the LBP feature spectrum as the feature vector for classification. It divides a picture into several sub-regions, extracts the LBP feature of each pixel in each sub-region, and establishes a statistical histogram of the LBP feature spectrum for each sub-region, so that the whole picture is described by a number of concatenated statistical histograms. The advantage is that this reduces the error introduced when the image is not fully aligned.
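A minimal sketch of the operator and the regional histograms described above, restricted to the basic 3x3 case (P=8, R=1); the function names are illustrative, not from the paper.

```python
def lbp_code(img, x, y):
    """8-bit LBP code of pixel (x, y): threshold the 8 neighbors
    against the center pixel, as in equations (1) and (2) with P=8, R=1."""
    center = img[y][x]
    # neighbors enumerated clockwise starting at the top-left
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:  # s(G_p - G_c) = 1
            code |= 1 << bit
    return code

def regional_histograms(img, k=8):
    """Divide the image into k x k regions and build one 256-bin LBP
    histogram per region; their concatenation is the feature vector."""
    h, w = len(img), len(img[0])
    feats = []
    for ky in range(k):
        for kx in range(k):
            hist = [0] * 256
            # skip the 1-pixel border where no 3x3 neighborhood exists
            for y in range(max(1, ky * h // k), min(h - 1, (ky + 1) * h // k)):
                for x in range(max(1, kx * w // k), min(w - 1, (kx + 1) * w // k)):
                    hist[lbp_code(img, x, y)] += 1
            feats.extend(hist)
    return feats
```

With k=8 this yields 64 histograms of 256 bins each, i.e. a 16384-dimensional feature vector per face image (the full basic-LBP histogram; the paper's uniform-pattern variant uses fewer bins per region).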
C. Dataset LR500
We have designed our own database, named LR500, which stores 500 images of each person. It is created on the basis of face detection: subjects make different facial expressions and poses in a scene while faces are detected, and the saved pictures are stored in the same folder to form the face database. During the image acquisition step, the dataset images are converted into grayscale images for feature extraction and then normalized for better recognition results. Normalization is applied to all images to remove noise and to set the alignment position of the images.
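The preprocessing above can be sketched as follows. The exact normalization used for LR500 is not specified in the paper, so grayscale conversion with standard luminosity weights and min-max contrast stretching is one common, assumed choice.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def normalize(gray):
    """Stretch pixel values to the full [0, 255] range, reducing
    lighting differences between dataset images."""
    lo, hi = gray.min(), gray.max()
    if hi == lo:  # flat image: nothing to stretch
        return np.zeros_like(gray)
    return (gray - lo) * 255.0 / (hi - lo)
```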
Figure 4. Test Images of Face Database LR500.
III. FACE RECOGNITION ALGORITHM
To perform face recognition, the Local Binary Pattern algorithm is applied. The LBP operator extracts local features that summarize the local spatial structure of a face image [10]. The LBP code is the binary number obtained by comparing the intensity of the center pixel with those of its eight surrounding pixels, as shown in equation (3).
(,)=
7
=0 (−)2 (3)
where i_c is the gray value of the center pixel at (x_c, y_c) and i_n (n = 0, ..., 7) are the gray values of the eight surrounding pixels. The features are extracted from the original image matrix, the values are compared with the center pixel value, and the resulting binary code is generated; this is very helpful in determining the face features.
The algorithm works as follows:
1. Start with temp = 0.
2. For each training image I:
3. Initialize the pattern histogram, H = 0.
4. Calculate the LBP pattern label.
5. Increment the corresponding histogram bin by 1.
6. Extract the LBP feature of each face image and merge the regional histograms into a single feature vector.
7. Compare the features of the input image with the stored features.
8. If the features match the stored database, the image is recognized.
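The steps above can be sketched as a tiny enrollment-and-matching loop: one concatenated LBP histogram is stored per training image, and a probe histogram is compared against the stored ones with a chi-square distance (a common choice for LBPH; the distance measure and function names here are illustrative assumptions).

```python
def chi_square(h1, h2):
    """Chi-square distance between two histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def train(histograms_by_label):
    """Steps 1-6: store one feature histogram per labeled face image."""
    return [(label, hist)
            for label, hists in histograms_by_label.items()
            for hist in hists]

def match(model, probe_hist):
    """Steps 7-8: return the label of the closest stored histogram."""
    return min(model, key=lambda entry: chi_square(entry[1], probe_hist))[0]
```

For example, a probe histogram close to one of "alice"'s stored histograms is matched to "alice" even if it is not identical, which is what makes the histogram representation tolerant to small misalignments.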
Figure 5. LBPH algorithm flowchart
A. Feature Vectors
In order to obtain the feature vectors, the pattern for each pixel is computed [11]. To represent faces efficiently, the image is subdivided into K^2 regions, e.g., 8^2 = 64 regions. For each region a histogram over all possible labels is built; each bin in a histogram represents one pattern. The feature vector is obtained by concatenating these regional histograms. Using uniform patterns, each regional histogram holds P(P-1) + 3 bins.
The LBP code can only be computed for pixels at a distance of at least R from the edges of the image, so a border strip of width R is not used. For an N x M image, the feature vector is built by calculating the LBP code for all pixels (x_c, y_c) with x_c in {R+1, ..., N-R} and y_c in {R+1, ..., M-R}. If the image is divided into K x K regions, the histogram of region (k_x, k_y), with k_x in {1, ..., K} and k_y in {1, ..., K}, is defined mathematically as:
H(i, k_x, k_y) = sum_{x,y} I{L(x, y) = i} I{x in X(k_x)} I{y in Y(k_y)},  i = 1, ..., P(P-1)+3   (4)

where the x-range of region column k_x is

X(k_x) = { R+1, ..., ceil(N/K)                       if k_x = 1
         { ceil((k_x-1)N/K)+1, ..., N-R              if k_x = K     (5)
         { ceil((k_x-1)N/K)+1, ..., ceil(k_x N/K)    otherwise

and the y-range of region row k_y is

Y(k_y) = { R+1, ..., ceil(M/K)                       if k_y = 1
         { ceil((k_y-1)M/K)+1, ..., M-R              if k_y = K     (6)
         { ceil((k_y-1)M/K)+1, ..., ceil(k_y M/K)    otherwise

in which L(x, y) is the LBP label of pixel (x, y) and

I{A} = 1 if A is true, 0 otherwise.   (7)
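The region boundaries in equations (5) and (6) simply carve an N-pixel axis into K intervals while excluding the R-pixel border where no LBP code exists. A sketch under that reading (the function name is illustrative):

```python
import math

def region_range(k_idx, K, N, R):
    """Inclusive pixel interval covered by region index k_idx in
    {1..K} along an axis of length N, excluding the R-pixel border."""
    start = R + 1 if k_idx == 1 else math.ceil((k_idx - 1) * N / K) + 1
    end = N - R if k_idx == K else math.ceil(k_idx * N / K)
    return start, end
```

For a 64-pixel axis with K = 8 and R = 1, the first region covers pixels 2..8, interior regions cover full eighths, and the last region stops at pixel 63.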
The feature vector thus encodes the face at three distinct levels of locality: the labels carry information at the pixel level, the regional histograms summarize small regions, and their concatenation provides a global description of the face.
IV. RESULTS AND DISCUSSION
In this experiment, each image in the face database has a distinct ID number. First, the face database is prepared; then the LBP texture features of each test image are extracted; finally, the face information is classified and recognized. For this test we collected 2500 face images, taken with a TTQ HD 1080px camera.

We compare the input face images with the database face images: after feature extraction, the features of a given input image are compared with the dataset, and the face image is either successfully recognized or rejected. An example is shown in Figure 6.
Figure 6. Unknown person.
Based on the algorithm, the face image of a known or unknown identity is compared with the face images of known individuals in the available database. In this research, we performed three major tasks: capturing, training, and recognizing face images using the camera.
A. Face Detection
In the face detection step, the system detects the face in an input image from the camera and captures the grayscale image.
B. Training Face Images
After the image acquisition and pre-processing tasks, we perform dataset training. In the training phase, the training recognizer stores the histogram values of the face images.
Figure 7. Face detection.
TABLE I. TRAINING IMAGES STATISTICS

Total Images | Recognized Images | Unrecognized Images | Training Time
2500         | 2470              | 30                  | 35 sec
Figure 8. Dataset training.
C. Recognize Face Image
The final task is to recognize face images. The Haar cascade classifier and the trained recognizer are used for face recognition. The classifier compares the stored face images with the input face images; if the face features of an input image match the database images, the recognition result is displayed on the camera screen.
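The recognize step ultimately makes a known-versus-unknown decision (as in Figure 6) by thresholding the best-match distance over the stored histograms. A hedged sketch; the threshold value is an illustrative assumption, not a number from the paper.

```python
def recognize(model, probe_hist, threshold=50.0):
    """Return the matched label, or None for an unknown person.
    `model` is a list of (label, histogram) pairs from training."""
    best_label, best_dist = None, float("inf")
    for label, hist in model:
        # chi-square distance between stored and probe histograms
        d = sum((a - b) ** 2 / (a + b)
                for a, b in zip(hist, probe_hist) if a + b > 0)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```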
Figure 9. Recognizing face images at 35px and 45px.
TABLE II. RECOGNITION ACCURACY RATE COMPARISON

Algorithm | 35px | 45px
LBPH      | 94%  | 90%
V. CONCLUSION AND FUTURE IMPROVEMENTS
We used Local Binary Patterns at low resolution for face recognition. The system contains three major parts, i.e., face representation, feature extraction, and classification. Face representation describes how the input face is modeled, which in turn constrains the detection and recognition algorithms. For feature extraction, the LBPH histogram produced good results, and the detected input face is classified against the proposed dataset (LR500), so the system can determine whether it has recognized a known or an unknown person.

In the future, this approach could benefit security agencies in identifying criminals who have a criminal record in the database. It will help to recognize any known or unknown person in a surveillance area at the low resolutions caused by the long distance between the camera and the observed subject.
VI. ACKNOWLEDGMENT
This work is supported by Vice Professor Jiandong Guo
and School of Information and Software Engineering,
University of Electronic Science and Technology of China.
REFERENCES
[1] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, 1996, 381(6583): 607-609.
[2] W. L. Chao, J. J. Ding, and J. Z. Liu, "Facial expression recognition based on improved local binary pattern and class-regularized locality preserving projection," Signal Processing, 2015, 117: 1-10.
[3] L. Hu and R. Qiu, "Face recognition based on adaptive weighted HOG," Computer Engineering and Applications, 2017, 53(3): 164-168.
[4] Y. Jiang, P. Li, and Q. Wang, "Labeled LDA model based on shared background topic," Acta Electronica Sinica, 2015, (9): 1794-1799.
[5] Q. Wu, T. Wang, and Z. Li, "Improved face recognition algorithm based on Gabor feature and collaborative representation," Computer Engineering and Design, 2016, 37(10): 2769-2774.
[6] A. Singh, S. K. Singh, and S. Tiwari, "Comparison of face recognition algorithms on dummy faces," The International Journal of Multimedia & Its Applications (IJMA), Vol. 4, No. 4, August 2012.
[7] X. Zhao and C. Wei, "A real-time face recognition system based on the improved LBPH algorithm," 2017 IEEE 2nd International Conference on Signal and Image Processing.
[8] V. Garg and K. Garg, "Face recognition using Haar cascade classifier," Journal of Emerging Technologies and Innovative Research (JETIR), December 2016, Volume 3, Issue 12.
[9] H. Zhang, Z. Qu, L. Yuan, and G. Li, "A face recognition method based on LBP feature for CNN," 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC).
[10] T. Chen, W. Yin, X. S. Zhou, D. Comaniciu, and T. S. Huang, "Total variation models for variable lighting face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9): 1519-1524, 2006.
[11] W. Zhao and R. Chellappa, "Robust face recognition using symmetric shape-from-shading," Technical Report, Center for Automation Research, University of Maryland, 1999.
[12] Z. Xiang, H. Tan, and W. Ye, "The excellent properties of dense grid-based HOG features on face recognition compared to Gabor and LBP," 2018.