Figure - available from: Mathematical Problems in Engineering
Schematic diagram of LBP feature experiment.

Source publication
Article
Full-text available
To address the shortcomings of traditional facial expression recognition (FER), which uses only a single feature and achieves a low recognition rate, a FER method based on fusion of transformed multilevel features and an improved weighted-voting SVM (FTMS) is proposed. The algorithm combines transformed traditional shallow features and convolut...

Similar publications

Preprint
Full-text available
Facial expression recognition (FER) of 3D face scans has received a significant amount of attention in recent years. Most facial expression recognition methods have been proposed using mainly 2D images. These methods suffer from several issues, such as illumination changes and pose variations. Moreover, 2D mapping from 3D images may lack some g...

Citations

... The literature is replete with studies that have investigated training SVMs with features extracted from CNNs (see, for example, [7,10,19,21-25]). SVMs are a preferred classifier mainly because they avoid the computational resources needed to fine-tune CNNs. ...
Article
Full-text available
Features play a crucial role in computer vision. Initially designed to detect salient elements by means of handcrafted algorithms, features now are often learned using different layers in convolutional neural networks (CNNs). This paper develops a generic computer vision system based on features extracted from trained CNNs. Multiple learned features are combined into a single structure to work on different image classification tasks. The proposed system was derived by testing several approaches for extracting features from the inner layers of CNNs and using them as inputs to support vector machines that are then combined by sum rule. Several dimensionality reduction techniques were tested for reducing the high dimensionality of the inner layers so that they can work with SVMs. The empirically derived generic vision system based on applying a discrete cosine transform (DCT) separately to each channel is shown to significantly boost the performance of standard CNNs across a large and diverse collection of image data sets. In addition, an ensemble of different topologies taking the same DCT approach and combined with global mean thresholding pooling obtained state-of-the-art results on a benchmark image virus data set.
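The per-channel DCT reduction step described in this abstract can be sketched as follows. This is a numpy-only illustration under assumed shapes; the function names, channel count, and number of retained coefficients are hypothetical choices, not the authors' code, and the downstream SVM/sum-rule stage is omitted:

```python
import numpy as np

def dct2_1d(x):
    """Naive DCT-II of a 1-D signal (a numpy-only stand-in for a
    library DCT such as scipy.fft.dct)."""
    N = x.shape[0]
    n = np.arange(N)
    k = np.arange(N)[:, None]
    return (x * np.cos(np.pi * (n + 0.5) * k / N)).sum(axis=1)

def reduce_channels(feature_maps, n_coeffs=8):
    """Apply a DCT separately to each flattened channel of a CNN
    activation volume and keep the first n_coeffs low-frequency
    coefficients, so the result is small enough to feed an SVM."""
    out = []
    for ch in feature_maps:            # ch: 2-D activation map of one channel
        coeffs = dct2_1d(ch.ravel())
        out.append(coeffs[:n_coeffs])
    return np.concatenate(out)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 7, 7))  # e.g. 4 channels of 7x7 activations
vec = reduce_channels(fmap, n_coeffs=8)
print(vec.shape)  # (32,) = 4 channels x 8 coefficients
```

In the full system, one such vector per network layer would feed a separate SVM, and the SVMs' scores would be combined by sum rule.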
... The study in (8) reports a subject-independent accuracy of 88.5% on the JAFFE dataset and 93.2% on the CK+ dataset. Gabor features, LBP features, CNN deep features, joint geometric features, and mixed features are implemented in (9) for emotion recognition, with average recognition rates of 94.75% and 96.86% on JAFFE and CK+, respectively. An Appearance Network and a Geometric Network are combined to form a Deep Joint Spatiotemporal Network (DJSTN) (10), in which the authors apply 3D convolutions to face images to extract spatial and temporal features. ...
Article
Full-text available
Having experienced more than a year of pandemic, a variety of applications such as online classrooms, virtual office meetings, conferences, online games, social media and networks, mobile applications, and many other infotainment areas have made humans live with gadgets and respond to them. However, all these applications have an impact on human behavioral transformation. In the era of online offices and working from home, it is very significant for employers to understand the emotions of their employees in order to increase productivity. Learning and identifying emotions from the human face has applications in all online portals where physical contact cannot be achieved. Objective: Human facial emotions can be learned using numerous feature descriptors that extract image features. While local feature descriptors retrieve pixel-level information, global feature descriptors extract overall image information. Both kinds of feature descriptors quantify the image information; however, neither provides complete and relevant information. Hence, this research work aims to improve an existing local feature descriptor so that it performs globally for emotion recognition. Method: Our proposed feature descriptor, Patch-SIFT, collects features from multiple patches within an image. This strategy applies the local feature descriptor globally as a hybridization paradigm. The extracted features are trained and tested on an ensemble model. Findings: The proposed feature descriptor (Patch-SIFT), paired with an ensemble model, is found to produce an improved accuracy of 98% compared with existing feature descriptors and machine learning classifiers. Novelty: This research work evolves a new feature descriptor algorithm, based on the SIFT algorithm, for an efficient emotion recognition system that works without the need for any additional GPU or a huge dataset.
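The patch-collection idea behind Patch-SIFT can be sketched as below. This is a hypothetical numpy-only illustration of the patch-splitting step (the function name and patch size are assumptions); the per-patch SIFT descriptor computation itself is left out:

```python
import numpy as np

def extract_patches(img, patch_size=16):
    """Split an image into non-overlapping patches. A local descriptor
    (SIFT in the paper) would then be computed on each patch and the
    per-patch features concatenated, so the local descriptor covers
    the whole image."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches

img = np.zeros((48, 48), dtype=np.uint8)  # e.g. a 48x48 grayscale face crop
patches = extract_patches(img, patch_size=16)
print(len(patches))  # 9 patches (a 3x3 grid) from a 48x48 image
```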
Conference Paper
The present pandemic situation has kept people at home, making it difficult for both lecturers and students to teach and learn concepts thoroughly. Concepts can be learned through videos, but reading is also an important aspect of learning. This paper discusses providing books and notes online for reading and recommending books based on facial expressions captured from the user. It aims to extract faces from an image, extract the expression (eyes and lips), and classify them into six types of emotions: Happy, Fear, Anger, Surprise, Neutral, and Sad. The algorithm used for facial expression recognition is the Convolutional Neural Network (CNN).