Figure 2. 22 feature points defined on a generic face model.

Source publication
Conference Paper
Full-text available
3D facial range models can be created by static range scanners or real-time dynamic 3D imaging systems. One of the major obstacles to analyzing such data is the lack of correspondence of features (or vertices), due to the variable number of vertices across individual models or 3D model sequences. In this paper, we present an effective approach to auto...
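The citing works below describe the mapping-to-2D step of this paper in more detail. As a rough illustration of the correspondence idea only, here is a minimal Python sketch that assumes each mesh has already been flattened to a planar (u, v) embedding (e.g. by a conformal mapping) and matches vertices by nearest neighbor in that domain; this is not the paper's actual algorithm.

    # Hedged sketch: match vertices of two meshes with different vertex
    # counts via nearest-neighbor search in a shared 2D parameter domain.
    # The planar embeddings uv_src / uv_dst are assumed to be given.
    import numpy as np
    from scipy.spatial import cKDTree

    def correspond(uv_src, uv_dst):
        """uv_*: N x 2 planar embeddings of two meshes.
        Returns, for each source vertex, the index of its nearest target vertex."""
        tree = cKDTree(uv_dst)
        _, idx = tree.query(uv_src)
        return idx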

Similar publications

Article
Full-text available
Facial expressions convey rich information about human emotions and are among the most interactive channels of social communication, despite differences in ethnicity, culture, and geography. Due to cultural differences, variations in facial structure, facial appearance, and facial expression representation are the main challenges for facial expressi...
Conference Paper
Full-text available
This research project was an opportunity to carry out a study to better understand consumer behavior and the influence of brands in the decision-making process. We conducted a product test with the two main Portuguese beers: Super Bock and Sagres. We conducted an extensive review of the literature on consumer behavior, perception, attention, memory,...
Article
Full-text available
In the late 1970s, analysis of facial expressions to unveil emotional states began to grow and flourish along with new technologies and software advances. Researchers have always been able to document what consumers do, but understanding how consumers feel at a specific moment in time is an important part of the product development puzzle. Because...
Article
Full-text available
Emotions have been studied for a long time, and results show that they play an important role in human cognitive functions. In fact, emotions play an extremely important role in communication between people, and the human face is the most communicative part of the body for expressing emotions; it is recognized that a link exists between faci...

Citations

... These techniques usually use alignment methods to obtain better results. Among the tracking-based techniques, the work of Rosato et al. [26] addressed the lack of feature (or vertex) correspondence caused by the variable number of vertices across individual models or 3D model sequences in facial expression recognition, proposing a method for automatically establishing vertex correspondence between input scans or dynamic sequences. ...
... Twenty-two feature points are extracted on 2D face textures derived using a deformable template method [27]. In [26], the composition of descriptors and classifiers is the same as in [28], but the 2D face textures are generated using a corner-preserving mapping and a model adaptation algorithm. Among 3D facial model-based techniques, Zhang et al. [27] proposed a new 4D spatiotemporal "Nebula" feature to improve the performance of expression and facial motion analysis. ...
Preprint
Micro-expressions are nonverbal facial expressions that reveal the covert emotions of individuals, so the micro-expression recognition task has received widespread attention. However, the task is challenging due to the subtlety of the facial motion and its brevity in duration. Many 2D image-based methods have been developed in recent years to recognize micro-expressions (MEs) effectively, but these approaches are restricted by facial texture information and are susceptible to environmental factors such as lighting. Conversely, depth information can effectively represent motion information related to facial structure changes and is not affected by lighting. Motion information derived from facial structures can describe motion features that pixel textures cannot delineate. We propose a network for micro-expression recognition based on facial depth information, and our experiments demonstrate the crucial role of depth maps in the micro-expression recognition task. Initially, we transform the depth map into a point cloud and obtain the motion information for each point by aligning the onset frame with the apex frame and performing a differential operation. Subsequently, we adjust the input dimensions of all point-cloud motion features and use them as inputs to multiple point-cloud networks to assess the efficacy of this representation. PointNet++ was chosen for the final micro-expression recognition due to its superior performance. Our experiments show that our proposed method significantly outperforms existing deep learning methods, including the baseline, on the $CAS(ME)^3$ dataset, which includes depth information.
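As a rough sketch of the depth-to-point-cloud and frame-differencing steps described above, assuming a pinhole camera with known intrinsics and that the onset and apex clouds have already been registered to a common point order (neither detail is specified in the abstract), consider:

    # Hedged sketch: back-project a depth map into a point cloud, then take
    # per-point displacements between aligned onset and apex frames.
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """depth: (H x W) depth map; fx, fy, cx, cy: pinhole intrinsics."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                # drop invalid zero-depth pixels

    def motion_features(onset_pts, apex_pts):
        """Per-point motion vectors; assumes both clouds are registered and
        share the same point count/order (e.g. via ICP plus resampling)."""
        return apex_pts - onset_pts              # (N x 3) displacement field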
... A wide variety of classification techniques have been employed in 3D facial expression recognition systems. These include discriminant analysis [5,21], linear classifiers [22], nearest-neighbor classification [23], clustering [24], and probabilistic and maximum-likelihood classifiers [25-30]. More recently, the use of Artificial Intelligence (AI), such as Artificial Neural Networks (ANNs), has grown enormously in classifier studies and several other areas. ...
... Then, given a sequence from the testing set, the trained SVM classifier is used to classify it into one of the eight possible classes/expressions. The numbers in bold indicate the achieved performance of the proposed method. Although there are plenty of methods using 2D images, 2D videos and 3D images for facial expression recognition [12,18,26,39,40,42,47,48,55,65,70,72,87,89,93,94,98], we focus on 3D video modality techniques [7,14,33,50,54,66,76,82,90,102]. Only the technique presented in [14] is tested on the BP4D-Spontaneous data set and thus can be reliably compared to the proposed method. ...
Article
Full-text available
This work introduces a new scheme for action unit detection in 3D facial videos. Sets of features that define action unit activation in a robust manner are proposed. These features are computed based on eight detected facial landmarks on each facial mesh and involve angles, areas and distances. Support vector machine classifiers are then trained using the above features in order to perform action unit detection. The proposed AU detection scheme is used in a dynamic 3D facial expression retrieval and recognition pipeline, highlighting the most important AUs in terms of providing facial expression information and, at the same time, resulting in better performance than state-of-the-art methodologies.
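A minimal sketch of such landmark-derived geometric features (pairwise distances, angles and triangle areas over eight landmarks) feeding a binary SVM, one per action unit, might look like the following; the landmark set and feature choices here are illustrative, not the paper's exact definitions.

    # Hedged sketch: distances, angles and areas from 8 facial landmarks,
    # used to train a per-AU support vector machine classifier.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC

    def geometric_features(lm):
        """lm: (8 x 3) array of 3D facial landmark coordinates on one mesh."""
        feats = []
        for i, j in combinations(range(len(lm)), 2):
            feats.append(np.linalg.norm(lm[i] - lm[j]))       # pairwise distance
        for i, j, k in combinations(range(len(lm)), 3):
            a, b = lm[j] - lm[i], lm[k] - lm[i]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))  # angle at vertex i
            feats.append(0.5 * np.linalg.norm(np.cross(a, b)))  # triangle area
        return np.array(feats)

    # Dummy training data: one binary SVM for a single action unit.
    X = np.array([geometric_features(np.random.rand(8, 3)) for _ in range(100)])
    y = np.random.randint(0, 2, 100)             # AU active / inactive labels
    clf = SVC(kernel="rbf").fit(X, y)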
... Most of the methods have been developed for facial expression recognition from static 3D facial expression data [37][38][39][40][41]. However, recently, the research community has proposed methods that employ dynamic 3D facial expression data for this purpose [42,43]. The methods for 3D facial expression recognition usually consist of two main stages: feature extraction, and selection and classification of features, as in 2D face expression analysis. ...
Article
This article proposes a novel framework for the recognition of the six universal facial expressions. The framework is based on three sets of features extracted from the face image: entropy, brightness, and local binary patterns. First, saliency maps are obtained by a state-of-the-art saliency detection algorithm, i.e., "frequency-tuned salient region detection". The idea is to use the saliency maps to find appropriate weights for the extracted features (i.e., brightness and entropy). To validate the performance of the saliency detection algorithm against the human visual system, we performed a visual experiment. Eye movements of 15 subjects were recorded with an eye tracker in free-viewing conditions as they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment provided evidence that the obtained saliency maps conform well with human fixation data. Finally, evidence of the proposed framework's performance is exhibited through satisfactory classification results on the Cohn-Kanade database, the FG-NET FEED database, and the Dartmouth database of children's faces.
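As a rough sketch of how saliency maps might weight the brightness and entropy features, with an LBP histogram computed alongside, consider the following; the exact weighting rule is our assumption, not the article's formulation.

    # Hedged sketch: saliency-weighted brightness and entropy plus an LBP
    # histogram, using scikit-image. `saliency` is a precomputed map in [0, 1].
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.filters.rank import entropy
    from skimage.morphology import disk

    def expression_features(gray, saliency, n_bins=16):
        """gray: (H x W) uint8 face image; saliency: float map, same shape."""
        w = saliency / (saliency.sum() + 1e-9)                # normalized weights
        brightness = float((gray * w).sum())                  # weighted brightness
        ent = float((entropy(gray, disk(5)) * w).sum())       # weighted entropy
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, weights=saliency, density=True)
        return np.concatenate([[brightness, ent], hist])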
... The produced 2D mapped images carry richer and more precise shape cues than the depth images. Rosato, Chen, and Yin (2008) utilized the conformal mapping algorithm as an intermediate step to build a vertex correspondence between 3D expressive faces in 2D space in order to track the movements of a set of features. Recently, Zeng, Li, Chen, Morvan, and Gu (2013) explored the influence of the surface representation encoded in the conformal images. ...
... For classification, a probabilistic expression model was learned on the generalized manifold. In [RCY08], the composition of the descriptor and the classifier is the same as in [CVTV05], but in [RCY08] the 2D face texture is generated using a conformal mapping and model adaptation algorithm. The proposed coarse-to-fine model adaptation approach between the planar representations was used, and the correspondences were extrapolated back to the 3D meshes. ...
... A Linear Discriminant Analysis (LDA) classifier is implemented for the classification process. In [SCRY10], another version of [RCY08] is presented: instead of an LDA classifier, a spatio-temporal Hidden Markov Model (HMM) is implemented. ...
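As a rough sketch of how such a spatio-temporal HMM classifier can work, the snippet below fits one Gaussian HMM per expression class with hmmlearn and labels a test sequence by maximum log-likelihood; the feature dimensionality and state count are illustrative assumptions, not details from [SCRY10].

    # Hedged sketch: one Gaussian HMM per expression class; a sequence is
    # assigned to the class whose model scores it highest (log-likelihood).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_hmms(sequences_by_class, n_states=3):
        """sequences_by_class: {label: list of (T x D) feature sequences}."""
        models = {}
        for label, seqs in sequences_by_class.items():
            X = np.vstack(seqs)                  # concatenate all sequences
            lengths = [len(s) for s in seqs]     # per-sequence lengths
            models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
        return models

    def classify(models, seq):
        """Return the class label whose HMM gives seq the highest likelihood."""
        return max(models, key=lambda label: models[label].score(seq))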
... Only a few efforts have been reported on the automatic recognition of 3D facial expressions (e.g. [31], [19], [16], [42], [8]). These methods can be roughly classified into two categories: i) 2D landmark based [31], [19]; ii) learning based [42]. ...
... [31], [19], [16], [42], [8]). These methods can be roughly classified into two categories: i) 2D landmark based [31], [19]; ii) learning based [42]. By using the one-to-one correspondence between a 2D facial image and its 3D facial surface, 2D landmark based methods first automatically detect a set of facial landmarks on the 2D facial image and then map them to the corresponding 3D surface. ...
... Recently, conformal geometry has been introduced as a powerful surface shape analysis tool for tasks such as surface matching, recognition and stitching [34], [9], surface registration [40], 3D face recognition [28] and 3D FER [19], [22]. Differently from previous works, in this paper, for the first time, we propose to use the unique surface conformal representation, i.e., the conformal factor image (CFI) and mean curvature image (MCI), as expression features of the proposed 3D FER framework. ...
Conference Paper
Full-text available
We propose a general and fully automatic framework for 3D facial expression recognition by modeling sparse representations of conformal images. According to Riemannian geometry theory, a 3D facial surface S embedded in ℝ³, which is a topological disk, can be conformally mapped to a 2D unit disk D through the discrete surface Ricci flow algorithm. Such a conformal mapping induces a unique and intrinsic surface conformal representation denoted by a pair of functions defined on D, called the conformal factor image (CFI) and the mean curvature image (MCI). As facial expression features, the CFI captures the local area distortion of S induced by the conformal mapping, while the MCI characterizes the geometry information of S. To model sparse representations of conformal images for expression classification, both CFI and MCI are further normalized by a Möbius transformation. This transformation is defined by the three main facial landmarks (i.e. nose tip, left and right inner eye corners), which can be detected automatically and precisely. Expression recognition is carried out by the minimal sparse expression-class-dependent reconstruction error over the conformal-image-based expression dictionary. Extensive experimental results on the BU-3DFE dataset demonstrate the effectiveness and generalization of the proposed framework.
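A minimal sketch of the final classification step, class-wise sparse reconstruction error over a per-class dictionary of vectorized conformal images, is given below; orthogonal matching pursuit stands in here for the paper's sparse coder, which is an assumption on our part.

    # Hedged sketch: sparse-representation classification (SRC). Each class
    # has a dictionary of vectorized CFI/MCI atoms; a test sample is assigned
    # to the class with the smallest sparse reconstruction error.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(dictionaries, x, n_nonzero=10):
        """dictionaries: {label: (d x K) matrix of atoms}; x: length-d sample."""
        errors = {}
        for label, D in dictionaries.items():
            omp = OrthogonalMatchingPursuit(
                n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, x)
            errors[label] = np.linalg.norm(x - D @ omp.coef_)
        return min(errors, key=errors.get)       # minimal reconstruction error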
... Furthermore, these approaches can also have difficulties in dealing with non exaggerated and non prototypical facial expressions as they ignore appearance features. In the literature there also exist model-based approaches which typically fit a deformable face model to an input 3D face model [21, 23, 24]. For instance, Ramanathan et al. proposed in [21] a Morphable Expression Model (MEM) and used the fitted model parameters for facial expression recognition. ...
Article
Textured 3D face models capture precise facial surfaces along with the associated textures, making an accurate description of facial activities possible. In this paper, we present a unified probabilistic framework based on a novel Bayesian Belief Network (BBN) for 3D facial expression and Action Unit (AU) recognition. The proposed BBN performs Bayesian inference based on Statistical Feature Models (SFM) and the Gibbs–Boltzmann distribution, and features a hybrid approach that fuses geometric and appearance features along with morphological ones. When combined with our previously developed morphable partial face model (SFAM), the proposed BBN has the capacity to conduct fully automatic facial expression analysis. We conducted extensive experiments on two public databases, namely the BU-3DFE dataset and the Bosphorus dataset. When using manually labeled landmarks, the proposed framework achieved average recognition rates of 94.2% and 85.6% for the 7 and 16 AUs on face data from the Bosphorus dataset, respectively, and 89.2% for the six universal expressions on the BU-3DFE dataset. Using the landmarks automatically located by SFAM, the proposed BBN still achieved an average recognition rate of 84.9% for the six prototypical facial expressions. These experimental results demonstrate the effectiveness of the proposed approach and its robustness to landmark localization errors.
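As a toy illustration of the Gibbs–Boltzmann step, the snippet below turns class-conditional energies (lower meaning a better fit of the feature models) into a posterior over expressions; the energy values and the temperature are placeholders, not the paper's SFM formulation.

    # Hedged sketch: p(class) proportional to exp(-E_class / T), i.e. a
    # Gibbs-Boltzmann distribution over per-class energies.
    import numpy as np

    def gibbs_posterior(energies, temperature=1.0):
        logits = -np.asarray(energies, dtype=float) / temperature
        logits -= logits.max()                   # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    posterior = gibbs_posterior([3.2, 1.1, 2.4, 5.0, 0.9, 2.2])  # dummy energies
    predicted = int(np.argmax(posterior))        # most probable expression index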
... For classification, a probabilistic expression model was learned on the generalized manifold. In [22] the composition of the descriptor and the classifier is the same as in [21], but in [22] the 2D face texture is generated using a conformal mapping and model adaptation algorithm. The proposed coarse-to-fine model adaptation approach between the planar representations was used, and the correspondences were extrapolated back to the 3D meshes. ...
Conference Paper
Full-text available
This survey addresses methodologies for 3D mesh video retrieval including 3D mesh video action/motion retrieval and 3D mesh video facial expression recognition. They all involve retrieval procedures and, consequently, classification methods in order to identify similar actions/motions and facial expressions. The approaches are primarily categorized according to the 3D model representation that they use and their feature extraction and classification methods. Comparative data for the most promising methods is given, mainly on publicly available datasets.
... A conformal mapping is a function that maps points in the mesh into a new domain, whilst preserving angles between edges in the mesh. This idea is used in order to produce 2D representations of the 3D data in [65,74]. Circle pattern conformal mappings are employed to convert the data into a 2D planar mesh. ...
... The majority of systems developed have attempted recognition of expressions from static 3D facial expression data [91–97,68,98,69,99–101,67]. However, more recent works employ dynamic 3D facial expression data for this purpose [21,74,102,103,104,56,57]. The features extracted for static and dynamic systems can differ greatly, due to the nature of the data. ...
... This method achieved an average area under the ROC curve of 96.2% when tested on the same data. The authors in [74] used conformal mappings to convert the 3D meshes to 2D planar meshes and find correspondences, as described in Section 2. An example of the conformal mapping representation found for the face data in Fig. 11c can be seen in Fig. ...
Article
Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, and present currently available 3D/4D face databases suitable for 3D/4D facial expressions analysis as well as the existing facial expression recognition systems that exploit either 3D or 4D data in detail. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed.