Fig 1 - uploaded by Russ Greiner
Two classifiers on a set of positive and negative examples. Note the various sub-clusters of positive examples.

Source publication
Conference Paper
Full-text available
While there has been a great deal of research in face detection and recognition, there has been very limited work on identifying the expression on a face. Many current face detection projects use a (Viola/Jones) style "cascade" of Adaboost-based classifiers to interpret (sub)images — e.g., to identify which regions contain faces. We extend this met...

Contexts in source publication

Context 1
... one of our learned classifiers might do very well on one cluster, but relatively poorly on another. Consider, for example, the examples shown in Figure 1, and notice the positive instances can be grouped into 3 clusters. (Here, imagine every instance labeled "+" corresponds to a HappyFace, "3" to a SadFace, and "2" to an AngryFace.) ...
Context 2
... give the motivation for our approach first and then present the details of our algorithm. Figure 1 shows two classifiers C1 and C2 that are each, independently, trying to separate positive from negative examples; each line of C1 (resp., C2) denotes a linear separator, and the intersection of these separators corresponds to the classifier. At a high level, the positive examples can be approximately grouped into three sub-clusters. ...
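The geometry described here can be sketched in a few lines: a point is labeled positive only when it falls on the positive side of every linear separator, so the intersection of the half-planes acts as the classifier's decision region. The separators below are hypothetical stand-ins, not the actual ones from Figure 1.

```python
import numpy as np

def halfspace_intersection_classify(x, separators):
    """Classify x as positive only if it satisfies every separator.

    separators: list of (w, b) pairs; the positive side of each
    linear separator is defined by w.x + b >= 0. The intersection
    of all the positive half-planes is the classifier's region.
    """
    return all(np.dot(w, x) + b >= 0 for w, b in separators)

# Two hypothetical separators whose intersection is the first quadrant.
seps = [(np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]
```

A point inside the intersection (e.g. `[1, 1]`) is positive; a point on the wrong side of either separator (e.g. `[-1, 1]`) is negative.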
Context 3
... is also important to choose the d1 most effective classifiers. The positive instances in Figure 1 include members of a left-most sub-cluster, labeled "3", and a right-most one, labeled "+". We can see that the positive instances in these two sub-clusters have different ranges for their X-values, while they have a similar range for their Y-values. This means it is easy to separate the left-most cluster from the right-most if we project the data onto the X-axis, but this is not true if we project onto the Y-axis. ...
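A small numeric sketch of this observation, using made-up clusters with the X/Y ranges described above: the X projections of the two clusters are disjoint, so a single threshold separates them, while their Y projections overlap.

```python
import numpy as np

# Hypothetical data mimicking Figure 1's description: a left-most and a
# right-most sub-cluster with distinct X ranges but similar Y ranges.
rng = np.random.default_rng(0)
left = np.column_stack([rng.uniform(0.0, 1.0, 50), rng.uniform(2.0, 4.0, 50)])
right = np.column_stack([rng.uniform(5.0, 6.0, 50), rng.uniform(2.0, 4.0, 50)])

# Projection onto the X-axis: the ranges are disjoint, so one
# threshold between them separates the clusters.
x_separable = left[:, 0].max() < right[:, 0].min()

# Projection onto the Y-axis: the ranges overlap heavily, so no single
# threshold can separate the clusters.
y_separable = (left[:, 1].max() < right[:, 1].min()
               or right[:, 1].max() < left[:, 1].min())

print(x_separable, y_separable)
```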

Similar publications

Chapter
Recognizing facial expressions via algorithms has been a problematic mission among researchers from fields of science. Numerous methods of emotion recognition were previously proposed based on one scheme using one data set or using the data set as it is collected to evaluate the system without performing extra pre-processing steps such as data bala...
Article
Full-text available
Abstract For smart living applications, personal identification as well as behavior and emotion detection becomes more and more important in our daily life. For identity classification and facial expression detection, facial features extracted from face images are the most popular and low-cost information. The face shape in terms of landmarks estim...
Conference Paper
Full-text available
Automatic analysis of human facial expression is one of the challenging problems in machine vision systems. The most expressive way humans display emotion is through facial expression. In this paper, we extend texture based facial expression recognition, with a method of 2D image processing implemented for extraction of features and new neural netw...
Article
Full-text available
Neural network classifying method is used in this work to perform facial expression recognition. The processed expressions were the six most pertinent facial expressions and the neutral one. This operation was implemented in three steps. First, a neural network, trained using Zernike moments, was applied to the set of the well known Yale and JAFFE...
Article
A methodology for automatic facial expression recognition in image sequences is proposed, which makes use of the Candide wire frame model and an active appearance algorithm for tracking, and support vector machine (SVM) for classification. A face is detected automatically from the given image sequence and by adapting the Candide wire frame model pr...

Citations

... The second contribution of the Viola-Jones detector is building a specific feature-based classifier using an AdaBoost algorithm. The third contribution of the Viola-Jones algorithm is identifying a cascade structure, which consists of combining many complex classifiers [18]. The cascade object detector eliminates unimportant areas, such as an image's background, and focuses on the important areas of the image that contain a given object, such as a facial region [39,42]. ...
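The early-rejection behavior of a cascade described in this context can be sketched as follows; the stage classifiers here are hypothetical stand-ins, not the actual Viola-Jones stages.

```python
def cascade_classify(window, stages):
    """Run a window through a cascade of increasingly strict stages.

    Each stage is a cheap boolean classifier. A window is rejected as
    soon as any stage says "no", so most background windows are
    discarded early and only promising regions (e.g. face candidates)
    reach the later, more expensive stages.
    """
    for stage in stages:
        if not stage(window):
            return False  # rejected early, e.g. a background region
    return True  # passed every stage, e.g. a face candidate

# Toy stages operating on an integer "window" for illustration only.
stages = [lambda w: w > 0, lambda w: w % 2 == 0]
```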
Article
Full-text available
Due to the significant growth of video data over the Internet, video steganography has become a popular choice. The effectiveness of any steganographic algorithm depends on the embedding efficiency, embedding payload, and robustness against attackers. The lack of the preprocessing stage, less security, and low quality of stego videos are the major issues of many existing steganographic methods. The preprocessing stage includes the procedure of manipulating both secret data and cover videos prior to the embedding stage. In this paper, we address these problems by proposing a novel video steganographic method based on Kanade-Lucas-Tomasi (KLT) tracking using Hamming codes (15, 11). The proposed method consists of four main stages: a) the secret message is preprocessed using Hamming codes (15, 11), producing an encoded message, b) face detection and tracking are performed on the cover videos, determining the region of interest (ROI), defined as facial regions, c) the encoded secret message is embedded using an adaptive LSB substitution method in the ROIs of video frames. In each facial pixel 1 LSB, 2 LSBs, 3 LSBs, and 4 LSBs are utilized to embed 3, 6, 9, and 12 bits of the secret message, respectively, and d) the process of extracting the secret message from the RGB color components of the facial regions of stego video is executed. Experimental results demonstrate that the proposed method achieves higher embedding capacity as well as better visual quality of stego videos. Furthermore, the two preprocessing steps increase the security and robustness of the proposed algorithm as compared to state-of-the-art methods.
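The Hamming codes (15, 11) preprocessing step mentioned in this abstract can be sketched as a standard single-error-correcting encoder/decoder; the exact bit ordering and embedding integration used by the paper may differ.

```python
def hamming_15_11_encode(data_bits):
    """Encode 11 data bits into a 15-bit Hamming codeword.

    Parity bits occupy positions 1, 2, 4, 8 (1-indexed); parity bit p
    makes the XOR over all positions whose index has bit p set even.
    """
    assert len(data_bits) == 11
    code = [0] * 16  # index 0 unused; positions 1..15
    it = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):
            code[pos] = next(it)  # place data bits in non-parity slots
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 16):
            if pos & p and pos != p:
                parity ^= code[pos]
        code[p] = parity
    return code[1:]

def hamming_15_11_decode(codeword):
    """Correct up to one bit error and return the 11 data bits."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 16):
            if pos & p:
                parity ^= code[pos]
        if parity:
            syndrome += p  # syndrome accumulates the error position
    if syndrome:
        code[syndrome] ^= 1  # flip the erroneous bit
    return [code[pos] for pos in range(1, 16) if pos not in (1, 2, 4, 8)]
```

Encoding the message before LSB embedding lets the extractor recover from single-bit errors per 15-bit block, which is the robustness benefit the abstract attributes to this preprocessing stage.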
Thesis
Full-text available
In this work, we investigate pattern detection algorithms. We have made two contributions. First, we modified the discriminative filtering (DF) technique, that detects patterns using two-dimensional filtering, in order to obtain robust discriminative filters. This is achieved by designing filters for the highest energy principal components of the patterns. Then, we developed a method for pattern detection referred to as inner product detector (IPD), that is optimum in the sense of minimizing the mean-squared detection error. The IPD uses the inner product for determining if a candidate is a pattern of interest. We demonstrate that the discriminative filtering and the correlation filters are particular cases of the IPD. We also demonstrate how to design robust IPDs using principal components. The performance of the proposed methods is evaluated in the context of fiducial points detection in human faces using cross validation. This is performed for two face databases (BioID and Feret), using, respectively, 503 and 2004 labeled images. For comparison, we develop similar methods using linear and nonlinear SVM classifiers. The proposed methods provide competitive results when compared with the results of SVM-based methods.
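The inner-product test at the core of the IPD described in this abstract can be sketched as follows; the mean-patch template here is a placeholder, whereas the thesis derives an optimum template in the minimum mean-squared-error sense.

```python
import numpy as np

def train_template(patches):
    """Build a detection template from training patches.

    Here the template is simply the mean patch, flattened to a vector;
    this is a stand-in for the MSE-optimal template of the thesis.
    """
    return np.mean(patches, axis=0).ravel()

def detect(template, patch, threshold):
    """Declare the patch a pattern of interest when its inner product
    with the template exceeds the threshold."""
    return float(np.dot(template, patch.ravel())) >= threshold

# Toy 4x4 "patches": all-ones patches play the pattern of interest.
pos_patch = np.ones((4, 4))
neg_patch = np.zeros((4, 4))
template = train_template(np.stack([pos_patch, pos_patch]))
```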