Fig. 4
Examples of happy and sad faces with closed and open mouths, used in Experiment 2. Filtering operations were applied to these images in the same manner described for Experiment 1.

Source publication
Article
Full-text available
Face recognition depends critically on horizontal orientations (Goffaux & Dakin, Frontiers in Psychology, 1(143), 1-14, 2010): Face images that lack horizontal features are harder to recognize than those that have this information preserved. We asked whether facial emotional recognition also exhibits this dependency by asking observers to categoriz...

Context in source publication

Context 1
... A group of 21 undergraduate students (11 females/10 males) from North Dakota State University participated in this experiment. All reported normal or corrected-to-normal vision, provided written informed consent, and received course credit for their participation. Stimuli Twenty-six individual face images (13 male/13 female) expressing posed emotions (26 happy/26 sad) were taken from the NimStim Face Set (Tottenham et al., 2009) and were presented at 650 × 506 pixels. Each emotion was presented in two versions (Fig. 4): one expressed each emotion with a closed mouth, and the other with an open mouth (156 closed mouth/156 open mouth). The process of filtering our stimuli to obtain horizontally filtered and vertically filtered images was identical to that described in Experiment 1. We note, however, that our third condition in this task (horizontal + vertical) was composed of images combining the horizontal and vertical orientation subbands from the horizontal and vertical conditions. As compared to Experiment 1 (in which energy at all orientations was included at low spatial frequencies), this third set of images contained restricted orientation information at all spatial frequencies, allowing us to more closely examine the additive effects of horizontal and vertical orientation information. To distinguish between the two types of control images used in Experiment 1, we refer to the third orientation condition in this task as “horizontal + vertical,” to contrast these stimuli with the “broadband” stimuli employed in Experiment 1. Design We used a within-subjects design with the factors Emotion (happy, sad), Mouth Openness (open, closed), Image Orientation (upright or sideways), and Filter Orientation (vertical, horizontal, horizontal + vertical). Participants completed a total of 624 trials, broken into four blocks (156 upright with closed mouth, 156 upright with open mouth, 156 rotated with closed mouth, and 156 rotated with open mouth), with the factors Emotion and Filter Orientation randomized within each block. The block order was counterbalanced across participants. Procedure All stimulus display parameters and response collection routines were identical to those described in Experiment ...
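As a rough illustration of this kind of orientation-subband filtering, the sketch below keeps only the Fourier components that carry image structure near a chosen orientation. It is a minimal sketch under stated assumptions: the hard-edged angular window, the 20° bandwidth, and the way the combined image is formed are placeholders, not the parameters used for the published stimuli.

```python
import numpy as np

def keep_orientation_band(image, structure_deg, bandwidth_deg=20.0):
    """Retain Fourier components carrying image structure near structure_deg.

    structure_deg = 0 keeps horizontally oriented structure (e.g., brows, mouth);
    structure_deg = 90 keeps vertically oriented structure. Edges are orthogonal
    to the spectral orientation of the components that encode them.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]                   # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]                   # horizontal frequencies
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0    # spectral orientation, in [0, 180)
    target = (structure_deg + 90.0) % 180.0           # spectral band encoding that structure
    dist = np.abs((theta - target + 90.0) % 180.0 - 90.0)
    mask = (dist <= bandwidth_deg).astype(float)
    mask[0, 0] = 1.0                                  # always keep mean luminance (DC)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

# A "horizontal + vertical" image could then be approximated by summing the two
# subbands and removing the doubly counted mean luminance:
# hv = keep_orientation_band(img, 0) + keep_orientation_band(img, 90) - img.mean()
```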

Similar publications

Article
Full-text available
Basic facial emotion recognition is suggested to be negatively affected by puberty onset, reflected in a “pubertal dip” in performance compared to pre- or post-puberty. However, findings remain inconclusive. Further, research points to an own-age bias, i.e., superior emotion recognition for peer faces. We explored adolescents’ ability to recognize...

Citations

... Focussing on the influences of spatial frequency content, we show that decoding the emotional content within face images relies heavily on horizontal and diagonal image contrasts, while decoding which images will be perceived first relies on vertical image contrasts. Note that previous studies have also shown that horizontal, low-cycles-per-degree contrast energy is relevant for differentiating emotional expressions 41, overlapping with our analysis concerning the emotional content of faces (Fig. 3) but not with our analyses concerning access to awareness (Fig. 2). The finding that vertical spatial frequency content is most relevant for predicting access to awareness (Fig. 2) is partially, but not fully, in line with a study on the depth of suppression by Yang and Blake 25. ...
Article
Full-text available
Emotional faces have prioritized access to visual awareness. However, studies concerned with which expressions are prioritized most are inconsistent, and the source of prioritization remains elusive. Here we tested the predictive value, for prioritization for awareness, of spatial frequency-based image features and of emotional content, that is, the sub-part of the image content that signals the actor's emotional expression as opposed to the image content irrelevant to that expression. Participants reported which of two faces (displaying a combination of angry, happy, and neutral expressions), temporarily suppressed from awareness, was perceived first. Even though the results show that happy expressions were prioritized for awareness, this prioritization was driven by the contrast energy of the images. In fact, emotional content could not predict prioritization at all. Our findings show that the source of prioritization for awareness is not the information carrying the emotional content. We argue that the methods used here, or similar approaches, should become standard practice to break the chain of inconsistent findings regarding emotional superiority effects that have been part of the field for decades.
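One simple way to quantify the image-feature side of this argument is to measure how a face image's contrast energy distributes across orientation bands of its amplitude spectrum. The sketch below is a generic illustration, not the analysis used in the cited work; the bin width and normalization are arbitrary choices, and energy at a spectral orientation near 90° corresponds to horizontally oriented image structure.

```python
import numpy as np

def orientation_energy(image, bin_deg=15):
    """Relative contrast energy per orientation band of the amplitude spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    h, w = image.shape
    fy = (np.arange(h) - h // 2)[:, None]
    fx = (np.arange(w) - w // 2)[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0    # spectral orientation of each component
    edges = np.arange(0, 180 + bin_deg, bin_deg)
    energy = np.array([spec[(theta >= lo) & (theta < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return edges[:-1], energy / energy.sum()          # band lower edges, relative energy
```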
... The authors found that task-irrelevant fearful faces only captured attention if presented in their cardinal orientation, indicating that emotion recognition was relevant for the capture of attention (cf. Huynh and Balas, 2014). Salience, in contrast, was insufficient to explain the capture of attention, as salience was the same for cardinal and inverted orientations. ...
Article
Full-text available
In two experiments, we tested whether fearful facial expressions capture attention in an awareness-independent fashion. In Experiment 1, participants searched for a visible neutral face presented at one of two positions. Prior to the target, a backward-masked and, thus, invisible emotional (fearful/disgusted) or neutral face was presented as a cue, either at target position or away from the target position. If negative emotional faces capture attention in a stimulus-driven way, we would have expected a cueing effect: better performance where fearful or disgusted facial cues were presented at target position than away from the target. However, no evidence of capture of attention was found, neither in behavior (response times or error rates), nor in event-related lateralizations (N2pc). In Experiment 2, we went one step further and used fearful faces as visible targets, too. Thereby, we sought to boost awareness-independent capture of attention by fearful faces. However, still, we found no significant attention-capture effect. Our results show that fearful facial expressions do not capture attention in an awareness-independent way. Results are discussed in light of existing theories.
... Another candidate low-level image feature is its local edge orientations: not all oriented edges within a face image are thought to be equally relevant for emotion recognition. For example, horizontal edges are thought to be among the most relevant for recognition 33. However, this information was based on the Fourier content of the images and as such does not specify which structures in the face the horizontal edges belong to. ...
Article
Full-text available
Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the initial eye movement towards one out of two simultaneously presented faces. Interestingly, the identified features serve as better predictors than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
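A bare-bones version of this kind of feature-based decoding might look like the sketch below; the HOG parameters, the linear classifier, and the variable names (face_images, first_selected) are illustrative placeholders, not the pipeline used in the cited study.

```python
import numpy as np
from skimage.feature import hog                  # localized Histogram of Oriented Gradients
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def hog_features(images):
    """One HOG descriptor per 2-D grayscale face image (parameters are illustrative)."""
    return np.array([hog(im, orientations=8, pixels_per_cell=(16, 16),
                         cells_per_block=(1, 1)) for im in images])

# Hypothetical usage: X describes the faces, y codes which face was selected first
# (e.g., 1 if the face drew the initial eye movement, 0 otherwise).
# X = hog_features(face_images)
# y = first_selected
# print(cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean())
```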
... Another candidate stimulus property is its local edge orientations: not all oriented edges within a face image are thought to be equally relevant for emotion recognition. For example, horizontal edges are thought to be among the most relevant for recognition (Huynh & Balas, 2014). However, this information was based on the Fourier content of the images and as such does not specify which structures in the face the horizontal edges belong to. ...
Preprint
Full-text available
Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic label of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
... Recently, spatial orientations have garnered attention for the role horizontal information appears to play in various aspects of face processing, such as detection, identification (Dakin & Watt, 2009; Goffaux & Dakin, 2010; Goffaux, Duecker, Hausfeld, Schiltz, & Goebel, 2016; Goffaux & Greenwood, 2016; Goffaux & Schiltz, 2015; Pachai, Sekuler, Bennett, Schyns, & Ramon, 2017), and emotional facial expression recognition (Balas, Huynh, Saville, & Schmidt, 2015; Duncan et al., 2017; Huynh & Balas, 2014; Yu, Chai, & Chung, 2018). Horizontal information also supports behavioral signatures of face-processing specialization, such as the face inversion effect (Goffaux et al., 2010; Pachai, Sekuler, & Bennett, 2013). ...
... Horizontal information also supports behavioral signatures of face-processing specialization, such as the face inversion effect (Goffaux et al., 2010; Pachai, Sekuler, & Bennett, 2013). Interestingly, this information is object based, and a 90° image rotation will induce a shift toward vertical image structure, that is, toward horizontal facial structure (Huynh et al., 2014). This is in line with recent evidence suggesting that orientation tuning for faces is in fact flexible and depends on task demands (Goffaux, 2019). ...
... ability, horizontal tuning for cars (online supplemental Figure 1), and sensitivity to horizontal gratings (see, for details, online supplemental materials), r_partial = 0.39, 95% CI [0.08, 0.64], p < .05. This is in line with previous results suggesting that the horizontal tuning observed in face recognition is task specific (Goffaux, 2019; Huynh et al., 2014), because the correlation between processing ability and horizontal tuning for faces is not predicated upon an overall better use of horizontal image structure. Thus, it may be that face recognition expertise begets selectivity to horizontal facial structure (see also Pachai et al., 2017), which would then be reflected in the more systematic deployment of the optimal (horizontally tuned) processing strategy (Royer et al., 2018). ...
Article
Full-text available
In recent years, horizontal spatial information has received attention for its role in face perception. One study, for instance, has reported an association between horizontal tuning for faces and face identification ability measured within the same task. A possible consequence of this is that the correlation could have been overestimated. In the present study, we wanted to reexamine this question. We first measured face processing ability on the Cambridge Face Memory Test +, the Cambridge Face Perception Test, and the Glasgow Face Matching Test. A single ability score was extracted using a principal components analysis. In a separate task, participants also completed an identification task in which faces were randomly filtered on a trial basis using orientation bubbles. This task allowed the extraction of individual orientation profiles and horizontal tuning scores for faces. We then measured the association between horizontal tuning for faces and the face-processing ability score and observed a significant positive correlation. Importantly, this relation could not be accounted for by other factors such as object-processing ability, horizontal tuning for cars, or greater sensitivity to horizontal gratings. Our data give further credence to the hypothesis that horizontal facial structure plays a crucial role in face processing.
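The partial correlation reported in the citation context above can be illustrated with a simple residualization sketch. This is only a minimal sketch: the variable names are hypothetical, and the published analysis may have used different software and covariate coding.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after regressing the covariates out of both."""
    Z = np.column_stack([np.ones(len(x)), covariates])          # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]           # residualized x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]           # residualized y
    return stats.pearsonr(rx, ry)

# Hypothetical usage, mirroring the control variables described above:
# r, p = partial_corr(horizontal_tuning_faces, face_ability_score,
#                     np.column_stack([object_ability, horizontal_tuning_cars,
#                                      grating_sensitivity]))
```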
... In the face image, for example, the eyebrows are defined by relatively coarse and horizontally-oriented contrast whereas eyelashes are primarily defined by fine and vertical luminance variations. Several works showed that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image ([7][8][9][10][11]; see also [12][13][14][15] for evidence on emotional expression processing). This line of research offers a systematic and objective characterization of the visual information driving human face perception. ...
Article
Full-text available
Vision begins with the encoding of contrast at specific orientations. Several works showed that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image; this range conveys the main morphological features of the face. In contrast, the vertical structure of the eye region seems to deliver optimal cues to gaze direction. The present work investigates whether the human face processing system flexibly tunes to vertical information contained in the eye region when processing gaze direction. Alternatively, face processing may invariantly rely on the horizontal range, supporting the domain specificity of orientation tuning for faces and the gateway role of horizontal content to access any type of facial information. Participants judged the gaze direction of faces staring at a range of lateral positions. They additionally performed an identification task with upright and inverted face stimuli. Across tasks, stimuli were filtered to selectively reveal horizontal (H), vertical (V), or combined (HV) information. Most participants identified faces better based on horizontal than vertical information confirming the horizontal tuning of face identification. In contrast, they showed a vertically-tuned sensitivity to gaze direction. The logistic functions fitting the “left” and “right” response proportion as a function of gaze direction were indeed steeper when based on vertical than on horizontal information. The finding of a vertically-tuned processing of gaze direction favours the hypothesis that visual encoding of face information flexibly switches to the orientation channel carrying the cues most relevant to the task at hand. It suggests that horizontal structure, though predominant in the face stimulus, is not a mandatory gateway for efficient face processing. The present evidence may help better understand how visual signals travel the visual system to enable rich and complex representations of naturalistic stimuli such as faces.
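As an illustration of the logistic fit described in this abstract, the sketch below fits a two-parameter logistic to the proportion of “right” responses as a function of gaze direction; the data values and parameterization are made up for the example. Under this reading, a steeper fitted slope for vertically than for horizontally filtered faces would reflect the vertically tuned sensitivity to gaze direction reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, slope):
    """Proportion of 'right' responses as a function of gaze direction (deg)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - mu)))

# Illustrative data: tested gaze directions and observed proportion of 'right' responses.
gaze_deg = np.array([-8.0, -4.0, -2.0, 0.0, 2.0, 4.0, 8.0])
p_right  = np.array([0.05, 0.20, 0.40, 0.50, 0.65, 0.85, 0.95])

(mu, slope), _ = curve_fit(logistic, gaze_deg, p_right, p0=[0.0, 1.0])
print(f"point of subjective straight gaze = {mu:.2f} deg, slope = {slope:.2f}")
```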
... Indeed, the average perceptual strategy used by a group of observers may not necessarily predict the use of information in the most skilled individuals in a given task. For instance, previous results show that the mouth region (Blais, Roy, Fiset, Arguin, & Gosselin, 2012;Calvo, Fernández-Martín, & Nummenmaa, 2014) and tuning for horizontal information (Balas & Huynh, 2015;Duncan et al., 2017;Huynh & Balas, 2014) are particularly diagnostic for the task of facial expression categorization. However, recent evidence suggests that individual differences in utilization of horizontal information were predicted by the diagnosticity of the eye area, and not the mouth (Duncan et al., 2017). ...
... Notably, low-level differences between emotional faces are crucial to their emotional expressivity, making these two variables difficult to disentangle experimentally while maintaining the naturalistic nature of our faces. One possible extension might be to assess rivalry dynamics for mouth curvature by modifying our paradigm to include inverted faces, which preserve the differences in low-level visual features but impair emotion recognition (e.g., Huynh & Balas, 2014; McKelvie, 1995; Sato, Kochiyama, & Yoshikawa, 2011). Moreover, under emotional competition, we demonstrated multisensory perceptual enhancements. ...
Article
Full-text available
Binocular rivalry occurs when two percepts, each presented to a single eye, compete for perceptual dominance. Across two experiments, we investigated whether emotional music influenced perceptual dominance of an emotionally congruent face. In the first experiment, participants heard music (happy, threatening, none) while viewing a positive or negative emotional face pitted against a neutral face or emotional faces pitted against each other. Several key findings emerged. As expected, emotional faces significantly dominated over neutral faces, irrespective of music. For emotional face pairings, negative faces were predominantly reported as initial percepts. Interestingly, this negativity bias was transient and did not persist for the duration of the trial. Rather, positive faces dominated perception throughout trials. Moreover, emotional music affected rivalry dynamics such that congruent music drove attention toward congruent emotional percepts and incongruent music suppressed incongruent percepts. In a second experiment with the same group of participants, we investigated whether explicit attention modulated binocular rivalry of emotional faces. We demonstrated that attention affected both initial and sustained percepts by suppressing automatic emotional biases and stabilizing attention-congruent expressions. Together, our results demonstrate the importance of investigating multisensory expression perception in transient and sustained contexts, the role of emotion as a mediator of sensory integration across perceptual modalities, and the influence of attention on emotional competition in binocular rivalry.
... In fact, Smith et al. (2005) showed that the facial regions diagnostic of a given emotional expression differ across expressions and share very little overlap in their locations on the face image. Also, by examining only happy and sad expressions, Huynh and Balas (2014) found that the magnitude of the preference for horizontal orientation (compared to vertical) can be modulated by factors such as mouth openness. ...
... We selected stimuli from the NimStim Set of Facial Expressions, a standardized database of naturally posed photographs of professional actors (Tottenham et al., 2009). As shown by Huynh and Balas (2014), the openness of the mouth can influence the emotion-dependent reliance on horizontally orientated face information. To examine the effect of filter orientation without the possible interfering effect of mouth openness, only closed-mouth versions were used in the study. ...
... Consistent with previous work (e.g., Huynh & Balas, 2014; Goffaux & Greenwood, 2016; Duncan et al., 2017), we found that the spatial information that lies around the horizontal orientation captures primary changes of facial features across expressions and is the most important information for recognizing facial expressions for young adults with normal vision. In addition, we further showed that for all four facial expressions (angry, fearful, happy and sad), recognition performance was virtually identical for filter orientations of −30°, horizontal (0°) and 30°. ...
... Both affective facial expressions and nonverbal vocalizations are considered motivationally salient stimuli that engage an individual's attention in an automatic way (Hawk et al., 2009; Liu et al., 2012; Pell et al., 2015). When processing facial expressions, individuals mostly rely on horizontal information provided by the face to identify the underlying emotion (Huynh & Balas, 2014). As the emotion-space association is believed to emerge from an individual's past experiences (Lakoff & Johnson, 1980), it is plausible that affective facial information is more strongly associated with horizontal space representations (Kong, 2013). ...
Article
Full-text available
In verbal communication, affective information is commonly conveyed to others through spatial terms (e.g. in “I am feeling down”, negative affect is associated with a lower spatial location). This study used a target location discrimination task with neutral, positive and negative stimuli (words, facial expressions, and vocalizations) to test the automaticity of the emotion-space association, both in the vertical and horizontal spatial axes. The effects of stimulus type on emotion-space representations were also probed. A congruency effect (reflected in reaction times) was observed in the vertical axis: detection of upper targets preceded by positive stimuli was faster. This effect occurred for all stimulus types, indicating that the emotion-space association is not dependent on sensory modality and on the verbal content of affective stimuli.