Examples of Congruent (fearful face on fear body) and Incongruent (fearful face on angry body; fearful face on sad body) stimuli from Experiments 1a and 1b. All facial expressions used in these studies were obtained from the NimStim face set [44], and all models consented to publication of their photographs in scientific journals and on the web. Models for the body expressions gave written informed consent, as outlined in the PLOS consent form, to publication of their photographs.

Source publication
Article
Full-text available
The accuracy and speed with which emotional facial expressions are identified are influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether...

Similar publications

Article
Full-text available
Across two studies, we theorize and empirically investigate passion as a moderator of the negative affective consequences of fear of failure in early-stage entrepreneurship. We test our hypotheses in two field studies of naturally occurring affective events—namely, pitching competitions—and we complement self-reported measures of negative affect wi...
Article
Full-text available
Sufficient feedback is at the core of student-centred learning. Text-based feedback has certain limitations and can be seen by students as generic rather than personalised. Video feedback is a welcome alternative that offers personalised and individualised reflection on students' work and is greatly valued by students. Such a personalised connection between tut...

Citations

... Dynamic properties of facial expressions including timing, duration and intensity are shared across different channels, which enhances recognition by increasing the salience of emotion-relevant attributes 228,229. Other channels, such as voice pitch and body posture, convey emotion independently from the face and provide potentially complementary information for emotion recognition 224,227,230–232. Multimodal expressions might therefore enhance emotion recognition when individual channels are ambiguous in their content and/or performance is not at ceiling levels. ...
... Together, information from the body and face is integrated rapidly and automatically into a new gestalt, such that the same facial expression is perceived differently as a function of body posture even when the observer is instructed to attend only to the face 272–276. In general, facial expression recognition is facilitated by congruent, and disrupted by incongruent, body posture, and these effects are greater for dynamic than static displays 232,275–279. The quality of movement (such as its expansiveness, speed and jerkiness) further varies between emotions and can make emotionally neutral actions such as walking and sign language (even for non-signers) expressive 280,281. ...
Article
Full-text available
Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into how emotions are perceived in the face, they necessarily leave out any role of dynamic information. In this Review, we synthesize evidence from vision science, affective science and neuroscience to ask when, how and why dynamic information contributes to emotion recognition, beyond the information conveyed in static images. Dynamic displays offer distinctive temporal information such as the direction, quality and speed of movement, which recruit higher-level cognitive processes and support social and emotional inferences that enhance judgements of facial affect. The positive influence of dynamic information on emotion recognition is most evident in suboptimal conditions when observers are impaired and/or facial expressions are degraded or subtle. Dynamic displays further recruit early attentional and motivational resources in the perceiver, facilitating the prompt detection and prediction of others' emotional states, with benefits for social interaction. Finally, because emotions can be expressed in various modalities, we examine the multimodal integration of dynamic and static cues across different channels, and conclude with suggestions for future research.
... We hypothesized that pupil dilations would be modulated by the emotional content of the body expressions because dilations are driven by the LC-NE system, which modulates arousal levels (Hepach & Westermann, 2016; Sirois & Brisson, 2014; Tummeltshammer et al., 2019). Anger and fear expressions tend to be more emotionally arousing than happy and neutral expressions (He et al., 2018; Mondloch et al., 2013). ...
... We therefore predicted that arousing body expressions would increase pupil dilation. Fear and anger tend to be more arousing than happy body expressions (Hoehl et al., 2017; Kret et al., 2013; Mondloch et al., 2013). Consistent with this, we found that fear body expressions elicited larger dilations than neutral expressions around 1 sec after the onset of the body image and for about 600 msec. ...
Article
Full-text available
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds' fixation patterns discriminated fear from other emotion body expressions, but it was not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilations. To provide evidence that infants also process the emotional content of expressions, we analysed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness and neutral expressions, while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).
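As a concrete illustration of the kind of time-window comparison this abstract describes, the minimal sketch below baseline-corrects per-infant pupil traces and runs sample-wise paired t-tests between the fear and neutral conditions. The sampling rate, trace length, and synthetic data are assumptions for illustration, not the authors' pipeline (a real analysis would typically also correct for multiple comparisons).

```python
# Minimal sketch (illustrative only): sample-wise fear-vs-neutral comparison
# of baseline-corrected pupil traces. Sampling rate, trace length, and data
# are assumed, not taken from the study.
import numpy as np
from scipy import stats

FS = 50                # assumed sampling rate in Hz (20 ms per sample)
BASELINE_SAMPLES = 10  # assumed 200 ms pre-stimulus baseline

def baseline_correct(traces):
    """traces: (n_infants, n_samples) mean pupil diameter per condition."""
    baseline = traces[:, :BASELINE_SAMPLES].mean(axis=1, keepdims=True)
    return traces - baseline

def fear_gt_neutral_samples(fear, neutral, alpha=0.05):
    """Indices where fear > neutral by paired t-test (uncorrected)."""
    t, p = stats.ttest_rel(baseline_correct(fear),
                           baseline_correct(neutral), axis=0)
    return np.flatnonzero((p < alpha) & (t > 0))

rng = np.random.default_rng(0)
fear = rng.normal(0.05, 0.1, size=(48, 150))     # 48 infants, 3 s of samples
neutral = rng.normal(0.0, 0.1, size=(48, 150))
sig = fear_gt_neutral_samples(fear, neutral)
print("significant samples at (ms):", sig * 1000 // FS)
```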
... Several studies have already demonstrated the suitability, in neurotypical populations, of the Flanker task as a way to measure cognitive control in coordinating response selection to complex visual stimuli potentially of specific relevance to AN pathology, such as high vs. low calorie foods (Forestell et al., 2012; Meule et al., 2012) and (in)congruent body-related representations (as conveyed, e.g., by hands, faces, or entire bodies; Fusco et al., 2022a; Mondloch et al., 2013; Oldrati et al., 2020; Petrucci & Pecchinenda, 2017). However, no study has yet implemented a Flanker task with body-related stimuli that may elicit evidence of altered conflict processing in populations of eating disorder patients. ...
Article
Full-text available
Cognitive and affective impairments in processing body image have been observed in patients with Anorexia Nervosa (AN) and may induce the hypercontrolled and regulative behaviors observed in this disorder. Here, we aimed to probe the link between activation of body representations and cognitive control by investigating the ability to resolve body-related representational conflicts in women with restrictive AN and matched healthy controls (HC). Participants performed a modified version of the Flanker task in which underweight and overweight body images were presented as targets and distractors; a classic version of the task, with letters, was also administered as a control. The findings indicated that performance was better among the HC group in the task with bodies compared to the task with letters; however, no such facilitation was observed in AN patients, whose overall performance was poorer than that of the HC group in both tasks. In the task with body stimuli, performance among patients with AN was the worst on trials presenting underweight targets with overweight bodies as flankers. These results may reflect a dysfunctional association between the processing of body-related representations and cognitive control mechanisms that may aid clinicians in the development of optimal individualized treatments.
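For readers unfamiliar with how such tasks are scored, a flanker congruency effect is conventionally the difference in mean correct-trial reaction time between incongruent and congruent trials. The sketch below shows that computation on invented trials; the trial format is an assumption, not this paper's data structure.

```python
# Illustrative scoring of a flanker congruency effect (RT on incongruent
# minus congruent trials, correct trials only). Trials are invented.
from statistics import mean

# (condition, reaction_time_ms, correct) for one participant
trials = [
    ("congruent", 512, True), ("incongruent", 578, True),
    ("congruent", 498, True), ("incongruent", 601, False),
]

def congruency_effect(trials):
    """Mean correct-trial RT difference: incongruent - congruent (ms)."""
    rt = lambda cond: mean(t for c, t, ok in trials if c == cond and ok)
    return rt("incongruent") - rt("congruent")

print(f"flanker effect: {congruency_effect(trials):.0f} ms")  # 578 - 505 = 73
```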
... Body postures are considered static compared to gestures, which involve moving different body parts (e.g., arms, head, fingers, legs, and hands), but both contain relevant emotional information. For example, [68] reports that when an angry person expresses their anger, they tend to adopt a dominant body posture from which their emotion can be inferred. In contrast, the body of an anxious person appears weak and passive and shows avoidance and reclining tendencies. ...
Article
Full-text available
Trust is a fundamental element in human relationships, playing a crucial role in decision-making processes. Despite its significance, numerous dimensions of perceived trustworthiness remain unexplored and warrant further investigation. Previous literature has highlighted the influence of emotions and sentiments on how individuals perceive trustworthiness, with visual, vocal, and behavioral cues serving as essential markers. This preliminary study aims to expand the existing knowledge in the field by investigating trustworthiness traits manifested through facial and gesture expressions in emotional videos across diverse cultural contexts. To address this objective, an annotation platform was developed to collect annotation data using the benchmarked One-Minute-Gradual Emotion (OMG) audiovisual corpus, enabling the annotation of actors’ perceived trustworthiness levels alongside other inquiries related to emotional state, gesture, activeness, comfort, and speech integrity. The findings of this study demonstrate a positive correlation between higher levels of speaker activity, faster gesturing, and a relaxed demeanor with increased levels of trust gained from audiences. The proposal presented in this paper holds potential for future studies focused on trustworthiness annotation, facilitating the measurement of trust-related features. Moreover, this research serves as a critical step towards understanding the foundations of trustworthiness in the development of synthetic agents that require perceived trustworthiness, particularly in domains involving negotiations or emergency situations where rapid data collection plays a pivotal role in saving lives.
... Body postures also influence the perception of emotional facial expressions and thus have a powerful influence on how adults and children perceive facial displays of emotion [26,27,28]. Sylvain Guimond et al. (2012) assessed the correlation between posture and personality traits [29]. ...
Article
Full-text available
Background: During lectures, students usually sit in an awkward position for a prolonged period of time, which may cause postural instability. In a good posture, bilateral landmarks should be on the same level when viewed from the front or behind; therefore, both shoulders should also be on the same level. Any alteration in shoulder level in a healthy individual may lead to deformity of the spine or extremities. The objective of this study was to analyze the level of both shoulders in physical therapy students and to find its correlation with the students' perception of their shoulder balance. Method: An observational (cross-sectional) study was conducted on students of Doctor in Physical Therapy (DPT) from colleges of Physical Therapy, Karachi. 100 students were selected by simple random sampling. Data were collected by administering a questionnaire consisting of close-ended questions. Afterwards, the level of both shoulders was assessed using a scoliosis meter. Results: Responses showed that 79% of students assumed that both shoulders were at the same level. When shoulder level was assessed with the scoliosis meter, only 37% of students had absolutely level shoulders. Spearman's correlation coefficient (r = 0.046, p = 0.65) showed a weak, positive correlation between students' perception of their shoulder level and the assessed shoulder tilt. Conclusion: The students' perception of the level of both shoulders was not correlated with the actual levels of the shoulders. Hence, as they did not perceive their shoulders as uneven, they may not pay any attention to keeping themselves straight.
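The reported statistic is a rank correlation between two paired measurements. As a quick sketch of how such a value is computed, the snippet below runs Spearman's test on fabricated binary data (the study's raw data are not reproduced here):

```python
# Sketch: Spearman correlation between perceived and measured shoulder level.
# The binary vectors below are fabricated for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
perceived = rng.integers(0, 2, size=100)  # 1 = student believes shoulders level
measured = rng.integers(0, 2, size=100)   # 1 = level per scoliosis meter

rho, p = spearmanr(perceived, measured)
print(f"r = {rho:.3f}, p = {p:.2f}")  # independent data give r near 0, as reported
```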
... It provided a framework for the automatic identification of dynamic and fixed emotional body gestures that combined facial and speech gestures to improve recognition of a person's emotions. Paper [16] defines facial expressions by matching them with body positions. The work demonstrated that the congruency effects are more evident when the emotion displayed on the face is similar to the emotion displayed in the body. ...
Article
Full-text available
Given the current COVID-19 pandemic, medical research today focuses on epidemic diseases. Innovative technology is incorporated in most medical applications, emphasizing the automatic recognition of physical and emotional states. Most research is concerned with the automatic identification of symptoms displayed by patients through analyzing their body language. The development of technologies for recognizing and interpreting arm and leg gestures, facial features, and body postures is still in its early stage. More extensive research is needed using artificial intelligence (AI) techniques in disease detection. This paper presents a comprehensive survey of the research performed on body language processing. Upon defining and explaining the different types of body language, we justify the use of automatic recognition and its application in healthcare. We briefly describe the automatic recognition framework using AI to recognize various body language elements and discuss automatic gesture recognition approaches that help better identify the external symptoms of epidemic and pandemic diseases. From this study, we found that, because prior work has established that the body communicates through what is called body language, this language can be analyzed and understood by machine learning (ML). Since diseases also produce clear and distinct symptoms in the body, body language is affected and exhibits features specific to a particular disease. From this examination, we found that the features and body language changes of each disease can be characterized. Hence, ML can understand and detect diseases such as pandemic and epidemic diseases, among others.
... Meanwhile, these two expressions do not exist in isolation, but influence each other to a large extent (Civile & Obhi, 2016). The recognition of body expressions affects that of facial expressions, especially when the two are inconsistent (Mondloch et al., 2013; Van den Stock & de Gelder, 2014; Van den Stock et al., 2007). ...
Article
The bodily expressive action stimulus test (BEAST) was developed to provide a set of standardized emotional stimuli for experimental investigations of emotion and attention, and its consistency has been validated in adult populations abroad. However, the consistency of this test in the Chinese population was unclear. To this end, 42 images of each category of emotion (happiness, sadness, fear, and anger) were selected from the 254 images of the original stimulus set to further examine the consistency of the BEAST in a Chinese population. Thirty-one Chinese college students and 41 Chinese preschool children participated in this study. All of them were asked to complete an emotion recognition and judgment task. Results showed that adults had a high degree of consistency in rating these pictures, while the children's consistency was at a medium level. For adults, sadness was the easiest to recognize, followed by fear and anger, while happiness was the hardest to recognize. For children, fear was the easiest to recognize, anger and sadness were second, and happiness was also the hardest to recognize. At the same time, adults were more accurate in identifying happiness and sadness than children. Adults were more likely to confuse positive emotions with negative emotions: they tended to mistake sadness, fear, and anger for happiness. Children were more likely to identify sadness as fear and happiness, and also tended to recognize anger as fear. These results indicate that the recognition performance of BEAST images for Chinese and Western adults is roughly the same; however, within the same cultural context, the recognition performance of adults and children is very different, and in general the recognition accuracy of adults is higher than that of children.
... The perception of faces is also influenced by the presence of bodies or other contextual features, even when participants are deliberately attempting to ignore non-facial information (Aviezer et al., 2011, 2012b; Lecker et al., 2017; Mondloch et al., 2013; Nelson & Mondloch, 2017). Therefore, emotion perception from faces presented alone not only omits important expressive behavior of the body typically used in emotion perception, but may inaccurately characterize typical perception of facial expressions, as these are not perceived independent of bodies but are influenced by bodily expressions. ...
Article
Full-text available
Although emotion expressions are typically dynamic and include the whole person, much emotion recognition research uses static, posed facial expressions. In this study, we created a stimulus set of dynamic, naturalistic expressions drawn from professional tennis matches to determine whether movement would result in better recognition. We examined participants' judgments of static versus dynamic expressions when viewing an isolated face, an isolated body, or a whole person. Dynamic expressions increased recognition of whether the player had won or lost the point. In addition, recognition improved when the whole person was presented as opposed to only the face or body. However, overall recognition of wins and losses was poor, with recognition from isolated faces being poorer than chance for winning players. Our findings highlight the importance of incorporating dynamic stimuli and support previous research showing that recognition of naturalistic expressions differs greatly from the commonly used posed and isolated facial expressions of emotion. A wider range of naturalistic stimuli should be incorporated into future research to better understand how emotion recognition functions in daily life.
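The claim that isolated-face recognition was poorer than chance is the kind of statement typically backed by a binomial test against the 50% guessing rate for a two-alternative (win/loss) judgment. The sketch below shows such a test on invented counts; it is not this paper's analysis or data.

```python
# Sketch: testing whether an observed accuracy is below the 50% chance level
# for a binary win/loss judgment. Counts are invented for illustration.
from scipy.stats import binomtest

result = binomtest(k=38, n=100, p=0.5, alternative="less")
print(f"p = {result.pvalue:.3f}")  # p < .05 would indicate below-chance accuracy
```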
... These orthogonal dimensions (valence and arousal) define participants' emotional similarity space, wherein proximities reflect the similarity among stimuli [1]. This was replicated both in adults and children [2–4], using simple stimuli, such as words [5–7], objects [8,9], and faces [10–13], and with more complex stimuli, such as real-world photographs [14–16]. Based on this line of research, an increasing number of studies aim to decode the nature of emotions in the brain [17], particularly where and how valence and arousal are represented, by computing the correlation between behavioural and neural measures of similarity [18–21]. ...
... One of the most controversial findings in the emotional similarity literature relates to asymmetries in similarity judgements between different levels of valence (i.e., negative vs. positive). Specifically, in a series of experiments, Koch et al. (2016) demonstrated that 'good is more alike than bad', that is, that there is higher similarity among positive than among negative emotional stimuli [5,13]. By contrast, others report higher semantic relatedness among negative pictures than among randomly selected non-emotional pictures [22], and wider generalisation for conditioned than for unconditioned stimuli in healthy controls [23]. ...
Article
Full-text available
Is Mr. Hyde more similar to his alter ego Dr. Jekyll, because of their physical identity, or to Jack the Ripper, because both evoke fear and loathing? The relative weight of emotional and visual dimensions in similarity judgements is still unclear. We expected an asymmetric effect of these dimensions on similarity perception, such that faces expressing the same or a similar feeling would be judged as more similar than different emotional expressions of the same person. We selected 10 male faces with different expressions. Each model posed one neutral expression and one emotional expression (five disgust, five fear). We paired these expressions, resulting in 190 pairs, varying either in emotional expression, physical identity, or both. Twenty healthy participants rated the similarity of paired faces on a 7-point scale. We report a symmetric effect of emotional expression and identity on similarity judgements, suggesting that people may perceive Mr. Hyde to be just as similar to Dr. Jekyll (identity) as to Jack the Ripper (emotion). We also observed that emotional mismatch decreased perceived similarity, suggesting that emotions play a prominent role in similarity judgements. From an evolutionary perspective, poor discrimination between emotional stimuli might endanger the individual.
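The stimulus count follows directly from the stated design: 10 identities each posing two expressions yields 20 images, and rating every unordered pair of distinct images gives

```latex
% 10 identities x 2 expressions = 20 images; unordered pairs of distinct images
\binom{20}{2} = \frac{20 \times 19}{2} = 190 \text{ pairs}
```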
... Third, we tested for asymmetries in contextual influence (e.g., whether an angry posture/scene influenced categorization of a disgust face as much as a disgust posture/scene influenced categorization of an angry face). We did not make specific predictions given the exploratory nature of this question (though see Mondloch et al., 2013). Finally, we examined the effect of scene congruency (whether the scene matched the face, the scene matched the posture, or the scene was neutral) on face categorizations and posture categorizations. ...
... Some were novel to the current study (Anger-Joy vs. Joy-Anger; Joy-Sadness vs. Sadness-Joy) and some confirmed prior research (Sadness-Fear vs. Fear-Sadness; Aviezer et al., 2012b). Additionally, some asymmetries found in previous research were not replicated in the present study (e.g., Anger-Fear vs. Fear-Anger; Aviezer et al., 2012b; Anger-Sadness vs. Sadness-Anger; Mondloch et al., 2013; Disgust-Fear vs. Fear-Disgust; ...
Article
Full-text available
There is ongoing debate as to whether emotion perception is determined by facial expressions or context (i.e., non-facial cues). The present investigation examined the independent and interactive effects of six emotions (anger, disgust, fear, joy, sadness, neutral) conveyed by combinations of facial expressions, bodily postures, and background scenes in a fully crossed design. Participants viewed each face-posture-scene (FPS) combination for 5 s and were then asked to categorize the emotion depicted in the image. Four key findings emerged from the analyses: (1) For fully incongruent FPS combinations, participants categorized images using the face in 61% of instances and the posture and scene in 18% and 11% of instances, respectively; (2) postures (with neutral scenes) and scenes (with neutral postures) exerted differential influences on emotion categorizations when combined with incongruent facial expressions; (3) contextual asymmetries were observed for some incongruent face-posture pairings and their inverse (e.g., anger-fear vs. fear-anger), but not for face-scene pairings; (4) finally, scenes exhibited a boosting effect of posture when combined with a congruent posture and attenuated the effect of posture when combined with a congruent face. Overall, these findings highlight independent and interactional roles of posture and scene in emotion face perception. Theoretical implications for the study of emotions in context are discussed. Supplementary information: The online version contains supplementary material available at 10.1007/s42761-021-00061-x.