Figure 1
(a) Surprised and fearful expressions share similar morphological features primarily in the top half of the face. (b) An example of the stimuli used across all levels of valence and image filters. Interest areas, indicated by the white box around the eyes and mouth, are exemplified here on the intact stimuli.  

Source publication
Article
Full-text available
Surprised expressions are interpreted as negative by some people, and as positive by others. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological structures (e.g. widened eyes), but these similarities are primarily in the upper part of the face (eyes). We hypothesised, then, t...

Contexts in source publication

Context 1
... pictures were grey-scaled and were normalised in terms of resolution (75 dots per inch), contrast and luminance. Each intact image (broad spatial frequency (BSF)) was filtered using the procedure described in Neta and Whalen (2010) in order to create two versions of each face (see Figure 1(b)): one comprising primarily the HSF information (high-pass cut-off of 24 cycles per image) and one comprising primarily the LSF information (low-pass cut-off of six cycles per image), consistent with previous work (e.g. ...
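For readers who want to reproduce this kind of manipulation, the sketch below applies a hard FFT cut-off at the stated thresholds (low-pass at six cycles per image, high-pass at 24 cycles per image) to a grayscale image. The exact filter used by Neta and Whalen (2010) is not specified in this excerpt, so the filter shape and the `spatial_frequency_filter` helper are assumptions for illustration only.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff_cycles, mode="low"):
    """Keep spatial frequencies below (mode='low') or above (mode='high')
    `cutoff_cycles` cycles per image, using a hard FFT cut-off.
    `image` is a 2-D grayscale array; the filter shape is an assumption."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequency in cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequency in cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles if mode == "low" else radius >= cutoff_cycles
    spectrum = np.fft.fft2(image)
    return np.fft.ifft2(spectrum * mask).real

# Example: create the LSF (<= 6 cycles/image) and HSF (>= 24 cycles/image)
# versions of a grey-scaled face image, as described in the excerpt above.
# face = ...  # 2-D numpy array of the normalised face image
# lsf_face = spatial_frequency_filter(face, 6, mode="low")
# hsf_face = spatial_frequency_filter(face, 24, mode="high")
```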
Context 2
... our specific interest in gaze behaviour towards the mouth and the eyes, these regions were identified as interest areas (Figure 1(b)). For each interest area, we examined two commonly studied dependent measures that emphasise early eye movements: first run dwell time (FRDT; the amount of time spent in an interest area the first time it was fixated) and first fixation time (FFT; relative to the onset of the image, how quickly an interest area was fixated). ...
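A minimal sketch of how these two measures could be computed from a chronologically ordered list of fixations is shown below. The `Fixation` record and the rectangular interest-area test are illustrative assumptions; in practice these measures are typically exported directly by the eye-tracking software.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Fixation:
    start_ms: float      # onset relative to image onset
    duration_ms: float
    x: float
    y: float

def in_area(fix: Fixation, area: Tuple[float, float, float, float]) -> bool:
    """`area` is (left, top, right, bottom) in the same pixel coordinates as the fixations."""
    left, top, right, bottom = area
    return left <= fix.x <= right and top <= fix.y <= bottom

def first_run_dwell_time(fixations: List[Fixation],
                         area: Tuple[float, float, float, float]) -> float:
    """FRDT: summed duration of the first unbroken run of fixations inside the area."""
    dwell, in_run = 0.0, False
    for fix in fixations:            # fixations assumed in chronological order
        if in_area(fix, area):
            dwell += fix.duration_ms
            in_run = True
        elif in_run:                 # the first run has ended
            break
    return dwell

def first_fixation_time(fixations: List[Fixation],
                        area: Tuple[float, float, float, float]) -> Optional[float]:
    """FFT: latency from image onset to the first fixation that lands in the area."""
    for fix in fixations:
        if in_area(fix, area):
            return fix.start_ms
    return None                      # area was never fixated on this trial

# Hypothetical usage with an eyes interest area given as (left, top, right, bottom):
# eyes_area = (250.0, 120.0, 520.0, 230.0)
# frdt = first_run_dwell_time(trial_fixations, eyes_area)
# fft = first_fixation_time(trial_fixations, eyes_area)
```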
Context 3
... was no significant difference in FRDT on the eyes (t(50) = 0.47, p > .6, d = 0.07; Supplemental Figure 1). Also, there was no significant difference between trials on which fear was rated as positive and negative for both FRDT on the mouth and on the eyes (p's > .5). ...

Similar publications

Article
Full-text available
The present study sought to determine whether contextual information available when viewing social interactions from third-person perspectives, may influence observers' perception of the interactants' facial emotion. Observers judged whether the expression of a target face was happy or fearful, in the presence of a happy, aggressive or neutral inte...
Preprint
Full-text available
The amount of fear of a potential threat is oftentimes proportional to the overlap in shared features with a known threat. An adaptive threat learning system should therefore extract the most relevant feature of a known threat to help successfully detect and appropriately respond to potential threats in the future. But what if the most salient feat...
Article
Full-text available
Infants demonstrate an attentional bias toward fearful facial expressions that emerges in the first year of life. The current study investigated whether this attentional bias is influenced by experience with particular face types. Six-month-old (n = 33) and 9-month-old (n = 31) Caucasian infants' spontaneous preference for fearful facial expression...
Preprint
Full-text available
Deep learning (DL) models are widely used to provide a more convenient and smarter life. However, biased algorithms will negatively influence us. For instance, groups targeted by biased algorithms will feel unfairly treated and even fearful of negative consequences of these biases. This work targets biased generative models' behaviors, identifying...

Citations

... (2) the total fixation duration; (3) the total fixation count; and (4) the first fixation time (i.e., the time from image onset to the first fixation landing in each AOI), which reflects how quickly an AOI is fixated (Huang et al., 2019; Neta et al., 2017) and is used as a measure of the participants' attentional priority (Thompson et al., 2019). The shorter the first fixation time of an AOI, the stronger the attentional priority of that AOI. ...
Article
Full-text available
The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study explores this issue in narrative contexts with and without perspective shift, an important and common linguistic factor in narratives. A sentence-picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift led the participants to allocate their first fixations evenly across different elements in the scene following the new perspective; (2) the internal–external perspective shift increased the participants’ total fixation count when they read the sentence with the perspective shift; (3) the scene detail depicted in the picture did not influence narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of the situation model and increase readers’ cognitive load during reading. Moreover, scene detail was not constructed by readers in natural narrative reading.
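The citation excerpt above lists four area-of-interest (AOI) measures and treats a shorter first fixation time as stronger attentional priority. The self-contained sketch below shows how total fixation duration, fixation count and first fixation time could be aggregated per AOI and how AOIs could then be ranked by priority; the tuple representation and function names are assumptions for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

# Each fixation is (aoi_label, start_ms, duration_ms); aoi_label is None when the
# fixation falls outside every AOI. Names are illustrative, not from the cited study.
Fixation = Tuple[Optional[str], float, float]

def aoi_summary(fixations: List[Fixation]) -> Dict[str, Dict[str, float]]:
    """Total fixation duration, total fixation count and first fixation time per AOI."""
    summary: Dict[str, Dict[str, float]] = defaultdict(
        lambda: {"total_duration_ms": 0.0, "fixation_count": 0,
                 "first_fixation_ms": float("inf")})
    for aoi, start_ms, duration_ms in fixations:
        if aoi is None:
            continue
        stats = summary[aoi]
        stats["total_duration_ms"] += duration_ms
        stats["fixation_count"] += 1
        stats["first_fixation_ms"] = min(stats["first_fixation_ms"], start_ms)
    return dict(summary)

def attentional_priority_order(fixations: List[Fixation]) -> List[str]:
    """Rank AOIs from highest to lowest priority: shorter first fixation time = higher priority."""
    summary = aoi_summary(fixations)
    return sorted(summary, key=lambda aoi: summary[aoi]["first_fixation_ms"])

# Example: the mouth was fixated first, so it outranks the eyes here.
trial = [("mouth", 180.0, 220.0), (None, 410.0, 90.0), ("eyes", 520.0, 300.0)]
print(attentional_priority_order(trial))   # ['mouth', 'eyes']
```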
... Positive facial expressions are associated with more zygomatic facial muscle activity than negative facial expressions, and negative facial expressions (such as sad and angry expressions) are associated with higher corrugator muscle activity [17][18][19]. A smiling mouth (AU12) has been found to enhance perceived pleasantness and contribute more to happiness than positive eyes [20][21][22][23]. The activation of brow lowering (AU4) is more often associated with negative expressions [15,24,25]. ...
Article
Full-text available
Recent research on intense real-life faces has shown that although there was an objective difference in facial activities between intense winning faces and losing faces, viewers failed to differentiate the valence of such expressions. In the present study, we explored whether participants could perceive the difference between intense positive facial expressions and intense negative facial expressions in a forced-choice response task using eye-tracking techniques. Behavioral results showed that the recognition accuracy rate for intense facial expressions was significantly above the chance level. For eye-movement patterns, the results indicated that participants gazed more and longer toward the upper facial region (eyes) than the lower region (mouth) for intense losing faces. However, the gaze patterns were reversed for intense winning faces. The eye movement pattern for successful differentiation trials did not differ from failed differentiation trials. These findings provided preliminary evidence that viewers can utilize intense facial expression information and perceive the difference between intense winning faces and intense losing faces produced by tennis players in a forced-choice response task.
... Evidence showing misinterpretation of emotional expressions due to face masks lies in agreement with previous research into the effects of occlusion of the lower part of the face in several emotional expressions (e.g., [18][19][20][21]). This especially holds for the identification of happy expressions, for which people rely more on the mouth-region; in contrast, for identifying angry expressions, the eye-region seems to be the most prominent diagnostic cue [21–26]. ...
... Here, we found that the mask condition had a decreasing effect on the strength of the relationship between stimulus emotion and drift rate for masked faces (v-slope_mask), suggesting that less information was available during the decision process. These effects are in line with studies showing that covering the mouth decreases the amount of available information to correctly recognize and identify an emotional expression [7][8][9][10][11][12][13][14][15][16][18][19][20][21]. ...
Article
Full-text available
During the COVID-19 pandemic, the use of face masks has become a daily routine. Studies have shown that face masks increase the ambiguity of facial expressions which not only affects (the development of) emotion recognition, but also interferes with social interaction and judgement. To disambiguate facial expressions, we rely on perceptual (stimulus-driven) as well as preconceptual (top-down) processes. However, it is unknown which of these two mechanisms accounts for the misinterpretation of masked expressions. To investigate this, we asked participants (N = 136) to decide whether ambiguous (morphed) facial expressions, with or without a mask, were perceived as friendly or unfriendly. To test for the independent effects of perceptual and preconceptual biases we fitted a drift–diffusion model (DDM) to the behavioral data of each participant. Results show that face masks induce a clear loss of information leading to a slight perceptual bias towards friendly choices, but also a clear preconceptual bias towards unfriendly choices for masked faces. These results suggest that, although face masks can increase the perceptual friendliness of faces, people have the prior preconception to interpret masked faces as unfriendly.
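The drift-diffusion analysis described in this abstract separates a stimulus-driven effect, carried by the drift rate, from a prior bias, carried by the starting point. The forward simulation below is a minimal sketch under standard DDM assumptions; the parameter values are made up to illustrate how drift rate and starting point independently shift choice proportions and are not the authors' fitted estimates.

```python
import numpy as np

def simulate_ddm_trial(drift, boundary, start_frac, ndt,
                       dt=0.001, noise_sd=1.0, max_time=5.0, rng=None):
    """Simulate one drift-diffusion trial with decision boundaries at 0 and `boundary`.
    drift      : mean evidence accumulated per second toward the upper boundary
    start_frac : starting point as a fraction of `boundary` (0.5 = unbiased;
                 values below 0.5 model a prior bias toward the lower response)
    ndt        : non-decision time in seconds (encoding + response execution)
    Returns (choice, rt) with choice 1 = upper boundary, 0 = lower boundary."""
    rng = rng or np.random.default_rng()
    x, t = start_frac * boundary, 0.0
    while 0.0 < x < boundary and t < max_time:   # max_time guard is a simplification
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t + ndt

# Made-up parameters: lowering the drift rate mimics reduced stimulus information,
# while shifting the starting point below 0.5 mimics a prior ("preconceptual") bias
# toward the lower response. These values are illustrative only.
rng = np.random.default_rng(0)
baseline = [simulate_ddm_trial(1.5, 1.0, 0.50, 0.3, rng=rng) for _ in range(2000)]
biased   = [simulate_ddm_trial(0.8, 1.0, 0.45, 0.3, rng=rng) for _ in range(2000)]
print("P(upper) baseline:", np.mean([c for c, _ in baseline]))
print("P(upper) biased:  ", np.mean([c for c, _ in biased]))
```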
... Interestingly, in perception both visually (Boucher & Carlson, 1980; Ekman, 1973) and auditorily (Van Bezooijen, Otto & Heenan, 1983), fear is confusable with surprise but not vice versa. This then seems to be related to the fact that surprise can be either positive or negative (Vrticka, 2014), and it is the negative surprise that is similar to fear (Neta et al., 2017), while positive surprise is actually similar to joy (Van Bezooijen, Otto & Heenan, 1983). Note that if the features shared with joy and fear are removed from the surprise facial expression, only a startle reaction (Susskind et al., 2008) is left, which is brief and likely mainly shown only in the facial expression. ...
Chapter
Full-text available
The phonetics of emotion is about the acoustic-phonetic properties of the emotional facets of human vocalization. Conventionally, these properties are studied as correlates of a person’s internal states arising from reactions to the environment, where the internal states are defined by influential psychological theories of emotion. A more recent perspective, however, views emotion as an evolved mechanism for motivating actions to proactively interact with other individuals, including, in particular, the production of emotional expressions. From this perspective, the acoustic properties of emotional vocalization are devised to actively influence the listeners in ways that may benefit the vocalizer. Interestingly, the meanings of these acoustic properties could be interpreted with knowledge of speech acoustics accumulated over the years. A key encoding mechanism is body-size projection, whereby vocal properties associated with emotions like anger make the vocalizer sound large to dominate the listener, while properties associated with emotions like joy make the vocalizer sound small to appease the listener. Body-size projection is encoded through three acoustic dimensions—pitch, voice quality and formant dispersion. Furthermore, body-size projection is likely accompanied by additional iconic encoding mechanisms also aimed at influencing the listener in specific ways. The acoustic properties associated with these mechanisms are not yet fully clear. Further exploration of the body-size projection principle and identification of additional mechanisms may drive much of the research activity in the coming decades.
... If the categorical process is involved in perceiving ambiguous facial expressions, then different expression combinations may affect the behavioural measurements associated with expression categorization, such as categorization accuracy, reaction time for expression categorization, and face-viewing gaze distribution. For instance, previous eye-tracking studies on categorizing prototypical facial expressions have suggested preferential gaze allocation at the expression-representative local facial regions (e.g., eyes in angry faces; Eisenbarth & Alpers, 2011; Guo, 2012; Schurgin et al., 2014), and such gaze allocation could be tightly coupled with expression categorization performance (Adolphs et al., 2005) and bias (Green & Guo, 2018; Neta et al., 2017). ...
Article
Full-text available
In contrast to prototypical facial expressions, we show less perceptual tolerance in perceiving vague expressions by demonstrating an interpretation bias, such as more frequent perception of anger or happiness when categorizing ambiguous expressions of angry and happy faces that are morphed in different proportions and displayed under high- or low-quality conditions. However, it remains unclear whether this interpretation bias is specific to emotion categories or reflects a general negativity versus positivity bias and whether the degree of this bias is affected by the valence or category of two morphed expressions. These questions were examined in two eye-tracking experiments by systematically manipulating expression ambiguity and image quality in fear- and sad-happiness faces (Experiment 1) and by directly comparing anger-, fear-, sadness-, and disgust-happiness expressions (Experiment 2). We found that increasing expression ambiguity and degrading image quality induced a general negativity versus positivity bias in expression categorization. The degree of negativity bias, the associated reaction time and face-viewing gaze allocation were further manipulated by different expression combinations. Although we show a viewing condition-dependent bias in interpreting vague facial expressions that display valence-contradicting expressive cues, the perception of these ambiguous expressions appears to be guided by a categorical process similar to that involved in perceiving prototypical expressions.
... positively (Neta et al., 2017). Indeed, fearful and surprised expressions share morphological similarities in the upper half of the face (e.g., widening of the eyes), as evidenced by overlapping facial AUs (i.e., inner/outer brow and upper eyelid raising, AUs 1, 2, and 5; Du & Martinez, 2015), and action in the lower half of the face is helpful for distinguishing between the expressions (Farah et al., 1998). ...
... Unexpectedly, face masks did not meaningfully impact valence judgments of surprised expressions. Given that faster fixations on the mouth are associated with more positive judgments of surprise (Neta et al., 2017; Neta & Dodd, 2018), we expected face masks would lead to more negative judgments of surprise. ...
Article
Face masks that prevent disease transmission obscure facial expressions, impairing nonverbal communication. We assessed the impact of lower (masks) and upper (sunglasses) face coverings on emotional valence judgments of clearly valenced (fearful, happy) and ambiguously valenced (surprised) expressions, the latter of which have both positive and negative meaning. Masks, but not sunglasses, impaired judgments of clearly valenced expressions compared to faces without coverings. Drift diffusion models revealed that lower, but not upper, face coverings slowed evidence accumulation and affected differences in non-judgment processes (i.e., stimulus encoding, response execution time) for all expressions. Our results confirm mask-interference effects in nonverbal communication. The findings have implications for nonverbal and intergroup communication, and we propose guidance for implementing strategies to overcome mask-related interference.
... This speculation is partially substantiated by recent research showing that individuals who watched videos under the question-embedded condition demonstrated a shorter time to first fixation on content areas corresponding to embedded questions (Yang et al., 2021). Time to first fixation signifies the amount of time it takes someone to first pay attention to an area of interest on the screen and provides information about visual search speed and which aspects of a scene are prioritised (Neta et al., 2017). Other research has revealed that interpolated testing in video lectures can reduce mind wandering and increase task-relevant behaviour such as note-taking (Schacter & Szpunar, 2015). ...
Article
Full-text available
This study investigates the impact of embedded questions in pre-class instructional videos on learner perceptions (cognitive load, emotional engagement, satisfaction, judgement of learning), video engagement (total views, total viewing time), and learning performance (retention, transfer). The research occurred in a real flipped classroom environment. We designed a quasi-experiment in which 86 university students from two natural classes watched pre-class instructional videos featuring procedural knowledge with or without interpolated true or false questions. Students were asked to practice the operation steps introduced in the videos. While they practiced operations, they could either pause the videos or let the videos continue playing. Face-to-face contact time was utilised to consolidate and extend previewed content with student-centred, instructor-facilitated problem-solving activities. Results revealed no discernible effects from embedded questions in pre-class videos on cognitive load, emotional engagement, satisfaction, judgement of learning, total views, knowledge retention, or knowledge transfer. We speculate that the various in-class practice activities and frequent access to procedural knowledge videos offset the cognitive benefits derived from question-embedded videos. Learners who viewed question-embedded videos presented significantly reduced total viewing time, likely because the embedded questions scaffolded them in sustaining attention and efficiently pinpointing the exact information needed. Future research should identify boundary conditions for embedding questions in instructional videos (e.g. learning mode, type of knowledge) rather than indiscriminately applying this design strategy.
... Time to first fixation represents the amount of time taken to first pay attention to specific AOIs in a video scene. It can provide information about learners' visual search speed or how certain aspects of the scene are prioritised (Neta et al., 2017). Fixation dispersion represents how fixations are spread across a scene, and can be used to signify an internal deviation in the content of thoughts from ongoing tasks (Faber et al., 2020). ...
Article
Full-text available
Eye tracking technology is increasingly used to understand individuals’ non-conscious, moment-to-moment processes during video-based learning. This review evaluated 44 eye tracking studies on video-based learning conducted between 2010 and 2021. Specifically, the review sought to uncover how the utilisation of eye tracking technology has advanced understandings of the mechanisms underlying effective video-based learning and what type of caution should be exercised when interpreting the findings of these studies. Four important findings emerged from the analysis: (1) not all the studies explained the mechanisms underlying effective video-based learning through employing eye tracking technology, and few studies disentangled the complex relationship between eye tracking metrics and cognitive activities these metrics represent; (2) emotional factors potentially serve to explain the processes that facilitate video-based learning, but few studies captured learners’ emotional processes or evaluated their affective gains; (3) ecological validity should be improved for eye tracking research on video-based learning through methods such as using eye tracking systems that have high tolerance for head movements, allowing learners to take control of the pacing of the video, and communicating the learning objectives of the video to participants; and (4) boundary conditions, including personal (e.g. age, prior knowledge) and environmental factors (e.g. the topic of videos, type of knowledge), must be considered when interpreting research findings. The findings of this review inspire a number of propositions for designing and interpreting eye tracking research on video-based learning.
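Two of the eye-tracking measures mentioned in the excerpt above, time to first fixation and fixation dispersion, are simple to compute once fixations are available. Time to first fixation follows the same logic as the earlier sketches; the function below shows one common operationalization of dispersion (root-mean-square distance from the fixation centroid), which is an assumption and may differ from the definition used by Faber et al. (2020).

```python
import numpy as np

def fixation_dispersion(xs, ys):
    """Fixation dispersion as the RMS distance of fixation points from their centroid.
    This is one common operationalization; specific studies may define dispersion
    differently (e.g., bounding-box size or mean pairwise distance)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    cx, cy = xs.mean(), ys.mean()
    return float(np.sqrt(np.mean((xs - cx) ** 2 + (ys - cy) ** 2)))

# Example: fixations clustered on one region of a video frame yield a smaller
# dispersion value than fixations scattered across the whole frame.
print(fixation_dispersion([100, 105, 98, 102], [200, 198, 205, 201]))   # small
print(fixation_dispersion([50, 400, 220, 610], [80, 300, 450, 120]))    # large
```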
... One type of oft-used emotional stimulus is faces, which constitute a unique stimulus class that is highly familiar and can convey biologically relevant social and affective information (Haxby et al., 2000; Neta et al., 2017; Neta and Whalen, 2011). Emotional faces can elicit strong neural responses in visual regions, including the fusiform gyrus and superior temporal sulcus, as well as in the amygdala (Haxby et al., 2000; Todorov et al., 2011; Vuilleumier et al., 2001). ...
Article
Cognitive control allows individuals to flexibly and efficiently perform tasks by attending to relevant stimuli while inhibiting distraction from irrelevant stimuli. The antisaccade task assesses cognitive control by requiring participants to inhibit a prepotent glance towards a peripheral stimulus and generate an eye movement to the mirror image location. This task can be administered with various contextual manipulations to investigate how factors such as trial timing or emotional content interact with cognitive control. In the current study, 26 healthy adults completed a mixed antisaccade and prosaccade fMRI task that included task irrelevant emotional faces and gap/overlap timing. The results showed typical antisaccade and gap behavioral effects with greater BOLD activation in frontal and parietal brain regions for antisaccade and overlap trials. Conversely, there were no differences in behavior based on the emotion of the task irrelevant face, but trials with neutral faces had greater activation in widespread visual regions than trials with angry faces, particularly for prosaccade and overlap trials. Together, these effects suggest that a high level of cognitive control and inhibition was required throughout the task, minimizing the impact of the face presentation on saccade behavior, but leading to increased attention to the neutral faces on overlap prosaccade trials when both the task cue (look towards) and emotion stimulus (neutral, non-threatening) facilitated disinhibition of visual processing.
... Evidence showing misinterpretation of emotional expressions due to face masks lies in agreement with previous research into the effects of occlusion of the lower part of the face in several emotional expressions (e.g., [18][19][20][21]). This especially holds for the identification of happy expressions, for which people rely more on the mouth-region; in contrast, for identifying angry expressions, the eye-region seems to be the most prominent diagnostic cue [21][22][23][24][25][26]. ...
Preprint
Full-text available
During the COVID-19 pandemic, the use of face masks has become a daily routine. Studies have shown that face masks increase the ambiguity of facial expressions which not only affects (the development of) emotion recognition, but also interferes with social interaction and judgement. To disambiguate facial expressions, we rely on perceptual (stimulus-driven) as well as judgmental (preconception) processes. However, it is unknown which of these two mechanisms accounts for the misinterpretation of masked expressions. To investigate this, we asked participants ( N = 136) to decide whether ambiguous (morphed) facial expressions, with or without a mask, were perceived as friendly or unfriendly. To test for the independent effects of perceptual and judgmental biases we fitted a drift-diffusion model (DDM) to the behavioral data of each participant. Results show that face masks induce a clear loss of information leading to a slight perceptual bias towards friendly choices, but also a clear judgmental bias towards unfriendly choices for masked faces. These results suggest that, although face masks can increase the perceptual friendliness of faces, people have the prior preconception to interpret masked faces as unfriendly.