Chapter

Using Facial EMG to Track Emotion During Language Comprehension: Past, Present, and Future


Abstract

Beyond recognizing words, parsing sentences, building situation models, and other cognitive accomplishments, language comprehension always involves some degree of emotion too, with or without awareness. Language excites, bores, or otherwise moves us, and studying how it does so is crucial. This chapter examines the potential of facial electromyography (EMG) to study language-elicited emotion. After discussing the limitations of self-report measures, we examine various other tools to tap into emotion, and then zoom in on the electrophysiological recording of facial muscle activity. Surveying psycholinguistics, communication science, and other fields, we provide an exhaustive qualitative review of the relevant facial EMG research to date, exploring 55 affective comprehension experiments with single words, phrases, sentences, or larger pieces of discourse. We discuss the outcomes of this research, and evaluate the various practices, biases, and omissions in the field. We also present the fALC model, a new conceptual model that lays out the various potential sources of facial EMG activity during language comprehension. Our review suggests that facial EMG recording is a powerful tool for exploring the conscious as well as unconscious aspects of affective language comprehension. However, we also think it is time to take on a bit more complexity in this research field, by for example considering the possibility that multiple active generators can simultaneously contribute to an emotional facial expression, by studying how the communicator’s stance and social intention can give rise to emotion, and by studying facial expressions not just as indexes of inner states, but also as social tools that enrich everyday verbal interactions.

Key words: Review; EMG; Facial electromyography; Psycholinguistics; Communication science; Emotion; Psychophysiology; Simulation; Mimicry; Evaluation


Article
Full-text available
Opinion formation and information processing are affected by unconscious affective responses to stimuli—particularly in politics. Yet we still know relatively little about such affective responses and how to measure them. In this study, we focus on emotional valence and examine facial electromyography (fEMG) measures. We demonstrate the validity of these measures, discuss ways to make measurement and analysis more robust, and consider validity trade-offs in experimental design. In doing so, we hope to support scholars in designing studies that will advance scholarship on political attitudes and behavior by incorporating unconscious affective responses to political stimuli—responses that have too often been neglected by political scientists.
Chapter
Full-text available
The potential significance of facial EMG responses as an index of language processing is discussed. First, functional and metabolic differences between different facial muscles are presented, as well as the consequences which such differences may have for activating the brain through afferent feedback signals from the facial skin. Different facial muscles, particularly those in the upper and lower parts of the face, are controlled differently by the brain, reflecting functional differences between these muscle groups. Next, it is described how facial EMG signals can be recorded, how surface electrodes should be placed, and how EMG responses should be analyzed and standardized. Following these methodological issues, the potential significance of different categories of facial EMG responses during behavioral studies, including language processing, is outlined, in particular emotional responses, responses indicating attention to external stimuli, and responses related to mental effort demanded by cognitive information processing tasks. Finally, the validity of facial muscle activity during linguistic processes is specifically addressed. It is also suggested how the validity of EMG responses as an index of subvocal speech, an important aspect of language processing, may be improved.

Key words: Facial muscles; Measuring EMG; Emotion; Attention; Information processing; Mental effort; Linguistic processes; Subvocal speech
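The analysis and standardization steps mentioned in this chapter are commonly implemented as full-wave rectification, baseline correction against a pre-stimulus window, and within-participant z-scoring. As a rough illustrative sketch only (the chapter prescribes no particular code, and the function and parameter names here are hypothetical):

```python
import numpy as np

def standardize_emg(raw_uv, fs, baseline_s=1.0):
    """Minimal facial-EMG preprocessing sketch:
    full-wave rectify the raw signal, subtract the mean of a
    pre-stimulus baseline window, then z-score within participant
    (one common standardization step)."""
    rectified = np.abs(raw_uv)               # full-wave rectification
    n_base = int(baseline_s * fs)            # samples in the baseline window
    baseline = rectified[:n_base].mean()
    corrected = rectified[n_base:] - baseline  # change from baseline
    z = (corrected - corrected.mean()) / corrected.std()  # within-subject z-scores
    return z

# Example: a simulated 2-second corrugator trace sampled at 1000 Hz
fs = 1000
signal = np.random.default_rng(0).normal(0, 5, 2 * fs)
z = standardize_emg(signal, fs)
```

Real pipelines typically also band-pass filter and smooth the rectified signal before standardizing; those steps are omitted here for brevity.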
Article
Full-text available
Many of our everyday emotional responses are triggered by language, and a full understanding of how people use language therefore also requires an analysis of how words elicit emotion as they are heard or read. We report a facial electromyography experiment in which we recorded corrugator supercilii, or “frowning muscle”, activity to assess how readers processed emotion-describing language in moral and minimal in/outgroup contexts. Participants read sentence-initial phrases like “Mark is angry” or “Mark is happy” after descriptions that defined the character at hand as a good person, a bad person, a member of a minimal ingroup, or a member of a minimal outgroup (realizing the latter two by classifying participants as personality “type P” and having them read about characters of “type P” or “type O”). As in our earlier work, moral group status of the character clearly modulated how readers responded to descriptions of character emotions, with more frowning to “Mark is angry” than to “Mark is happy” when the character had previously been described as morally good, but not when the character had been described as morally bad. Minimal group status, however, did not matter to how the critical phrases were processed, with more frowning to “Mark is angry” than to “Mark is happy” across the board. Our morality-based findings are compatible with a model in which readers use their emotion systems to simultaneously simulate a character’s emotion and evaluate that emotion against their own social standards. The minimal-group result does not contradict this model, but also does not provide new evidence for it.
Article
Full-text available
Spontaneous emotionally congruent facial responses (ECFR) to others’ emotional expressions can occur by simply observing others’ faces (i.e., smiling) or by reading emotion-related words (i.e., to smile). The goal of the present study was to examine whether language describing political leaders’ emotions affects voters by inducing emotionally congruent facial reactions as a function of readers’ and politicians’ shared political orientation. Participants read sentences describing politicians’ emotional expressions, while their facial muscle activation was measured by means of electromyography (EMG). Results showed that reading sentences describing left- and right-wing politicians “smiling” or “frowning” elicits ECFR for ingroup but not outgroup members. Remarkably, ECFR were sensitive to attitudes toward individual leaders beyond the ingroup vs. outgroup political divide. By integrating behavioral and physiological methods we were able to consistently tap into a ‘favored political leader effect’, capturing political attitudes towards an individual politician at a given moment in time, at multiple levels (explicit responses and automatic ECFR), and across political party membership lines. Our findings highlight the role of verbal behavior of politicians in affecting voters’ facial expressions, with important implications for social judgment and behavioral outcomes.
Article
Full-text available
Based on the analysis of 190 studies (18,573 participants), we estimate that the average silent reading rate for adults in English is 238 words per minute (wpm) for non-fiction and 260 wpm for fiction. The difference can be predicted by taking into account the length of the words, with longer words in non-fiction than in fiction. The estimates are lower than the numbers often cited in scientific and popular writings. The reasons for the overestimates are reviewed. The average oral reading rate (based on 77 studies and 5,965 participants) is 183 wpm. Reading rates are lower for children, old adults, and readers with English as second language. The reading rates are in line with maximum listening speed and do not require the assumption of reading-specific language processing. Within each group/task there are reliable individual differences, which are not yet fully understood. For silent reading of English non-fiction most adults fall in the range of 175 to 300 wpm; for fiction the range is 200 to 320 wpm. Reading rates in other languages can be predicted reasonably well by taking into account the number of words these languages require to convey the same message as in English.
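The cross-language prediction mentioned above amounts to a simple proportional adjustment. As a hedged sketch (the scaling rule and the 1.1 ratio below are illustrative assumptions, not the article's fitted model; only the 238 wpm English non-fiction estimate comes from the text):

```python
def predicted_wpm(english_wpm: float, word_ratio: float) -> float:
    """If a language needs `word_ratio` times as many words as English
    to convey the same message, its word-based reading rate scales by
    roughly the same factor (a proportional-adjustment assumption)."""
    return english_wpm * word_ratio

# Illustrative only: a hypothetical language using 1.1x as many words
# as English, starting from the 238 wpm English non-fiction estimate.
rate = predicted_wpm(238, 1.1)
```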
Article
Full-text available
It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. 
We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Article
Full-text available
Standard neurocognitive models of language processing have tended to obviate the need for incorporating emotion processes, while affective neuroscience theories have typically been concerned with the way in which people communicate their emotions, and have often simply not addressed linguistic issues. Here, we summarise evidence from temporal and spatial brain imaging studies that have investigated emotion effects on lexical, semantic and morphosyntactic aspects of language during the comprehension of single words and sentences. The evidence reviewed suggests that emotion is represented in the brain as a set of semantic features in a distributed sensory, motor, language and affective network. Also, emotion interacts with a number of lexical, semantic and syntactic features in different brain regions and timings. This is in line with the proposals of interactive neurocognitive models of language processing, which assume the interplay between different representational levels during on-line language comprehension.
Article
Full-text available
Our affective state is influenced by daily events and our interactions with other people, which, in turn, can affect the way we communicate. In two studies, we investigated the influence of experiencing success or failure in a foosball (table soccer) game on participants’ affective state and how this in turn influenced the way they report on the game itself. Winning or losing a match can further influence how they view their own team (compared to the opponent), which may also impact how they report on the match. In Study 1, we explored this by having participants play foosball matches in two dyads. They subsequently reported their affective state and team cohesiveness, and wrote two match reports, one from their own and one from their opponent’s perspective. Indeed, while the game generally improved participants’ moods, especially winning made them happier and more excited and losing made them more dejected, both in questionnaires and in the reports, which were analyzed with a word count tool. Study 2 experimentally investigated the effect of affective state on focus and distancing behavior. After the match, participants chose between preselected sentences (from Study 1) that differed in focus (mentioning the own vs. other team) or distancing (using we vs. the team name). Results show an effect for focus: winning participants preferred sentences that described their own performance positively while losing participants chose sentences that praised their opponent over negative sentences about themselves. No effect of distancing in pronoun use was found: winning and losing participants equally preferred the use of we vs. the use of their own team name. We discuss the implications of our findings with regard to models of language production, the self-serving bias, and the use of games to induce emotions in a natural way.
Article
Full-text available
Studies concerning personal attachment have successfully used loved familiar faces to prompt positive affective and physiological reactions. Moreover, the processing of emotional words shows similar physiological patterns to those found with affective pictures. The objective of this study was to assess whether the passive viewing of loved names would produce a pattern of subjective and physiological reactivity similar to that produced by the passive viewing of loved faces. The results showed that, compared to neutral (unknown) and famous names, loved names produced a biphasic pattern of heart rate deceleration‐acceleration, heightened skin conductance and zygomaticus muscle activity, inhibition of corrugator muscle activity, and potentiation of the startle reflex response. This pattern of physiological responses was accompanied by subjective reports of higher positive affect and arousal for loved names than for neutral and famous ones. These findings highlight not only the similarity but also the differences between the affective processing of identity recognition by loved faces and names.
Article
Full-text available
Facial electromyography research shows that corrugator supercilii (“frowning muscle”) activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or “re-enactment” of emotion, as part of the retrieval of word meaning (e.g., of “furious”) and/or of building a situation model (e.g., for “Mark is furious”). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader’s affective evaluation of what language describes (e.g., when we like Mark being furious). In a previous experiment (‘t Hart et al., 2018) we demonstrated that neither language-driven simulation nor affective evaluation alone seem sufficient to explain the corrugator patterns that emerge during online language comprehension in these complex cases. Those results showed support for a multiple-drivers account of corrugator activity, where both simulation and evaluation processes contribute to the activation patterns observed in the corrugator. The study at hand replicates and extends these findings. With more refined control over when precisely affective information became available in a narrative, we again find results that speak against an interpretation of corrugator activity in terms of simulation or evaluation alone, and as such support the multiple-drivers account. Additional evidence suggests that the simulation driver involved reflects simulation at the level of situation model construction, rather than at the level of retrieving concepts from long-term memory. In all, by giving insights into how language-driven simulation meshes with the reader’s evaluative responses during an unfolding narrative, this study contributes to the understanding of affective language comprehension.
Article
Full-text available
When people make decisions about listening, such as whether to continue attending to a particular conversation or whether to wear their hearing aids to a particular restaurant, they do so on the basis of more than just their estimated performance. Recent research has highlighted the vital role of more subjective qualities such as effort, motivation, and fatigue. Here, we argue that the importance of these factors is largely mediated by a listener's emotional response to the listening challenge, and suggest that emotional responses to communication challenges may provide a crucial link between day-to-day communication stress and long-term health. We start by introducing some basic concepts from the study of emotion and affect. We then develop a conceptual framework to guide future research on this topic through examination of a variety of autonomic and peripheral physiological responses that have been employed to investigate both cognitive and affective phenomena related to challenging communication. We conclude by suggesting the need for further investigation of the links between communication difficulties, emotional response, and long-term health, and make some recommendations intended to guide future research on affective psychophysiology in speech communication.
Article
Full-text available
Smiles, produced by the bilateral contraction of the zygomatic major muscles, are one of the most powerful expressions of positive affect and affiliation and also one of the earliest to develop [1]. The perception-action loop responsible for the fast and spontaneous imitation of a smile is considered a core component of social cognition [2]. In humans, social interaction is overwhelmingly vocal, and the visual cues of a smiling face co-occur with audible articulatory changes on the speaking voice [3]. Yet remarkably little is known about how such 'auditory smiles' are processed and reacted to. We have developed a voice transformation technique that selectively simulates the spectral signature of phonation with stretched lips and report here how we have used this technique to study facial reactions to smiled and non-smiled spoken sentences, finding that listeners' zygomatic muscles tracked auditory smile gestures even when they did not consciously detect them.
Chapter
Full-text available
Imagine the thrill of navigating your raft through a stretch of whitewater rapids—the inherent danger, the racing of your heart, the tension in your arms, and the smile on your face once you’ve reached calmer waters. You feel great joy, and relief! Emotional events such as this are multidimensional phenomena including an appraisal of the situation, autonomic and muscular responses, as well as an associated representation of the physiological response, which forms the basis of an emotional feeling state (James, 1884). These feelings are perceptual states of our personal subjective experience and are therefore a form of consciousness (Clore, 1994; LeDoux, 1994). On the other hand, emotional processing which includes the evaluation of and response to emotional stimuli does not require consciousness. These nonconscious aspects of emotion will be the focus of this essay.
Article
Full-text available
In the context of colorectal cancer screening, we aimed to compare the effectiveness of different emotion-laden narratives, to investigate the specific emotions elicited at both subjective and physiological levels, and to test the effects of emotions explicitly expressed by the narrative character. Study 1 used a between-participants design comparing four conditions: relief-based narrative, regret-based narrative, control (test-uptake only) narrative, and standard invitation material (no-narrative condition). Study 2 used a mixed design, with the narrative content as a within-participants factor and whether emotions were expressed by the narrative character or not as between-participants factor. The main outcome measures were: intention to undergo testing (Studies 1 and 2), knowledge, risk perception, proportion of informed choices (Study 1), subjective emotional responses, changes in skin conductance, heart rate, and corrugator muscle activity (Study 2). In Study 1, relative to the non-narrative condition (51%), only the relief-based narrative significantly increased intention to undergo testing (86%). Relative to the standard invitation material, the narrative conditions did not decrease knowledge, alter risk perception, or decrease the proportion of informed choices. In Study 2, the relief-based narrative elicited the lowest self-reported negative affect, and received greater implicit attention, as suggested by the larger heart rate decrease. Making the emotions experienced by the narrative character explicit decreased negative affect, as indicated by the lower skin conductance and corrugator responses during reading. Our findings provide support for the use of a relief-based narrative with emotions expressed by the character in addition to the standard information material to promote colorectal cancer screening.
Article
Full-text available
Grounded theories of cognition claim that concept representation relies on the systems for perception and action. The sensory-motor grounding of abstract concepts presents a challenge for these theories. Some accounts propose that abstract concepts are indirectly grounded via image schemas or situations. Recent research, however, indicates that the role of sensory-motor processing for concrete concepts may be limited, providing evidence against the idea that abstract concepts are grounded via concrete concepts. Hybrid models that combine language and sensory-motor experience may provide a more viable account of abstract and concrete representations. We propose that sensory-motor grounding is important during acquisition and provides structure to concepts. Later activation of concepts relies on this structure but does not necessarily involve sensory-motor processing. Language is needed to create coherent concepts from diverse sensory-motor experiences. This article is part of the theme issue ‘Varieties of abstract concepts: development, use and representation in the brain’.
Article
Full-text available
Emotion concepts are important. They help us to understand, experience and predict human behaviour. Emotion concepts also link the realm of the abstract with the realm of bodily experience and actions. Accordingly, the key question is how such concepts are created, represented and used. Embodied cognition theories hold that concepts are grounded in neural systems that produce experiential and motor states. Concepts are also contextually situated and thus engage sensorimotor resources in a dynamic, flexible way. Finally, on that framework, conceptual understanding unfolds in time, reflecting embodied as well as linguistic and cultural influences. In this article, we review empirical work on emotion concepts and show how it highlights their grounded, yet dynamic and context-sensitive nature. The conclusions are consistent with recent developments in embodied cognition that allow concepts to be linked to sensorimotor systems, yet be flexibly sensitive to current representational and action needs. This article is part of the theme issue ‘Varieties of abstract concepts: development, use and representation in the brain’.
Article
Full-text available
Facial electromyography research shows that corrugator supercilii (“frowning muscle”) activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or “reenactment” of emotion, as part of the retrieval of word meaning (e.g., of “furious”) and/or of building a situation model (e.g., for “Mark is furious”). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). To examine what happens in such cases, we independently manipulated simulation valence and moral evaluative valence in short narratives. Participants first read about characters behaving in a morally laudable or objectionable fashion: this immediately led to corrugator activity reflecting positive or negative affect. Next, and critically, a positive or negative event befell these same characters. Here, the corrugator response did not track the valence of the event, but reflected both simulation and moral evaluation. This highlights the importance of unpacking coarse notions of affective meaning in language processing research into components that reflect simulation and evaluation. Our results also call for a re-evaluation of the interpretation of corrugator EMG, as well as other affect-related facial muscles and other peripheral physiological measures, as unequivocal indicators of simulation. Research should explore how such measures behave in richer and more ecologically valid language processing, such as narrative, refining our understanding of simulation within a framework of grounded language comprehension.
Article
Full-text available
Much emotion research has focused on the end result of the emotion process, categorical emotions, as reported by the protagonist or diagnosed by the researcher, with the aim of differentiating these discrete states. In contrast, this review concentrates on the emotion process itself by examining how (a) elicitation, or the appraisal of events, leads to (b) differentiation, in particular, action tendencies accompanied by physiological responses and manifested in facial, vocal, and gestural expressions, before (c) conscious representation or experience of these changes (feeling) and (d) categorizing and labeling these changes according to the semantic profiles of emotion words. The review focuses on empirical, particularly experimental, studies from emotion research and neighboring domains that contribute to a better understanding of the unfolding emotion process and the underlying mechanisms, including the interactions among emotion components.
Article
Full-text available
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics.
Article
Full-text available
The study examined how individual words occurring in mediated messages impact listeners’ emotional and cognitive responses. Scripts from actual radio advertisements were altered by replacing original words with target words that varied in valence—either positive, negative, or neutral. The scripts were then reproduced by non-professional speakers. Real-time processing of the target words was examined through the use of psychophysiological measures of dynamic emotional and cognitive responses collected from subjects (n=55) and time-locked to the stimuli. Recognition memory provided a measure of encoding efficiency. As predicted listeners had greater frown muscle responses following the onset of negatively valenced words compared to positively valenced words. Results also showed that positively valenced words elicited orienting responses in listeners but negatively valenced words did not. Recognition data show that positively valenced words were encoded better than neutrally valenced words, followed by negatively valenced words, which was consistent with the finding for the impact of emotional words on orienting responses. Keywords: emotional words, orienting response, valence, memory, audio processing
Article
Full-text available
This study examined the impact of narrative and emotion on processing of African American breast cancer survivor messages. We employed a two (narrative: present/absent) × three (emotional valence: pleasant/unpleasant/mixed) × four (message repetition) within-subjects experimental design. Findings indicated narrative messages with both pleasant and unpleasant emotional content (mixed) showed the greatest attention (heart rate deceleration) and negative emotional response (corrugator supercilii), while unpleasant narratives showed the least. Surprisingly, non-narrative messages showed the opposite pattern of results, where unpleasant messages showed the greatest attention and emotional response while non-narrative messages with mixed emotional content showed the least. These data initially point to the conclusion that attention for narrative material depends on the valence of emotion expressed in the message, which has both theoretical and practical implications.
Article
Full-text available
Festinger (1957) described cognitive dissonance as psychological discomfort that resulted from a cognitive inconsistency. Discussion of dissonance for the past 60 years has focused on the classic paradigms and the motivation to reduce dissonance, but some have noted that this represents a narrow application of Festinger’s ideas (Gawronski & Brannon, in press). Recent research has suggested, but not demonstrated, that simple cognitive inconsistencies may also evoke the affective and motivational state of dissonance (e.g., E. Harmon-Jones, Harmon-Jones, & Levy, 2015). In the current experiments, participants read sentences that ended with incongruent or congruent final words. In Study 1, sentences with incongruent endings led to more negative implicit affect than did sentences with congruent endings. Study 2 replicated this finding, with the addition of self-report and facial electromyography. These findings indicate that simple inconsistencies can evoke dissonance.
Article
Full-text available
According to embodiment theories, language and emotion affect each other. In line with this, several previous studies investigated changes in bodily responses including facial expressions, heart rate or skin conductance during affective evaluation of emotional words and sentences. This study investigates the embodiment of emotional word processing from a social perspective by experimentally manipulating the emotional valence of a word and its personal reference. Stimuli consisted of pronoun-noun pairs, i.e., positive, negative, and neutral nouns paired with possessive pronouns of the first or the third person (“my,” “his”) or the non-referential negation term (“no”) as controls. Participants had to quickly evaluate the word pairs by key presses as either positive, negative, or neutral, depending on the subjective feelings they elicit. Afterwards, they rated the intensity of the feeling on a non-verbal scale from 1 (very unpleasant) to 9 (very pleasant). Facial expressions (M. Zygomaticus, M. Corrugator), heart rate, and, for exploratory purposes, skin conductance were recorded continuously during the spontaneous and elaborate evaluation tasks. Positive pronoun-noun phrases were responded to most quickly and judged more often as positive when they were self-related, i.e., related to the reader’s self (e.g., “my happiness,” “my joy”), than when related to the self of a virtual other (e.g., “his happiness,” “his joy”), suggesting a self-positivity bias in the emotional evaluation of word stimuli. Physiologically, evaluation of emotional, unlike neutral, pronoun-noun pairs initially elicited an increase in mean heart rate irrespective of stimulus reference. Changes in facial muscle activity, M. Zygomaticus in particular, were most pronounced during spontaneous evaluation of positive other-related pronoun-noun phrases, in line with theoretical assumptions that facial expressions are socially embedded even in situations where no real communication partner is present.
Taken together, the present results confirm and extend the embodiment hypothesis of language by showing that bodily signals can be differently pronounced during emotional evaluation of self- and other-related emotional words.
Article
People react with Rapid Facial Reactions (RFRs) when presented with human facial emotional expressions. Recent studies show that RFRs are not always congruent with emotional cues. The processes underlying RFRs are still being debated. In our study described herein, we manipulate the context of perception and its influence on RFRs. We use a subliminal affective priming task with emotional labels. Facial electromyography (EMG) (frontalis, corrugator, zygomaticus, and depressor) was recorded while participants observed static facial expressions (joy, fear, anger, sadness, and neutral expression) preceded/not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry). When the priming word was incongruent, we observed a suppression of mimicry. Happiness was not affected by the priming word. RFRs thus appear to be modulated by the context and type of emotion that is presented via facial expressions.
Article
It is a common experience, and well established experimentally, that music can engage us emotionally in a compelling manner. The mechanisms underlying these experiences are receiving increasing scrutiny. However, the extent to which other domains of aesthetic experience can similarly elicit strong emotions is unknown. Using psychophysiology, neuroimaging, and behavioral responses, we show that recited poetry can act as a powerful stimulus for eliciting peak emotional responses, including chills and objectively measurable goosebumps that engage the primary reward circuitry. Importantly, while these responses to poetry are largely analogous to those found for music, their neural underpinnings show important differences, specifically with regard to the crucial role of the nucleus accumbens. We also go beyond replicating previous music-related studies by showing that peak aesthetic pleasure can co-occur with physiological markers of negative affect. Finally, the distribution of chills across the trajectory of poems provides insight into compositional principles of poetry.
Article
Language and emotions are closely linked. However, previous research suggests that this link is stronger in a native language (L1) than in a second language (L2) that was learned later in life. The present study investigates whether such reduced emotionality in L2 is reflected in changes in emotional memory and embodied responses to L2 in comparison to L1. Late Spanish/English bilinguals performed a memory task involving an encoding and a surprise retrieval phase. Facial motor resonance and skin conductance (SC) responses were recorded during encoding. The results give first indications that the enhanced memory for emotional vs. neutral content (EEM effect) is stronger in L1 and less present in L2. Furthermore, the results give partial support for decreased facial motor resonance and SC responses to emotional words in L2 as compared to L1. These findings suggest that embodied knowledge involved in emotional memory is associated with increased affective encoding and retrieval of L1 compared to L2.
Article
In this debate with Lisa Feldman Barrett, I defend a view of emotions as biological functional states. Affective neuroscience studies emotions in this sense, but it also studies the conscious experience of emotion ("feelings"), our ability to attribute emotions to others and to animals ("attribution", "anthropomorphizing"), our ability to think and talk about emotion ("concepts of emotion", "semantic knowledge of emotion"), and the behaviors caused by an emotion ("expression of emotions", "emotional reactions"). I think that the most pressing challenge facing affective neuroscience is the need to carefully distinguish between these distinct aspects of "emotion". I view emotion states as evolved functional states that regulate complex behavior, in both people and animals, in response to challenges that instantiate recurrent environmental themes. These functional states, in turn, can also cause conscious experiences (feelings), and their effects and our memories for those effects also contribute to our semantic knowledge of emotions (concepts). Cross-species studies, dissociations in neurological and psychiatric patients, and more ecologically valid neuroimaging designs should be used to partly separate these different phenomena.
Article
ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions.
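The window-selection problem this abstract describes can be illustrated with a short Monte Carlo simulation. The sketch below is not from the paper; the subject count, epoch length, window size, and noise model are illustrative assumptions. It generates pure-noise "ERPs" for two identical conditions, lets the grand-average difference wave pick the time window where the difference happens to be largest, and then runs a t-test in that same window.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20      # illustrative sample size
n_timepoints = 100   # samples per epoch
win_len = 10         # width of the measurement window
n_experiments = 1000
t_crit = 2.093       # two-tailed .05 critical t, df = 19

false_positives = 0
for _ in range(n_experiments):
    # Two conditions with NO true difference: pure noise on every trial
    cond_a = rng.normal(size=(n_subjects, n_timepoints))
    cond_b = rng.normal(size=(n_subjects, n_timepoints))
    diff = cond_a - cond_b                  # per-subject difference waves
    grand_diff = diff.mean(axis=0)          # grand-average difference wave

    # Biased step: choose the window where the grand-average difference
    # is largest in absolute value, THEN quantify amplitude there
    window_means = np.convolve(grand_diff, np.ones(win_len) / win_len,
                               mode="valid")
    start = int(np.argmax(np.abs(window_means)))
    scores = diff[:, start:start + win_len].mean(axis=1)

    # One-sample t-test of the selected-window amplitudes against zero
    t = scores.mean() / (scores.std(ddof=1) / np.sqrt(n_subjects))
    if abs(t) > t_crit:
        false_positives += 1

rate = false_positives / n_experiments
print(f"False-positive rate with data-driven window selection: {rate:.2f}")
assert rate > 0.05  # far above the nominal 5% alpha level
```

Because the test is run on the very window that the noise itself selected, the nominal 5% error rate is badly exceeded, which is the paper's core point; a priori windows, split-half selection, or mass-univariate corrections avoid this circularity.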
Article
While the basic nature of irony is saying one thing and communicating the opposite, it may also serve additional social and emotional functions, such as projecting humor or anger. Emoticons often accompany irony in computer-mediated communication, and have been suggested to increase enjoyment of communication. In the current study, we aimed to examine online emotional responses to ironic versus literal comments, and the influence of emoticons on this process. Participants read stories with a final comment that was either ironic or literal, praising or critical, and with or without an emoticon. We used psychophysiological measures to capture immediate emotional responses: electrodermal activity to directly measure arousal and facial electromyography to detect muscle movements indicative of emotional expressions. Results showed higher arousal, reduced frowning, and enhanced smiling for messages with rather than without an emoticon, suggesting that emoticons increase positive emotions. A tendency toward less negative responses (i.e., reduced frowning and enhanced smiling) for ironic than literal criticism, and less positive responses (i.e., enhanced frowning and reduced smiling) for ironic than literal praise suggests that irony weakens the emotional impact of a message. The present findings indicate the utility of a psychophysiological approach in studying online emotional responses to written language.
Article
In this article, we describe a paradigm using text-based vignettes for the study of social and cultural norm violation. Towards this aim, a range of scenarios depicting instances of norm violations was generated and tested with respect to their ability to evoke subjective and physiological responses. In Experiment 1, participants evaluated 29 vignettes on how upsetting, excusable, and realistic the described behaviour appeared to be. Based on those ratings, we selected and extended three norm-violation vignettes for Experiment 2, in which participants' physiological responses were obtained in addition to their subjective ratings. In both studies, the vignettes reliably elicited negative responses to norm violations, and these responses were significantly affected by the perceivers' level of ethnocultural empathy. The trait measure of cultural empathy further predicted facial electromyography (EMG) activity at muscle sites associated with disgust (M. Levator Labii), suggesting a potential moral response to norm-violating scenarios. We discuss the methodological merits and implications of this vignette paradigm for investigating perceived norm transgressions and make recommendations for future work.
Article
The affective dimensions of emotional valence and emotional arousal affect processing of verbal and pictorial stimuli. Traditional emotional theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3 participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with “up” responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down). 
Hence, in contexts in which participants' spontaneous responses are based on implicit associations between stimulus valence and response, there is no influence of arousal. In line with the valence-arousal conflict theory, arousal seems to affect participants' approach-withdrawal tendencies only when such tendencies are made explicit by the task, and a minimal degree of processing depth is required.
Article
What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the “basic six”—anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for review and latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models—both the basic six and affective-circumplex model (valence and arousal)—capture a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.
Article
Comprehension models do not often account for the multifaceted and emotionally charged nature of reading in real-world settings. In addition, studies of how reader emotions influence comprehension often yield conflicting findings due to lack of specificity regarding the process, emotion, and task under investigation. The PET (Process, Emotion, Task) framework considers how reader emotions differentially influence comprehension as a function of the specific comprehension process, type of emotion, and task features. It offers testable hypotheses regarding the influence of reader emotion for levels in the tripartite theory, resonance, integration and validation, inferences, and coherence-based retrieval. It also considers how important interactions arise through text, reader, and activity variables. Therefore, the particular comprehension process, the specific emotion, and varying task features can lead to different predictions regarding the effects of emotion on comprehension processes. The PET framework provides organization to guide research and clarify understandings regarding how to support readers.
Article
For decades, the extent to which visual imagery relies on the same neural mechanisms as visual perception has been a topic of debate. Here, we review recent neuroimaging studies comparing these two forms of visual experience. Their results suggest that there is a large overlap in neural processing during perception and imagery: neural representations of imagined and perceived stimuli are similar in the visual, parietal, and frontal cortex. Furthermore, perception and imagery seem to rely on similar top-down connectivity. The most prominent difference is the absence of bottom-up processing during imagery. These findings fit well with the idea that imagery and perception rely on similar emulation or prediction processes.
Book
A leading expert on evolution and communication presents an empirically based theory of the evolutionary origins of human communication that challenges the dominant Chomskian view. Human communication is grounded in fundamentally cooperative, even shared, intentions. In this original and provocative account of the evolutionary origins of human communication, Michael Tomasello connects the fundamentally cooperative structure of human communication (initially discovered by Paul Grice) to the especially cooperative structure of human (as opposed to other primate) social interaction. Tomasello argues that human cooperative communication rests on a psychological infrastructure of shared intentionality (joint attention, common ground), evolved originally for collaboration and culture more generally. The basic motives of the infrastructure are helping and sharing: humans communicate to request help, inform others of things helpfully, and share attitudes as a way of bonding within the cultural group. These cooperative motives each created different functional pressures for conventionalizing grammatical constructions. Requesting help in the immediate you-and-me and here-and-now, for example, required very little grammar, but informing and sharing required increasingly complex grammatical devices. Drawing on empirical research into gestural and vocal communication by great apes and human infants (much of it conducted by his own research team), Tomasello argues further that humans' cooperative communication emerged first in the natural gestures of pointing and pantomiming. Conventional communication, first gestural and then vocal, evolved only after humans already possessed these natural gestures and their shared intentionality infrastructure along with skills of cultural learning for creating and passing along jointly understood communicative conventions. 
Challenging the Chomskian view that linguistic knowledge is innate, Tomasello proposes instead that the most fundamental aspects of uniquely human communication are biological adaptations for cooperative social interaction in general and that the purely linguistic dimensions of human communication are cultural conventions and constructions created by and passed along within particular cultural groups.
Article
In this article I articulate the Theory of Affective Pragmatics, which combines insights from the Basic Emotion View and the Behavioral Ecology View of emotional expressions. My core thesis is that emotional expressions are ways of manifesting one’s emotions but also of representing states of affairs, directing other people’s behaviors, and committing to future courses of action. Since these are some of the main things we can do with language, my article’s take-home message is that, from a communicative point of view, much of what we can do with language we can also do with nonverbal emotional expressions.
Article
Objective: To study the effects of veterinarian communication (i.e., the information provided and gaze and body direction) and vaccination style on the emotions and physiologic reactions experienced by clients and on clients' evaluation of the expertise and trustworthiness of the veterinarian. Design: Simulation study. Participants: 20 small animal clients. Procedures: Participants were shown 12 videos of a female veterinarian in which she first provided information about puppy vaccination and then performed the procedure. The veterinarian's behavior varied regarding the information provided about the vaccination (i.e., scarce, factual, or emotional), her gaze and body direction (i.e., direct or 30° averted), and her vaccination style (i.e., routine or emotional). While the participants watched the videos, their corrugator supercilii muscle activity (corrugator supercilii muscles are activated when frowning) and skin conductance activity were measured. Participants also rated the emotions they experienced (i.e., valence and arousal) and assessed the veterinarian's behavior (i.e., expertise and trustworthiness). Results: Overall, emotional information, a direct gaze and body direction, and an emotional vaccination style were associated with more pleasant emotions and higher ratings of the expertise and trustworthiness of the veterinarian's behavior by clients. Conclusions and Clinical Relevance: Results suggested that through certain behavioral actions, veterinarians may positively affect the emotions and feelings experienced by clients during veterinary clinic visits, even in the case of vaccination visits, which can be considered routine visits from the viewpoint of the veterinarian.
Article
This study aimed to examine and replicate the first affective response to emotional stimuli, the premier expression, which emerges between 0 and 300 milliseconds after the onset of the stimulus. We examined whether this expression can be observed in response to mildly emotion-inducing words. By recording facial electromyographic activity (fEMG) over the corrugator supercilii and zygomaticus major muscles, together with the electroencephalogram, in 63 undergraduates, the present report provides evidence that the premier expression is generated in the corrugator supercilii muscle in response to negative emotional words. We also found positive correlations between fEMG and ERPs, identifying a relationship between the first visual input of emotional words and the facial response; the correlation between ERPs and the premier expression was especially pronounced for unpleasant stimuli. Given this, we have identified a negative-valence, asymmetrical facial response.
Article
To explain how abstract concepts are grounded in sensory-motor experiences, several theories have been proposed. I will discuss two of these proposals, Conceptual Metaphor Theory and Situated Cognition, and argue why they do not fully explain grounding. A central idea in Conceptual Metaphor Theory is that image schemas ground abstract concepts in concrete experiences. Image schemas might themselves be abstractions, however, and therefore do not solve the grounding problem. Moreover, image schemas are too simple to explain the full richness of abstract concepts. Situated cognition might provide such richness. Research in our laboratory, however, has shown that even for concrete concepts, sensory-motor grounding is task dependent. Therefore, it is questionable whether abstract concepts can be significantly grounded in sensory-motor processing.
Article
It is widely accepted that emotional expressions can be rich communicative devices. We can learn much from the tears of a grieving friend, the smiles of an affable stranger, or the slamming of a door by a disgruntled lover. So far, a systematic analysis of what can be communicated by emotional expressions of different kinds and of exactly how such communication takes place has been missing. The aim of this article is to introduce a new framework for the study of emotional expressions that I call the theory of affective pragmatics (TAP). As linguistic pragmatics focuses on what utterances mean in a context, affective pragmatics focuses on what emotional expressions mean in a context. TAP develops and connects two principal insights. The first is the insight that emotional expressions do much more than simply expressing emotions. As proponents of the Behavioral Ecology View of facial movements have long emphasized, bodily displays are sophisticated social tools that can communicate the signaler's intentions and requests. Proponents of the Basic Emotion View of emotional expressions have acknowledged this fact, but they have failed to emphasize its importance, in part because they have been in the grip of a mistaken theory of emotional expressions as involuntary readouts of emotions. The second insight that TAP aims to articulate and apply to emotional expressions is that it is possible to engage in analogs of speech acts without using language at all. I argue that there are important and so far largely unexplored similarities between what we can “do” with words and what we can “do” with emotional expressions. In particular, the core tenet of TAP is that emotional expressions are a means not only of expressing what's inside but also of directing other people's behavior, of representing what the world is like and of committing to future courses of action. 
Because these are some of the main things we can do with language, the take home message of my analysis is that, from a communicative point of view, much of what we can do with language we can also do with non-verbal emotional expressions. I conclude by exploring some reasons why, despite the analogies I have highlighted, emotional expressions are much less powerful communicative tools than speech acts.
Article
Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether cultural-specific knowledge and presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without label, then with label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information, both at the verbal and non-verbal level. They also provided evidence that availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background.
Article
Objective: To examine whether emotional functioning can be observed in patients who are behaviourally non-responsive using peripheral markers of emotional functioning. Method: We tested two patients, both diagnosed as being in a vegetative state (VS) following hypoxia secondary to cardiac arrest. Thirty-seven healthy participants with no history of neurological illness served as a control group. The activity of two facial muscles (zygomaticus major, corrugator supercilii) was measured using facial electromyography (EMG) to probe for patterned responses that differentiate between auditorily presented joke and non-joke stimuli in VS patients. Results: One of the two VS patients we tested demonstrated greater zygomatic and reduced corrugator activity in response to jokes compared with non-jokes. Critically, these responses followed the pattern and temporal profile of muscle activity observed in our healthy control sample. Conclusions: Despite their behaviourally non-responsive profile, some patients diagnosed as VS may retain aspects of emotional experience. Significance: Our findings represent, to our knowledge, the first demonstration that a patient diagnosed as VS can exhibit intact emotional responses to humor as assessed by facial EMG. Therefore, our approach may constitute a feasible bedside tool capable of providing novel insight into the mental and emotional lives of patients who are behaviourally non-responsive.
Article
The present study examined whether emotionally congruent facial muscular activation, a somatic index of emotional language embodiment, can be elicited by reading subject-verb sentences composed of action verbs that refer directly to facial expressions (e.g., Mario smiles), but also by reading more abstract state verbs, which provide more direct access to the emotions felt by the agent (e.g., Mario enjoys). To address this issue, we measured facial electromyography (EMG) while participants evaluated state-verb and action-verb sentences. We found that emotional sentences with both verb categories had valence-congruent effects on emotional ratings and corresponding facial muscle activations. As expected, state-verb sentences were judged with higher valence ratings than action-verb sentences. Moreover, although emotion-congruent facial activations were similar for the two linguistic categories, in a late temporal window we found a tendency toward greater EMG modulation when reading action-verb relative to state-verb sentences. These results support embodied theories of language comprehension and suggest that understanding emotional action- and state-verb sentences relies on partially dissociable motor and emotional processes.
Article
Facial electromyography (EMG) is a psychophysiological technique that is often used in the study of emotions. However, there is also some research employing this technique to study the facial expression of effort. Using facial EMG, it has been shown that facial muscle activity is related to effort during mental tasks. It was found that facial EMG amplitude increases with task difficulty and with time on task during a two-choice serial reaction task. However, a relationship between facial EMG and effort has not always been found during mental tasks.

Recently, we have started to employ facial EMG to study effort during physical tasks. We have found, by manipulating exercise intensity and muscle fatigue during a leg-extension task, that facial EMG correlates positively with effort during weightlifting exercise. We have also demonstrated that facial EMG reflects exercise intensity during constant-workload cycling to exhaustion. Interestingly, we found that facial EMG increases significantly with exercise duration during high-intensity cycling, but not during moderate-intensity cycling.

A plausible neurobiological mechanism that might explain the relationship between facial EMG and effort is motor overflow. Motor overflow refers to involuntary muscle contractions that may accompany voluntary muscle contractions. This is thought to be caused by spreading of excitation in the motor cortex, because of increased excitability. In healthy adults, motor overflow is usually seen only during tasks that require considerable (physical) effort. This might explain the difference between high-intensity exercise, where the relationship between facial EMG and effort is strong and consistent, and moderate-intensity exercise or mental tasks, where a relationship between facial EMG and effort has not consistently been found.

Facial EMG may be used in the future alongside ratings of perceived effort as an extra, more objective measure of effort. This might have additional value in groups of people who have difficulties with rating effort, or when it is hard to obtain ratings, for example during maximal efforts of a few seconds and very high-intensity exercise of short duration. An additional benefit of facial EMG as a measure of effort is that it is a continuous measure.
Article
Reading comprehension, much like comprehension of situations and comprehension of oral language, is embodied. In all cases, comprehension is the ability to take effective action on the basis of affordances related to the body, the physical world, and personal goals and cultural norms. In language contexts, action-based comprehension arises from simulating the linguistic content using neural and bodily systems of perception, action, and emotion. Within this framework, a new approach to teaching reading comprehension is described: Teach children how to simulate while reading. The Moved by Reading intervention teaches simulation in two stages. In the first stage, physical manipulation, children manipulate toys to simulate the content of what they are reading. After success in physically manipulating the toys, the children are taught to manipulate the toys in imagination. Research demonstrates that both physical and imagined manipulation leads to large gains in memory and comprehension.