Table 3 - uploaded by Emiliana Simon-Thomas
Positive Emotion Terms and Descriptive Scenarios Used to Prompt Vocal Bursts From Posers (table columns: Emotion, Scenario)

Source publication
Article
Full-text available
Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts...

Context in source publication

Context 1
... burst stimuli. Vocal bursts for the 13 positive emotions, collected as described in the method description for Study 1 (see Table 3), were used in Study 2. ...

Similar publications

Article
Full-text available
Islamist terrorist attacks have become a salient threat to Western countries, and news coverage about such crimes is a key predictor of public emotional reactions and policy support. We examine the effects of two key characteristics of terrorism news coverage: (1) the victim’s religion and (2) first-person narratives that facilitate perspective tak...

Citations

... When it comes to the non-verbal communication of emotions, it is apparent that the human voice is becoming increasingly important in psychotherapeutic dialog (Rice and Kerr, 1986; Tomicic and Martínez Guzmán, 2011) and is one of the main ways in which emotions are expressed. There is ample research investigating the vocal expression of emotion (Kappas et al., 1991; Bachorowski and Owren, 1995; Bachorowski, 1999; Scherer, 2003; Scherer et al., 2003, 2011; Simon-Thomas et al., 2009) and demonstrating the relationship between emotional state and the acoustic characteristics of vocalizations (Kappas et al., 1991; Bachorowski and Owren, 1995; Banse and Scherer, 1996; Scherer, 2003; Scherer et al., 2003). In the literature this is called the vocal expression of emotion (Kappas et al., 1991; Bachorowski and Owren, 1995; Banse and Scherer, 1996). ...
... In contrast to the situation regarding self-criticism, there has been greater interest over the last few years in examining and differentiating a broader range of emotions, including positive emotions such as compassion (Simon-Thomas et al., 2009; Kamiloğlu et al., 2020). In their review of vocal expressions of positive emotions, Kamiloğlu et al. (2020) systematically compared 108 studies investigating acoustic features across different positive emotions, highlighting differences in pitch, loudness, and speech rate. ...
... For this reason, in this study the expressions of compassion and self-compassion are understood to be equivalent. Taken together, these studies show that compassion is identified as a low-arousal, adaptive positive emotion (Simon-Thomas et al., 2009; Kamiloğlu et al., 2020) marked by a moderate pitch and intensity, similar to love and kindness (Sauter, 2017; Kamiloğlu et al., 2020). ...
Article
Full-text available
Introduction When it comes to the non-verbal communication of emotions, it is apparent that the human voice is one of the main ways of expressing emotion and is increasingly important in psychotherapeutic dialog. There is ample research focusing on the vocal expression of emotions. However, to date the analysis of the vocal quality of clients' in-session emotional experience remains largely unexplored. Moreover, there is generally a gap within the psychotherapy literature in the understanding of the vocal character of self-compassion, self-criticism, and protective anger. Methods In this study we investigated how clients vocally convey self-compassion, self-protection, and self-criticism in Emotion Focused Therapy sessions. For this purpose we analyzed 12 commercially available Emotion Focused Therapy videos that employed a two-chair or empty-chair dialog. Praat software was used for the acoustic analysis of the most common features: pitch (known as fundamental frequency or F0) and intensity (voice amplitude, i.e., loudness). Results Results showed that intensity was significantly higher for self-criticism and self-protection than for self-compassion. Regarding pitch, the findings showed no significant differences between the three states. Discussion More research analyzing acoustic features in a larger number of cases is required to obtain a deeper understanding of clients' vocal expression of self-compassion, self-protection, and self-criticism in Emotion Focused Therapy.
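As a concrete illustration of this kind of Praat-based measurement, here is a minimal Python sketch using the praat-parselmouth bindings to Praat; the file name is a hypothetical placeholder, and the default analysis settings stand in for whatever parameters the study actually used.

```python
# Minimal sketch: extract mean pitch (F0) and mean intensity for one
# speech segment, as in the Praat analysis described above. Assumes the
# praat-parselmouth package; "client_segment.wav" is a placeholder file.
import parselmouth

snd = parselmouth.Sound("client_segment.wav")

# Pitch track via Praat's default method; unvoiced frames come back as 0 Hz.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
voiced_f0 = f0[f0 > 0]                      # keep voiced frames only
mean_f0 = voiced_f0.mean() if voiced_f0.size else float("nan")

# Intensity contour in dB.
intensity = snd.to_intensity()
mean_db = intensity.values.mean()

print(f"mean F0: {mean_f0:.1f} Hz, mean intensity: {mean_db:.1f} dB")
```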
... Variations emerge from a wide range of sources, including historic context, cultural norms, social class and positioning, gender roles, individual proclivities, and other factors (e.g., Ejova et al., 2021). While variable, psychologists suggest that awe elicits a unique universal physiological response (chills and goosebumps), along with vocal and facial expressions that are cross-culturally recognizable (Cordaro et al., 2016; Maruskin et al., 2012; Shiota et al., 2003; Simon-Thomas et al., 2009). There is also evidence that some primates, including chimpanzees, exhibit awe-like responses to thunder, waterfalls, and other impressive conditions (Goodall, 2005). ...
Article
Full-text available
Archaeologists are increasingly interested in studying the role emotions have played in past human decision making. This paper demonstrates how awe is under-appreciated within archaeology despite it being uniquely available to archaeological research given its connection to monumental architecture and communal rituals. Archaeological engagement with awe is particularly important as psychological research has demonstrated that it is a prosocial emotion that leads to the creation of more extensive and stronger social bonds between individuals. A novel interpretation of Poverty Point (USA) is provided to illustrate the importance of studying awe, as this massive earthwork site was built more than 3000 years ago through large-scale gatherings. Reconsidered as a place of awe, Poverty Point is recast as an emotional locale where larger social and cultural identities and relationships were formed.
... However, the use of only basic emotions can be an issue because they include only one positive emotion, happiness, so participants need only notice a difference in valence to "recognize" an emotion expression as happy. There has been growing awareness of the value of expanding the types of emotions studied, given that this is a very limited range of possible emotions (e.g., Bänziger et al., 2009; Cowen et al., 2019; Simon-Thomas et al., 2009). For example, Cowen et al. (2019) found that participants could recognize 24 different emotions from short vocal bursts. ...
... Emotion recognition varies depending on the emotion expressed and on whether the individual has an accent or not (Bänziger et al., 2009; Laukka et al., 2016; Scherer & Scherer, 2011). The voice may be particularly well-suited for conveying positive emotions (Sauter & Scott, 2007), as some positive emotions, such as amusement and relief, are more easily recognized through the voice (Shiota et al., 2017; Simon-Thomas et al., 2009). ...
Article
Full-text available
The present study examined individuals' ability to identify emotions expressed in vocal cues depending on the accent of the speaker as well as the intensity of the emotion being expressed. Australian and Canadian participants listened to Australian and Canadian speakers express pairs of emotions that fall within the same emotion family but vary in intensity (e.g., anger vs. irritation). Accent of the listener was unrelated to emotion recognition. Instead, performance varied more with emotion intensity and speaker sex: Australian and Canadian participants generally found high-intensity emotions easier to recognize than low-intensity emotions, and emotions conveyed by females easier than those conveyed by males. Participants found it particularly difficult to recognize the expressed emotion of Australian males. The results suggest the importance of considering the context in which emotion recognition is embedded.
... For example, anger, fear, happiness, sadness, disgust, and surprise have been studied extensively to examine whether these can be perceived from facial expressions. Beyond these six emotions, more complex emotions, such as contempt, awe, amusement, and enthusiasm, can also be perceived from vocal expressions (Simon-Thomas et al., 2009). ...
... We chose vocal expressions rather than facial expressions for comparison with touch because several positive emotions other than happiness can be perceived from voice. It should be noted that several studies have repeatedly shown that vocal expressions can discriminate between positive emotion categories other than happiness, such as amusement, pleasure, relief, and triumph (e.g., Simon-Thomas et al., 2009), although recent research has also reported that various positive emotions can be perceived from facial expressions (Cowen & Keltner, 2020). We adopted the same paradigm used in previous research on touch. ...
Article
Full-text available
Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the level of emotional valence. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering the syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy for negative emotions was higher for voice than for touch, whereas that for positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
... Humans also express emotion via brief, non-speech sounds called vocal bursts, also referred to as "affect bursts", "emotional vocalizations", or "nonverbal vocalizations": sounds like laughter, cries, sighs, moans, and groans that are not speech and, evolutionarily speaking, likely predate it. In [6], humans were found to be able to distinguish 14 emotional states from these vocal bursts, and a recent paper [7] by Cowen, Keltner, and others showed the ability to distinguish 24 emotional states from these brief vocalizations. ...
Preprint
Full-text available
Vocal Bursts -- short, non-speech vocalizations that convey emotions, such as laughter, cries, sighs, moans, and groans -- are an often-overlooked aspect of speech emotion recognition, but an important aspect of human vocal communication. One barrier to the study of these interesting vocalizations is a lack of large datasets. I am pleased to introduce the EmoGator dataset, which consists of 32,040 samples from 365 speakers, totaling 16.91 hours of audio; each sample was classified into one of 30 distinct emotion categories by the speaker. Several approaches to constructing classifiers to identify emotion categories will be discussed, and directions for future research will be suggested. The dataset is available for download from https://github.com/fredbuhl/EmoGator.
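From the figures quoted, the average clip is short: 16.91 hours of audio across 32,040 samples works out to roughly 16.91 × 3600 / 32,040 ≈ 1.9 s per vocal burst.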
... The 24 dimensions of vocal expression that we uncover were, across four or more cultures, associated with admiration/aesthetic appreciation/awe, adoration/love/sympathy, anger/distress, awkwardness/embarrassment, boredom, concentration/contemplation/calmness, confusion/doubt, craving/interest/satisfaction, disappointment, disgust, excitement/triumph, fear/horror, horror only, interest, joy/amusement, pain, pride/triumph, realization, relief, sadness, satisfaction, sexual desire, surprise (positive or negative) and tiredness. These dimensions of meaning largely explain the dimensions of vocal expression found to be recognized as distinct in smaller-scale studies 1,4,13,16,17,26,46. They also largely overlap with those found to be distinguished in facial expression, encompassing the distinct facial expressions that have been found to occur in similar contexts worldwide 47 and to be depicted in consistent contexts in ancient American sculptures 28, providing further evidence that a wide range of emotions may have associated expressions with shared emotional meanings across cultures. ...
... The 48 emotions used to rate each vocal burst were derived from a comprehensive examination of the meanings vocal bursts have been previously posited to convey 13,15-17,46,52 and the words that people regularly use to describe emotion-related experiences and expressions 23,29-31. In both phases, participants listened to each vocal burst and were asked, "What emotions is this person feeling? ...
Article
Full-text available
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called ‘vocal bursts’. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail. ‘Vocal bursts’ such as sighs, shrieks and shouts are human emotional vocalizations. In this study, Brooks et al. reveal similarities and differences in the emotional meaning of vocal bursts across five cultures.
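As a rough illustration of the kind of model described, the sketch below maps acoustic features of a vocal burst through a shared low-dimensional bottleneck to per-country emotion ratings. This is a minimal PyTorch sketch under assumed layer sizes and feature dimensions, not the authors' architecture; only the counts (24 dimensions, 48 emotions, five countries) come from the abstract.

```python
# Illustrative sketch (not the authors' implementation): a shared encoder
# compresses acoustic features into 24 dimensions, and one output head per
# country predicts that culture's 48 emotion ratings.
import torch
import torch.nn as nn

N_ACOUSTIC = 512   # input acoustic feature size (assumed)
N_DIMS = 24        # shared bottleneck: "kinds" of vocal expression
N_EMOTIONS = 48    # emotion ratings collected per vocal burst
COUNTRIES = ["China", "India", "South_Africa", "USA", "Venezuela"]

class VocalBurstModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: acoustics -> 24 interpretable dimensions.
        self.encoder = nn.Sequential(
            nn.Linear(N_ACOUSTIC, 256), nn.ReLU(),
            nn.Linear(256, N_DIMS),
        )
        # One linear head per country captures culture-specific meanings.
        self.heads = nn.ModuleDict({c: nn.Linear(N_DIMS, N_EMOTIONS) for c in COUNTRIES})

    def forward(self, x, country):
        z = self.encoder(x)          # shared 24-d representation
        return self.heads[country](z)

model = VocalBurstModel()
ratings = model(torch.randn(8, N_ACOUSTIC), "USA")  # a batch of 8 bursts
print(ratings.shape)  # torch.Size([8, 48])
```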
... Non-verbal speech, known as a vocal burst (VB), is a human vocal signal that carries no linguistic meaning but can be described with words such as laughter, groans, and grunts. Recent research [3] shows that vocal bursts can express emotion even though they carry no semantic content. Recent work [4,5] shows that vocal bursts can carry information about 10 basic emotions, which could make existing SER systems more robust. ...
Article
Full-text available
Speech emotion recognition (SER) is one of the most exciting topics many researchers have recently been involved in. Although much research has been conducted on this topic recently, emotion recognition via non-verbal speech (known as the vocal burst) is still sparse. Vocal bursts are concise and carry no semantic content, which makes them harder to deal with than verbal speech. Therefore, in this paper we proposed a self-relation attention and temporal awareness (SRA-TA) module to tackle this problem, which can capture long-term dependencies and focus on the salient parts of the audio signal. Our proposed method contains three main stages. Firstly, latent features are extracted from the raw audio signal and its Mel-spectrogram using a self-supervised learning model. After the SRA-TA module is used to capture the valuable information from these latent features, all features are concatenated and fed into ten individual fully-connected layers to predict the scores of 10 emotions. Our proposed method achieves a mean concordance correlation coefficient (CCC) of 0.7295 on the test set, taking first place in the high-dimensional emotion task of the 2022 ACII Affective Vocal Burst Workshop & Challenge.
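For reference, the concordance correlation coefficient used as the evaluation metric above combines correlation with agreement in mean and variance; here is a minimal NumPy sketch (the function name and toy data are ours, not from the cited paper):

```python
# Minimal NumPy sketch of Lin's concordance correlation coefficient (CCC).
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D arrays."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

# Example: well-correlated but biased predictions score below 1.
rng = np.random.default_rng(0)
labels = rng.random(100)
preds = labels * 0.9 + 0.1
print(round(ccc(labels, preds), 3))
```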
... Many emotions, including anger, happiness, disgust, and fear, can be effectively induced with mental imagery techniques (Siedlecka & Denson, 2019). Moreover, textual scenarios have also been used in emotion production studies to generate a wide range of emotional stimuli in the form of vocalizations (Simon-Thomas et al., 2009) or facial expressions (Cordaro et al., 2018), which can then be further analyzed with semantic space approaches (i.e., having participants rate the emotions portrayed in the vocalizations or facial expressions). ...
Article
Full-text available
One of the key unresolved issues in affective science is understanding how the subjective experience of emotion is structured. Semantic space theory has shed new light on this debate by applying computational methods to high-dimensional data sets containing self-report ratings of emotional responses to visual and auditory stimuli. We extend this approach here to the emotional experience induced by imagined scenarios. Participants chose at least one emotion category label among 34 options or provided ratings on 14 affective dimensions while imagining two-sentence hypothetical scenarios. A total of 883 scenarios were rated by at least 11 different raters on categorical or dimensional qualities, with a total of 796 participants contributing to the final normed stimulus set. Principal component analysis reduced the categorical data to 24 distinct varieties of reported experience, while cluster visualization indicated a blended, rather than discrete, distribution of the corresponding emotion space. Canonical correlation analysis between the categorical and dimensional data further indicated that category endorsement accounted for more variance in dimensional ratings than vice versa, with 10 canonical variates unifying change in category loadings with affective dimensions such as valence, arousal, safety, and commitment. These findings indicate that self-reported emotional responses to imaginative experiences exhibit a clustered structure, although clusters are separated by fuzzy boundaries, and variable dimensional properties associate with smooth gradients of change in categorical judgments. The resultant structure supports the tenets of semantic space theory and demonstrates some consistency with prior work using different emotional stimuli.
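The analysis pipeline described (principal component analysis on category endorsements, then canonical correlation with dimensional ratings) can be sketched with scikit-learn; the random data below stands in for the study's rating matrices, with shapes taken from the abstract:

```python
# Illustrative sketch of the analysis pipeline described above: PCA on
# categorical emotion endorsements, then canonical correlation against
# dimensional affect ratings. Data and variable names are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_scenarios = 883
categorical = rng.random((n_scenarios, 34))   # endorsement of 34 labels
dimensional = rng.random((n_scenarios, 14))   # ratings on 14 dimensions

# Reduce the 34 category endorsements to 24 components, as in the study.
pca = PCA(n_components=24)
category_scores = pca.fit_transform(categorical)

# Relate categorical structure to dimensional ratings via 10 canonical variates.
cca = CCA(n_components=10)
cat_variates, dim_variates = cca.fit_transform(category_scores, dimensional)
print(cat_variates.shape, dim_variates.shape)  # (883, 10) (883, 10)
```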
... The tool incorporates the use of issue-specific questions asked during an automated telephonic interview to evaluate the presence or absence of risk signatures in the voice. Researchers have provided evidence that perceptions, cognitions, and emotional arousal are communicated through the voice (Cowen et al., 2019;Simon-Thomas et al., 2009). In the automated process, the voice characteristics evaluated are the result of distinct neurocognitive reactions to specific screening questions and have neural correlates (Dedovic et al., 2009;Farrow et al., 2013;Muehlhan et al., 2013). ...
Article
Ever since criminal networks recognized the profit in oil and energy pipelines, the theft of hydrocarbon-based products has jeopardized the stability and security of global regions. Although numerous pipelines run across land and below the oceans, tankers serve as the most efficient way of transporting crude oil and natural gas between continents. This applied research study describes a novel AI-powered, voice-based tool that identified human risk in a multi-national Southeast Asian energy company weakened by large-scale internal theft. Of the completed automated interviews, 78.6 percent resulted in risk-positive evaluations. Ground truth from testimonial interviews and an internal investigation verified 92.6 percent of scrutinized flags. Previously undiscovered details were identified by the automated tool regarding the scope, size, and scale of crime issues, involving all job levels and local politicians. Analyses provided evidence of the technology's non-biased nature and demonstrated that its algorithm-generated outputs may be more dependable than observable behavioural cues. Findings (1) describe a potential decision support tool for detecting risk in situ, (2) contribute to the employee fraud and internal theft literature, and (3) indicate that in the Southeast Asian energy industry, approval for the approach described and recognition of its contribution are overwhelming.
... Stimuli consisted of nonlinguistic utterances commonly referred to as vocal bursts. These vocalizations predate spoken words (Banse & Scherer, 1996; Cordaro et al., 2016; Prather et al., 2009; Snowdon, 2003) in conveying specific emotional content (Simon-Thomas et al., 2009; Sauter et al., 2010b; Cordaro et al., 2016; Bryant, 2021), which is interpreted not only by conspecifics but also, to some extent, by members of other species (Filippi et al., 2017; Fritz et al., 2018). Two thousand and thirty-two (2,032) vocal bursts recorded by 56 speakers (26 F, 30 M, age range: 18-35) were collected from two previously published datasets: 425 utterances recorded by 11 professional actors were obtained from the VENEC corpus (Laukka et al., 2013), and 1,607 vocalizations recorded by 45 naïve subjects (countries of origin: United States, India, Kenya, Singapore) were retrieved from Cowen and colleagues (Cowen et al., 2019). ...
Article
Full-text available
Vocal bursts are non-linguistic, affectively laden sounds with a crucial function in human communication, yet their affective structure is still debated. Studies have shown that ratings of valence and arousal follow a V-shaped relationship for several kinds of stimuli: high arousal ratings are more likely to go on a par with very negative or very positive valence. Across two studies, we asked participants to listen to 1,008 vocal bursts and judge both how they felt when listening to the sound (i.e., the core affect condition) and how the speaker felt when producing it (i.e., the perception of affective quality condition). We show that a V-shaped fit outperforms a linear model in explaining the valence-arousal relationship across conditions and studies, even after equating the number of exemplars across emotion categories. Also, although subjective experience can be significantly predicted using affective quality ratings, core affect scores are significantly lower in arousal, less extreme in valence, more variable between individuals, and less reproducible between studies. Nonetheless, the proportion of stimuli rated with opposite valence between conditions ranges from 11% (study 1) to 17% (study 2). Lastly, we demonstrate that ambiguity in valence (i.e., high between-participants variability) explains violations of the V-shape and relates to higher arousal.
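The model comparison described above can be illustrated with a short sketch that fits both a linear and a V-shaped model of arousal on valence; the functional form arousal = a + b·|valence − c| and the toy data are our illustrative choices, not necessarily the authors' exact specification.

```python
# Minimal sketch: compare a linear fit with a V-shaped fit of arousal on
# valence, as in the model comparison described above. Toy data only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def v_shape(valence, a, b, c):
    return a + b * np.abs(valence - c)

rng = np.random.default_rng(0)
valence = rng.uniform(-1, 1, 500)
arousal = 0.2 + 0.6 * np.abs(valence) + rng.normal(0, 0.1, 500)

# Linear model R^2.
lin = linregress(valence, arousal)
r2_linear = lin.rvalue ** 2

# V-shaped model R^2 from residual variance.
params, _ = curve_fit(v_shape, valence, arousal, p0=(0.0, 1.0, 0.0))
resid = arousal - v_shape(valence, *params)
r2_v = 1 - resid.var() / arousal.var()

print(f"linear R^2 = {r2_linear:.3f}, V-shaped R^2 = {r2_v:.3f}")
```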