Proportion of correct emotion judgements given by musicians and nonmusicians in the three stimuli modalities when collapsing across emotion. Error bars represent standard errors.

Source publication
Article
Full-text available
Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and a...

Similar publications

Article
Full-text available
Music is a powerful influencer. The ways in which people interpret music are determinative of how the music makes an individual feel and thus, how it influences them. The interpretation of emotion in music has been well-studied in musicology. This study seeks to investigate a specific under-researched factor of emotion recognition in music: the mus...

Citations

... Therefore, in classroom music teaching, the only way to awaken students' musical ears is to lead them to soar on the wings of song in the concert hall. Farmer, E. et al. emphasized that music training can improve emotion recognition ability, which can be applied to the rehabilitation of emotional disorders and can also strengthen musicians' capacity for emotional expression and empathy [14]. The life of an American musician, especially in the field of music education, offers inspiration, including a music teaching model with strong personal characteristics and collaborative measures to expand the impact of teaching [15]. ...
Article
Full-text available
This study explores a university music teaching system enhanced by auditory perception technology. It delves into the intricacies of auditory perception technology and its integration with multimodal music education, highlighting potential applications in university settings. Using short-time Fourier transform and wavelet transform techniques, the system computes Mel-frequency cepstral coefficients (MFCCs) and first-order differential dynamic music features, which are then used to construct the multimodal teaching framework through computer programming languages. The multimodal music teaching system was tested and analyzed using data analysis software. The results showed significant differences (P < 0.05) between the experimental and control groups in four aspects of music singing skill: fluency (0.005), flexibility (0.003), originality (0.001), and total singing-skill score (0.004). This study not only enriches theoretical research on multimodal teaching innovations in music but also promotes the development of university music education.
... In that vein, musicality has been linked to skills like empathy, emotional differentiation, mind reading and decision making, all of which could foster emotional processing (Clark et al., 2015; Lima & Castro, 2011; Trimmer & Cuddy, 2008). However, a benefit of musicality for emotional processing seems contained within the auditory modality, as it has not been observed for facial or lexical stimuli (Correia et al., 2022; Farmer et al., 2020; Twaite, 2016; Weijkamp & Sadakata, 2017). Further, a comparison of brain responses to vocal emotions between musicians and non-musicians suggests differences at early stages associated with acoustic analysis (Pinheiro et al., 2015; Rigoulot et al., 2015; Strait et al., 2009). ...
Article
Full-text available
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
... That said, the researchers found that musicians held an advantage over non-musicians in detecting prosodic pitch violations across native and non-native language contexts. Moreover, behavioural studies have found that musicians outperform non-musicians in matching spoken utterances to their intonation melodies 36 and identifying emotional prosody in speech [37][38][39] . Interestingly, similar results were seen in a longitudinal study with 6-year-old children, with those who were randomly assigned to receive 1 year of musical training in the form of keyboard or vocal lessons outperforming those who received no lessons when tested on the identification of emotional prosody in speech 37 . ...
Article
Full-text available
Musical training has been associated with various cognitive benefits, one of which is enhanced speech perception. However, most findings have been based on musicians taking part in ongoing music lessons and practice. This study thus sought to determine whether the musician advantage in pitch perception in the language domain extends to individuals who have ceased musical training and practice. To this end, adult active musicians (n = 22), former musicians (n = 27), and non-musicians (n = 47) were presented with sentences spoken in a native language, English, and a foreign language, French. The final words of the sentences were either prosodically congruous (spoken at normal pitch height), weakly incongruous (pitch was increased by 25%), or strongly incongruous (pitch was increased by 110%). Results of the pitch discrimination task revealed that although active musicians outperformed former musicians, former musicians outperformed non-musicians in the weakly incongruous condition. The findings suggest that the musician advantage in pitch perception in speech is retained to some extent even after musical training and practice is discontinued.
... Hence, the final sample was composed of 31 participants. The participants were from five countries (14 British, 11 Chinese, four Indian, one Malaysian, and one Danish) and were all fluent in English. There were 10 participants in the MT group (Mean age = 31.37 ...
... Musicians' emotion processing advantage has been shown primarily for music excerpts 25,26,86 and speech prosody 10,39 , which involve the processing of sound features shared by music and speech 87,88 . Also, our results are in line with those of Correia et al. 32 showing that musicians did not have an advantage when recognising emotions from facial expressions, and with recent evidence showing that musicians are not better than non-musicians in recognising emotions from people's movement and gestures 11 . Taken together, the available evidence indicates that the musicians' advantage in recognising emotion may be confined to the sound domain. ...
Article
Full-text available
Music involves different senses and is emotional in nature, and musicians show enhanced detection of audio-visual temporal discrepancies and emotion recognition compared to non-musicians. However, whether musical training produces these enhanced abilities or if they are innate within musicians remains unclear. Thirty-one adult participants were randomly assigned to a music training, music listening, or control group who all completed a one-hour session per week for 11 weeks. The music training group received piano training, the music listening group listened to the same music, and the control group did their homework. Measures of audio-visual temporal discrepancy, facial expression recognition, autistic traits, depression, anxiety, stress and mood were completed and compared from the beginning to end of training. ANOVA results revealed that only the music training group showed a significant improvement in detection of audio-visual temporal discrepancies compared to the other groups for both stimuli (flash-beep and face-voice). However, music training did not improve emotion recognition from facial expressions compared to the control group, while it did reduce the levels of depression, stress and anxiety compared to baseline. This RCT study provides the first evidence of a causal effect of music training on improved audio-visual perception that goes beyond the music domain.
... The positive associations between music training and emotion processing do not seem to be restricted to musical sounds, as they also extend to vocal emotions (Fuller et al., 2014; Lima & Castro, 2011; Pinheiro et al., 2015; Thompson et al., 2004). However, similar effects are not observed in the visual modality, for faces or multimodal stimuli (Correia et al., 2020; Farmer et al., 2020). The positive link between music training and vocal emotional processing is supported by evidence from electrophysiological (Pinheiro et al., 2015), fMRI (Park et al., 2015), and behavioral (Lima & Castro, 2011; Parsons et al., 2014; Thompson et al., 2004) studies. ...
Article
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians’ brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
... Furthering this idea, we tested two groups of (self-reported) musicians and non-musicians. A wealth of empirical evidence has shown that musical training enhances auditory and pitch processing [59] and the ability to recognize emotions in music [60], and that these effects transfer to recognizing emotions in speech [10,61,62]. It could therefore be expected that musicians should perform differently from non-musicians, either because of an enhanced ability to perceive subtle vocal cues in complex music mixes, or because of greater familiarity with e.g. the instrumental ...
Article
Full-text available
A wealth of theoretical and empirical arguments has suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but has done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we take advantage of the recent availability of computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this applied not only to singing voice with and without musical background, but also to purely instrumental material. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
... Most cross-sectional studies ask how trained and untrained listeners recognize vocal emotions, using prosodic stimuli in the majority of cases (e.g., Correia et al., 2020; Dmitrieva et al., 2006; Farmer et al., 2020; Fuller et al., 2014; Lima & Castro, 2011; Park et al., 2015; Pinheiro et al., 2015), but also melodic analogues of emotional prosody (Thompson et al., 2004; Trimmer & Cuddy, 2008) and purely nonverbal vocalizations (Parsons et al., 2014; Young et al., 2012). Only a few studies examined emotion recognition for other modalities, including faces (Farmer et al., 2020; Weijkamp & Sadakata, 2016) and audiovisual stimuli (Farmer et al., 2020; Weijkamp & Sadakata, 2016). The focus is typically on the recognition of specific emotions (e.g., happiness, sadness), evaluated via forced-choice tasks in which participants select the emotion being expressed by each stimulus from a list of alternatives. ...
Article
There is widespread interest in the possibility that music training enhances nonmusical abilities. This possibility has been examined primarily for speech perception and domain-general abilities such as IQ. Although social and emotional processes are central to many musical activities, transfer from music training to socioemotional skills remains underexplored. Here we synthesize results from studies examining associations between music training and emotion recognition in voices and faces. Enhancements are typically observed for vocal emotions but not for faces, although most evidence is cross-sectional. These findings are discussed considering the design features of the studies. Future research could explore further the neurocognitive mechanisms underlying musician-related differences in emotion recognition, the role of predispositions, and the implications for broader aspects of socioemotional functioning.
Preprint
Full-text available
The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
Article
Music training is generally assumed to improve perceptual and cognitive abilities. Although correlational data highlight positive associations, experimental results are inconclusive, raising questions about causality. Does music training have far-transfer effects, or do preexisting factors determine who takes music lessons? All behavior reflects genetic and environmental influences, but differences in emphasis—nature versus nurture—have been a source of tension throughout the history of psychology. After reviewing the recent literature, we conclude that the evidence that music training causes nonmusical benefits is weak or nonexistent, and that researchers routinely overemphasize contributions from experience while neglecting those from nature. The literature is also largely exploratory rather than theory driven. It fails to explain mechanistically how music-training effects could occur and ignores evidence that far transfer is rare. Instead of focusing on elusive perceptual or cognitive benefits, we argue that it is more fruitful to examine the social-emotional effects of engaging with music, particularly in groups, and that music-based interventions may be effective mainly for clinical or atypical populations. Published in the Annual Review of Psychology, Volume 75 (January 2024).