Figure - available from: Frontiers in Psychology
Ecological setting of the study: Studio Acusticum Concert Hall.

Source publication
Article
Full-text available
Musical performance is a multimodal experience, for performers and listeners alike. This paper reports on a pilot study which constitutes the first step toward a comprehensive approach to the experience of music as performed. We aim to bridge the gap between qualitative and quantitative approaches by combining methods for data collection. The pu...

Citations

... Considering the unfeasibility of focusing on each movement while playing, professional musicians engage in anticipatory mental imagery to support their performance (Bishop & Goebl, 2017; Keller, 2012). Frequently, the imagined musical intentions generate associated body movements, as confirmed by several studies (Demos et al., 2017; Desmet et al., 2012; Massie-Laberge et al., 2019; Thompson & Luck, 2012; Visi et al., 2020). Further, expert practitioners use somaesthetic awareness to shift between reflective and unreflective modes of body attention (Toner & Moran, 2015), whereas novices benefit from directing attention to movement execution (Beilock et al., 2004). ...
... According to embodied music cognition theories, the human body mediates meaning formation during music performance and perception (Leman, 2008, 2016; Leman & Maes, 2014; Lesaffre et al., 2017). Multiple studies establishing relationships between expressive gestures and musical intentions support that performers rely on sensorimotor mechanisms to encode expression into sound (Demos et al., 2017; Desmet et al., 2012; Leman & Maes, 2014; Massie-Laberge et al., 2019; Thompson & Luck, 2012; Visi et al., 2020). Further, clarinet (Wanderley et al., 2005) and piano (Massie-Laberge et al., 2019; Thompson & Luck, 2012) players commented that musical characteristics influence their movement patterns. ...
Article
Full-text available
Quantitative studies demonstrate that performers’ gestures reflect technical, communicative, and expressive aspects of musical works in solo and group performances. However, musicians’ perspectives and experiences toward body movement are little understood. To address this gap, we interviewed 20 professional and pre-professional saxophone players with the aims of: (1) identifying factors influencing body movement; (2) understanding how body movement is approached in instrumental pedagogy contexts; and (3) collecting ideas about the impact of movements on performance quality. The qualitative thematic analysis revealed that musical features (i.e., musical character, dynamics) constitute a preponderant influencing factor in musicians’ body behavior, followed by previous experiences and physical and psychological characteristics. In the pedagogical dimension, participants presented an increased awareness of the importance of body movement compared to their former tutors, describing in-class implementation exercises and promoting reflection with their students. Still, a lack of saxophone-specific scientific knowledge was highlighted. Regarding performance quality, participants discussed the role of movement in facilitating performers’ execution (i.e., sound emission, rhythmical perception) and enhancing the audience’s experience. We provide insights into how professionals conceive, practice, and teach motor and expressive skills, which can inspire movement science and instrumental embodied pedagogy research.
... However, in driving the architecture, the degree of within-class compactness is also key to measuring the success of the drive architecture. Therefore, increasing the distance between classes and increasing the compactness within classes are the goals of our drive architecture [2][3][4][5]. To address this, we improved DSC and KNNG by taking the distance information between points into consideration, obtaining new measurement methods: density-aware DSC and density-aware KNNG. ...
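The snippet above states the optimisation target: larger between-class distances and tighter within-class compactness. As a generic, centroid-based illustration of those two quantities (not the cited paper's density-aware DSC or KNNG measures; the function names and the `classes` toy data are hypothetical):

```python
from itertools import combinations

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(points):
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

def within_class_compactness(classes):
    """Mean distance of each point to its own class centroid (lower = tighter)."""
    dists = []
    for points in classes.values():
        c = centroid(points)
        dists.extend(euclidean(p, c) for p in points)
    return sum(dists) / len(dists)

def between_class_distance(classes):
    """Mean pairwise distance between class centroids (higher = better separated)."""
    cents = [centroid(points) for points in classes.values()]
    pairs = list(combinations(cents, 2))
    return sum(euclidean(a, b) for a, b in pairs) / len(pairs)

# Two well-separated toy classes of 2-D points.
classes = {
    "A": [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)],
    "B": [(3.0, 3.0), (3.1, 2.8), (2.9, 3.2)],
}
print(within_class_compactness(classes))  # small: each class is tight
print(between_class_distance(classes))    # large: the classes are far apart
```

A drive architecture of the kind described would push the first quantity down and the second up; the density-aware variants in the paper additionally weight these by local point-distance information.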
... To alleviate this problem, the order of the original points was shuffled in the experiment. The shuffling operation completely disrupts the original order. We choose to randomly generate x1 = udt(n) ...
Article
Full-text available
This paper combines automatic piano composition with quantitative perception, extracts note features from the demonstration audio, and builds a neural network model to complete automatic composition. First, in view of the diversity and complexity of the data collected in the quantitative perception of piano automatic composition, the energy-efficiency-related state data of the automatic composition operation is collected and processed. Second, a perception-data-driven energy-efficiency evaluation and decision-making method is proposed. This method is based on time-series index data: after determining the subjective time weight through time entropy, the time-dimension factor is introduced, and the subjective time weight is then adjusted by the minimum-variance method. Next, we consider the impact of the perception period on perception efficiency and accuracy, calculate and dynamically adjust the perception period based on the running data, consider the needs of the perception object in different scenarios, and update the perception object in real time during operation. Finally, combined with the level weights determined by the data-driven architecture, the dynamic manufacturing-capability index and energy-efficiency index of the equipment are obtained. The energy-efficiency evaluation of the manufacturing system under the data-driven architecture demonstrates the feasibility and scientific validity of the evaluation method. The simulation results show that it can reduce the perception overhead while ensuring perception efficiency and accuracy.
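The weighting step in the abstract rests on the entropy idea: indicators whose values vary more across the observations carry more information and deserve a larger weight. A rough sketch of the generic entropy-weight method under that reading (the paper's time-dimension factor and minimum-variance adjustment are not reproduced here; `entropy_weights` and the toy matrix are hypothetical):

```python
import math

def entropy_weights(matrix):
    """Generic entropy-weight method over a periods x indicators matrix of
    non-negative values. Indicators with more dispersion have lower entropy
    and therefore receive larger weights."""
    n = len(matrix)            # number of periods (rows)
    m = len(matrix[0])         # number of indicators (columns)
    k = 1.0 / math.log(n)      # normaliser so entropy lies in [0, 1]
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # Shannon entropy of the normalised column (0 * log 0 treated as 0)
        e = -k * sum(p * math.log(p) for p in probs if p > 0)
        divergences.append(1.0 - e)
    s = sum(divergences)
    return [d / s for d in divergences]

# Three periods (rows), two indicators (columns); the second varies more.
data = [[0.5, 0.1],
        [0.5, 0.5],
        [0.5, 0.9]]
print(entropy_weights(data))  # second indicator gets the larger weight
```

A constant indicator (column one above) carries no information and collapses to a near-zero weight, which is exactly the behaviour the entropy criterion is meant to encode.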
... Forthcoming studies will be built on method-development for multimodal data collection, carried out by the GEMM-cluster (see further [21]), and will thereby provide material for a more comprehensive analysis. This will, among other perspectives, allow for a further study of the perception of variable space in telematic performance. ...
Chapter
Full-text available
In this chapter, we describe a series of studies related to our research on using gestural sonic objects in music analysis. These include developing a method for annotating the qualities of gestural sonic objects on multimodal recordings; ranking which features in a multimodal dataset are good predictors of basic qualities of gestural sonic objects using the Random Forests algorithm; and a supervised learning method for automated spotting designed to assist human annotators. The subject of our analyses is a performance of Fragmente 2, a choreomusical composition based on the Japanese composer Makoto Shinohara's solo piece for tenor recorder Fragmente (1968). To obtain the dataset, we carried out a multimodal recording of a full performance of the piece and obtained synchronised audio, video, motion, and electromyogram (EMG) data describing the body movements of the performers. We then added annotations on gestural sonic objects through dedicated qualitative analysis sessions. The task of annotating gestural sonic objects on the recordings of this performance has led to a meticulous examination of related theoretical concepts to establish a method applicable beyond this case study. This process of gestural sonic object annotation, like other qualitative approaches involving manual labelling of data, has proven to be very time-consuming. This motivated the exploration of data-driven, automated approaches to assist expert annotators.
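The chapter ranks multimodal features with the Random Forests algorithm. As a stand-in illustration of the same feature-ranking idea, here is a sketch of permutation importance with a simple nearest-centroid classifier (not the authors' pipeline; all names and the toy data are hypothetical):

```python
import random

def nearest_centroid_predict(train_X, train_y, x):
    """Classify x by the nearest class centroid (a stand-in for the
    Random Forests model used in the study)."""
    by_label = {}
    for xi, yi in zip(train_X, train_y):
        by_label.setdefault(yi, []).append(xi)
    best, best_d = None, float("inf")
    for label, pts in by_label.items():
        c = [sum(col) / len(col) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_d:
            best, best_d = label, d
    return best

def accuracy(X, y):
    return sum(nearest_centroid_predict(X, y, x) == yi
               for x, yi in zip(X, y)) / len(y)

def permutation_importance(X, y, seed=0):
    """Accuracy drop when one feature column is shuffled (the simple
    classifier is re-fit on the shuffled data): a larger drop means
    the feature matters more for prediction."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [list(row) for row in X]
        for i, v in enumerate(col):
            Xp[i][j] = v
        importances.append(base - accuracy(Xp, y))
    return importances

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 0.7], [0.1, 0.2], [0.2, 0.9],
     [1.0, 0.1], [1.1, 0.8], [0.9, 0.4]]
y = ["still", "still", "still", "gesture", "gesture", "gesture"]
print(permutation_importance(X, y))  # feature 0 shows the larger drop
```

Random Forests offer a built-in importance score along the same lines; the supervised-spotting step described in the chapter would then train on the top-ranked features.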
Chapter
Musicians spend more time practicing than performing live, yet the process of rehearsal has been understudied. This paper introduces a dataset for using AI and machine learning to address this gap. The project observes the progression of pianists learning new repertoire over long periods of time by recording their rehearsals, generating a comprehensive multimodal dataset, the Rach3 dataset, with video, audio, and MIDI for computational analysis. This dataset will help investigate how advanced students and professional classical musicians, particularly pianists, learn new music and develop their own expressive interpretations of a piece.
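For rehearsal recordings of this kind, one basic computational measure is the inter-onset interval between successive notes in the MIDI stream, a crude proxy for local tempo stability across takes. A minimal sketch, assuming note onsets have already been extracted as timestamps in seconds (the event lists are invented, not taken from the Rach3 dataset):

```python
def inter_onset_intervals(onsets):
    """Seconds between successive note onsets; steadier intervals across
    rehearsal takes of the same passage suggest a more settled tempo."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

# Hypothetical onset times (seconds) from two takes of the same bar.
early_take = [0.00, 0.61, 1.25, 1.90]
later_take = [0.00, 0.50, 1.00, 1.50]
print(inter_onset_intervals(early_take))
print(inter_onset_intervals(later_take))  # evenly spaced after practice
```

Tracking how such interval sequences converge over weeks of rehearsal is one example of the longitudinal analysis the dataset is designed to support.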