Jean Bresson's research while affiliated with UPMC and other places

What is this page?


This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.

It was automatically generated by ResearchGate to provide a record of this author's body of work. We create such pages to advance our goal of maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (87)


Embodying Spatial Sound Synthesis with AI in Two Compositions for Instruments and 3-D Electronics
  • Article

January 2024 · 33 Reads · 1 Citation · Computer Music Journal

Thibaut Carpentier · Jean Bresson

The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research, but so far it has not been the focus of human-AI interaction. We respond critically to this trend by seeking to reembody interactive electronics using data derived from natural acoustic phenomena. Two musical works, composed for human soloist and computer-generated live electronics, are intended to situate the listener in an immersive sonic environment in which real and virtual sources blend seamlessly. To do so, we experimented with two contrasting reproduction setups: a surrounding Ambisonic loudspeaker dome and a compact spherical loudspeaker array for radiation synthesis. A large database of measured radiation patterns of orchestral instruments served as a training set for machine learning models to control spatially rich 3-D patterns for electronic sounds. These are exploited during performance in response to live sounds captured with a spherical microphone array and used to train computer models of improvisation and to trigger corpus-based spatial synthesis. We show how AI techniques can be used to exploit complex, multidimensional spatial data in the context of computer-assisted composition and human-computer interactive improvisation.
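
As an illustration of the corpus-based spatial synthesis mentioned above, here is a minimal sketch (not the authors' system) that selects the stored radiation pattern whose audio-descriptor vector is closest to a live analysis frame. The corpus size, the choice of three descriptors, and the first-order Ambisonic (W, X, Y, Z) pattern format are assumptions made for the example.

```python
import numpy as np

# Hypothetical corpus: each entry pairs an audio-descriptor vector (e.g. spectral
# centroid, flatness, energy) with a measured radiation pattern, stored here as
# first-order Ambisonic (W, X, Y, Z) gain coefficients.
corpus_descriptors = np.random.rand(500, 3)   # 500 entries x 3 descriptors
corpus_patterns = np.random.rand(500, 4)      # matching radiation patterns

def select_pattern(live_descriptor):
    """Return the radiation pattern of the corpus entry closest to the live analysis frame."""
    distances = np.linalg.norm(corpus_descriptors - live_descriptor, axis=1)
    return corpus_patterns[np.argmin(distances)]

live_frame = np.array([0.42, 0.17, 0.88])     # one descriptor frame from the live input
pattern = select_pattern(live_frame)          # pattern to apply to the electronic sound
```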


A Polytemporal Model for Musical Scheduling

June 2023 · 11 Reads

This paper describes the temporal model of a scheduler geared towards show control and live music applications. This model relies on multiple inter-related temporal axes, called timescales. Timescales allow scheduling computations using abstract dates and delays, much like a score uses symbolic positions and durations (e.g. bars, beats, and note values) to describe musical time. Abstract time is ultimately mapped onto wall-clock time through time transformations, specified as tempo curves, for which we provide a formalism in terms of differential equations on symbolic position. In particular, our model allows tempo to be specified either as a function of time or as a function of symbolic position, and allows piecewise tempo curves to be built from parametric curves.

Keywords: Symbolic time, Time transformations, Tempo curves, Scheduling
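
As an illustration of the time transformation described above, here is a minimal sketch (not the paper's scheduler) of mapping symbolic position onto wall-clock time when tempo is given as a function of symbolic position: the date of a position s is obtained by integrating dt/ds = 60 / tempo(s). The linear tempo ramp used here is an assumed example, not taken from the paper.

```python
def tempo(s: float) -> float:
    """Assumed example: tempo in BPM as a function of beat position, ramping 60 -> 120 over 16 beats."""
    return 60.0 + 60.0 * min(s, 16.0) / 16.0

def date_of(position: float, step: float = 1e-4) -> float:
    """Wall-clock date (seconds) of a symbolic position, integrating dt/ds = 60 / tempo(s)."""
    t, s = 0.0, 0.0
    while s < position:
        t += step * 60.0 / tempo(s)   # a symbolic step of `step` beats lasts step * 60/tempo(s) seconds
        s += step
    return t

# At a constant 60 BPM, beat 8 would fall at 8 s; under the accelerating ramp it arrives earlier.
print(round(date_of(8.0), 2))   # ~6.49 s
```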


Figure 2. Patch synthesising the bell strokes played at the end of the intermezzo storico.
Figure 3. Zoom on 1 s of the chord-sequence analysis of La Donna è mobile in AudioSculpt.
Figure 4. Processing a sound model to generate a score (MIDI file) for the Disklavier.
Figure 5. MIDI file of the beginning of the first song of the Disklavier in Digital Performer. Notice the spectral difference between f passages (e.g. in the first four 3/4 bars, with more notes in the high register) and pp passages (e.g. bars 5 to 8, with reduced pitch range), as well as the pedal controllers: short sustain-pedal strokes (represented by the dark rectangles) in the f passages, followed in the pp passages by a combination of una corda (wider rectangles) and shorter sustain-pedal strokes.
Figure 6. Synthesis of a messa di voce with OM-Chant. Imaginary voices (Part I, Scene 2).

Electronic dramaturgy and computer-aided composition in Re Orso
  • Article
  • Full-text available

April 2020 · 306 Reads

Re Orso (King Bear) is an opera merging acoustic instruments and electronics. The electronics were realised at IRCAM with the assistance of Carlo Laurenzi. The libretto, written by Catherine Ailloud-Nicolas and Giordano Ferrari, is based on a fable by Arrigo Boito. Every moment of the opera is exclusively inspired by and tightly related to the prescriptions of the libretto and the intimate structure of the drama: there are no vocal, instrumental or electronic sounds that do not have a deep connection to and a musical justification in the dramaturgy. In addition, an important compositional objective was that the electronic material be endowed with a clear dramaturgic role, in order to be perceived as a character in its own right (actually, several characters) with a personality that develops during the action. Preceded by a short exordium, Re Orso is divided into two parts of approximately 45' (five scenes) and 30' (three scenes), separated by an intermezzo storico. The ensemble leaves the pit at the end of the first part and the singers remain alone with the accompaniment of electronic sounds. Voice and electronics are therefore essential elements of the dramaturgy and of the composition. Both have been written and organised with computer-aided compositional tools. This text explores some of the representative OpenMusic patches developed for this project.




Musical Gesture Recognition Using Machine Learning and Audio Descriptors

September 2018 · 155 Reads

We report preliminary results of an ongoing project on automatic recognition and classification of musical “gestures” from audio extracts. We use a machine learning tool designed for motion tracking and recognition, applied to labeled vectors of audio descriptors in order to recognize hypothetical gestures formed by these descriptors. A hypothesis is that the classes detected in audio descriptors can be used to identify higher-level/abstract musical structures which might not be described easily using standard/symbolic representations.
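
The project uses a machine learning tool designed for motion tracking and recognition; as a simple stand-in, the sketch below classifies labeled audio-descriptor vectors with a k-nearest-neighbour classifier from scikit-learn. The descriptors, window counts, and gesture labels are invented for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: one row per analysis window, columns are audio
# descriptors (e.g. spectral centroid, flatness, RMS energy); each window is
# annotated with the "gesture" class it belongs to.
X_train = np.random.rand(200, 3)
y_train = np.random.choice(["swell", "tremolo", "attack"], size=200)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

# Classify the descriptor vectors of an unseen audio extract, window by window.
X_new = np.random.rand(10, 3)
print(clf.predict(X_new))   # e.g. ['attack' 'swell' ...]
```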






Citations (58)


... The other, more attuned to ours, is Ycart et al. (2016), which is related to the IRCAM "Open-Music" project. It is also based on a metric tree structure (called "Subdivision Schema"), which is used for steering the quantification process. ...

Reference:

MetricSplit: an automated notation of rhythm aligned with metric structure
A Supervised Approach for Rhythm Transcription Based on Tree Series Enumeration

... The literature [10] mentions the use of electronic components to develop a biocomputer that recognizes and recodes musical content for musical composition. The literature [11] mentions how computers are programmed to generate audio and compositional information in the context of computer-composed music and to respond to the performer with enough flexibility to achieve interaction as a problem to be solved. ...

Interactive Composition: New Steps in Computer Music Research
  • Citing Article
  • January 2017

Journal of New Music Research

... The Gestural Interaction Machine Learning Toolkit (GIMLeT) provides an educational toolkit consisting of a series of Max patches and objects for performing gesture analysis and mapping tasks adopting the interactive machine learning (IML) workflow [19]. It uses a set of Max externals including a package for dynamic communication based on the OSC communication protocol, O.-odot, a framework created for the purpose of providing expressive communication protocols between diverse media systems [3]. Berklee Electro Acoustic Pedagogy (BEAP) is a library of patches included in the commercial distribution of Max and provides a flexible, modular approach which allows users to maintain the main functionality of a common eurorack modular synthesizer from creating simple synthesizers and sequencers to LFOs, CV generators and waveshapers. ...

o.OM: structured-functional communication between computer music systems using OSC and Odot
  • Citing Conference Paper
  • September 2016

... Some aspects of the spatial rendering process were also prototyped using the Spat modules integrated into OM# (Garcia et al. 2016). To explore the possibility of spatial filtration, discussed further below in the Spatial Filtering section, synthesized sounds were filtered into different numbers of frequency bands, and each band was spatialized separately by application of an Ambisonic-encoded radiation pattern from the TUB database. ...

Interactive-Compositional Authoring of Sound Spatialization
  • Citing Article
  • November 2016

Journal of New Music Research

... Recent literature exploring the intersection of Quantum Computing (QC) with creative music practice, of which this book is a prime example, have welcomed the potential increase in speed and computational power offered by future fault tolerant QC machines. In the context of Computer Music (CM), a need for more powerful machines is evident in the limitations that classical machines still present for the realtime (or near realtime) control of compositional processes [2]. ...

Computer-Aided Composition of Musical Processes

Journal of New Music Research

... In this case, the users' goal is to alter sound and they interact with sound, instead of with the help of sound. Work in this field is related to tasks like music composition, mixing and editing [38,88,16]. Adams et al. presented a system that allows exploration and alteration of arbitrary parameters of sound on a touchscreen [2]. ...

Trajectoires: A Mobile Application for Controlling Sound Spatialization
  • Citing Conference Paper
  • May 2016

... In the same OpenMusic programming environment, Garcia et al (2015) used interviews and participatory design to contribute to the development of Open Sound Control (OSC) functionality for a communication protocol that allows various external input devices to control positional parameters, such as the GameTrak controller. Favory et al (2015) then extended this work by investigating the use of a multi-touch tablet. In both of these user studies, observations and participant feedback were collected to propose directions for improving the interaction and visualization. ...

Trajectoires: a mobile application for controlling and composing sound spatialization
  • Citing Article
  • October 2015

... Interestingly a same type of model of nested tori has been calculated by Amiot (2013) for the tori of phases for musical scales. For example: the circle of "fifths" can be positioned at rotoids (composed motions of rotations), that circles or spiralizes each sub-unit of the nested tori, together with inscribed triads, major-third, and minor-third relations. ...

Mathematics and Computation in Music
  • Citing Article
  • January 2011

... There has been interest in using other input devices to improve user interactions. In the same OpenMusic programming environment, Garcia et al (2015) used interviews and participatory design to contribute to the development of Open Sound Control (OSC) functionality for a communication protocol that allows various external input devices to control positional parameters, such as the GameTrak controller. Favory et al (2015) then extended this work by investigating the use of a multi-touch tablet. ...

Towards Interactive Authoring Tools for Composing Spatialization
  • Citing Article
  • March 2015

...  Improvisation: which means that computers play along in harmony with human music players, on the spot. Examples of music improvisation applications can be found in (Déguernel, Vincent, & Assayag, 2018), (Nika, Bouche, Bresson, Chemillier, & Assayag, 2015) and (Dubnov, & Assayag, 2012).  Composition: which means that computers aid either partially or fully in composing new musical pieces. ...

Guided improvisation as dynamic calls to an offline model