Conference Paper

Creating New Interfaces for Musical Expression


Abstract

Advances in digital audio technologies have led to a situation where computers play a role in most music production and performance. Digital technologies offer unprecedented opportunities for the creation and manipulation of sound; however, the flexibility of these new technologies implies an often confusing array of choices for musical composers and performers. Some artists have faced this challenge by using computers directly to create music, leading to an explosion of new musical forms. However, most would agree that the computer is not a musical instrument in the same sense as traditional instruments, and it is natural to ask 'how to play the computer' using interface technology appropriate for human brains and bodies. In 2001, we organized the first workshop on New Interfaces for Musical Expression (NIME) to attempt to answer this question by exploring connections with the better established field of human-computer interaction. This course summarizes what has been learned at NIME, which has been held annually since that first workshop. We begin with an overview of the theory and practice of new musical interface design, asking what makes a good musical interface and whether there are any useful design principles or guidelines available. We will also discuss topics such as the mapping from human action to musical output, and control intimacy. Practical information about the tools for creating musical interfaces will be given, including an overview of sensors and microcontrollers, audio synthesis techniques, and communication protocols such as Open Sound Control (and MIDI). The remainder of the course will consist of several specific case studies representative of the major themes of the NIME conference, including augmented and sensor-based instruments, mobile and networked music, and NIME pedagogy.
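Since no code accompanies this abstract, the following is a minimal illustrative sketch (not from the course materials) of the kind of communication the course covers: packing a single sensor value into an Open Sound Control 1.0 message by hand and sending it over UDP. The OSC path "/synth/cutoff", host, and port are placeholder assumptions for whatever synthesis engine is listening.

```python
# Minimal OSC 1.0 message encoding and UDP send (illustrative sketch only).
import socket
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC-string: ASCII bytes, null-terminated, padded to a multiple of 4."""
    data = s.encode("ascii") + b"\x00"
    pad = (4 - len(data) % 4) % 4
    return data + b"\x00" * pad

def osc_message(address: str, *floats: float) -> bytes:
    """Build an OSC message whose arguments are all 32-bit floats."""
    type_tags = "," + "f" * len(floats)
    payload = b"".join(struct.pack(">f", f) for f in floats)  # big-endian float32
    return osc_string(address) + osc_string(type_tags) + payload

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # e.g. a sensor reading mapped to a filter cutoff, sent to a local synth
    sock.sendto(osc_message("/synth/cutoff", 0.73), ("127.0.0.1", 9000))
```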
... What exactly musical expression means is a much-debated topic when considering the design of musical instruments [73,126]. Fels describes musical expression as occurring when a player intentionally expresses themselves through the medium of sound [83]. The term is to a great extent bound up with the Western concert tradition: the Oxford Dictionary of Music defines it as "that part of a composer's music such as subtle nuances of dynamics which he has no full means of committing to paper and must be left to the artistic perception and insight of the executant" [176, p. 216]. ...
... Control Intimacy is a term first introduced by Moore when describing the musical dysfunctions of the communication protocol MIDI (Musical Instrument Digital Interface), which at the time of writing his article (1988) was growing in usage and quickly becoming the standard way of interfacing with digital musical systems and instruments [252]. Intimacy has since become a central criterion of DMI design and has been expanded by many in the field [83,166,251,375]. Almost thirty years later MIDI is still by far the most commonly used protocol in digital musical systems, despite the advances offered by Open Sound Control (OSC) [383]. ...
... Fels posits that the end goal of musical instrument design should be for the player to have a high degree of intimacy, to the extent that they embody the instrument: at this point the instrument behaves like an extension of their body - there is a transparent relationship between control and sound [83]. This allows intent and expression to flow from the player, through the instrument, and to the sound, creating music. ...
Thesis
Full-text available
The sense of touch plays a fundamental role in musical performance: alongside hearing, it is the primary sensory modality used when interacting with musical instruments. Learning to play a musical instrument is one of the most developed haptic cultural practices, and within acoustic musical practice at large, the importance of touch and its close relationship to virtuosity and expression is well recognised. With digital musical instruments (DMIs) – instruments involving a combination of sensors and a digital sound engine – touch-mediated interaction remains the foremost means of control, but the interfaces of such instruments do not yet engage with the full spectrum of sensorimotor capabilities of a performer. This poses compelling questions for digital instrument design: how does the nuance and richness of physical interaction with an instrument manifest itself in the digital domain? Which design parameters are most important for haptic experience, and how do these parameters affect musical performance? Built around three practical studies which utilise DMIs as technology probes, this thesis addresses these questions from the point of view of design, of empirical musicology, and of tangible computing. In the first study musicians played a DMI with continuous pitch control and vibrotactile feedback in order to understand how dynamic tactile feedback can be implemented and how it influences musician experience and performance. The results suggest that certain vibrotactile feedback conditions can increase musicians’ tuning accuracy, but also disrupt temporal performance. The second study examines the influence of asynchronies between audio and haptic feedback. Two groups of musicians, amateurs and professional percussionists, were tasked with performing on a percussive DMI with variable action-sound latency. Differences between the two groups in terms of temporal accuracy and quality judgements illustrate the complex effects of asynchronous multimodal feedback. In the third study guitar-derivative DMIs with variable levels of control richness were observed with non-musicians and guitarists. The results from this study help clarify the relationship between tangible design factors, sensorimotor expertise and instrument behaviour. This thesis introduces a descriptive model of performer-instrument interaction, the projection model, which unites the design investigations from each study and provides a series of reflections and suggestions on the role of touch in DMI design.
... Throughout the years NIME has developed a curriculum encompassing diverse design, implementation, and evaluation practices for DMI development [22]. Here we will limit our discussion to learning for musical performance. ...
... NIME's main pedagogy already demonstrates enactive and ecological characteristics. However, this pedagogy is mainly directed towards the design and creation of DMIs [22], and not towards developing musical performance with these systems. Our proposal, instead, is oriented towards the design of musical pedagogies for musical performance as an approach to establish the prolonged use of a DMI. ...
Conference Paper
Full-text available
This paper addresses the prevailing longevity problem of digital musical instruments (DMIs) in NIME research and design by proposing a holistic system design approach. Despite recent efforts to examine the main contributing factors behind DMIs falling into obsolescence, attempts to remedy this issue largely focus on the artifacts establishing themselves, on their design processes, and on their technologies. Few existing studies have attempted to proactively build a community around technological platforms for DMIs while bearing in mind the social dynamics and activities necessary for a budding community. We observe that such attempts, while important in their undertaking, are limited in scope. In this paper we argue that achieving longevity must be addressed beyond the device itself and must tackle broader ecosystemic factors. We hypothesize that a long-lived DMI design must not only take into account a target community but may also require a non-traditional pedagogical system that sustains artistic practice.
... This enables a tailoring to an individual's specific needs and capabilities, both in terms of how they can interact with the system (sensor inputs, gestural capability, or other ways the individual can provide energy to the system), and what the system provides back (the feedback mechanism and also the content of that mechanism). Making interactions meaningful is central: the mapping between the player's control of the instrument and the sound produced is one of the most dominant issues in the creation of new musical interfaces [21], and each layer of a system allows meaning to be added while also allowing the system to provide support where needed in a flexible way. ...
... There may be several practitioners involved in the use of the technology, from music leaders to music therapists, to teachers and teaching assistants, all with various goals. Sessions could focus on "education in music, education through music, music therapy, or music as a leisure activity" [6, p. 11], with goals to play as an ensemble, to feel a sense of intimacy with an instrument [21], to provide a therapeutic or educational experience, or simply to play for fun. There is a need for user-friendly systems that understand the requirements and attitudes of facilitators in order to be inviting to use [7]. ...
Conference Paper
Full-text available
Music technology can provide unique opportunities to allow access to music making for those with complex needs in special educational needs (SEN) settings. Whilst there is a growing trend of research in this area, technology has been shown to face a variety of issues leading to underuse in this context. This paper reviews issues raised in the literature and in practice for the use of music technology in SEN settings. The paper then reviews existing principles and frameworks for designing digital musical instruments (DMIs). The reviews of literature and current frameworks are then used to inform a set of design considerations for instruments for users with complex needs and in SEN settings. 18 design considerations are presented with connections to literature and practice. An implementation example including future work is presented, and finally a conclusion is offered. ACM Classification: Human-centered computing~HCI theory, concepts and models; Human-centered computing~Interface design prototyping; Applied computing~Sound and music computing; Hardware~Sound-based input / output; Hardware~Haptic devices
... Since early works, a large number of different interactions and technologies have been explored in the conception of new musical interfaces [12,17]. The use of gestural input parameters to control music in real-time is one of the explored approaches [23]. ...
Chapter
Economic growth and social transformation in the 21st century are largely based on creativity. To help with the development of this skill, the concept of Creativity Support Tools (CST) was proposed. In this paper, we introduce Musical Brush (MB), an artistic mobile application whose main focus is to allow novices to improvise music while creating drawings. We investigated different types of interactions and audiovisual feedback in the context of a mobile application that combines music with drawings in a natural way, measuring their impact on creativity support. In this study, we tested different user interactions with real-time sound generation, including 2D drawings, three-dimensional device movements, and visual representations in Augmented Reality (AR). A user study was conducted to explore the support for creativity of each setup. Results showed the suitability of associating Musical Brush with augmented reality for creating sounds and drawings, as a tool that supports exploration and expressiveness.
... Since early works, a large number of different interactions and technologies have been explored in the conception of new musical interfaces [8]. Among the many possibilities, we highlight the concept of combining drawings with music. ...
Poster
Full-text available
The 21st century's economic growth and social transformation are largely based on creativity. To help with the development of this skill, the Creativity Support Tools (CST) concept was proposed. Here, we introduce Musical Brush, an artistic CST mobile application for music and drawing improvisation. We investigated different types of interaction designs and audiovisual feedback, measuring their impact on creativity support. A user study with 26 subjects was conducted to verify the Creativity Support Index (CSI) score of each design. Results showed the suitability of associating Musical Brush with Augmented Reality (AR) for creating new sounds and drawings.
... These factors can combine to create a significant delay between performer action and resultant sound and impede what Wessel and Wright (2002) describe as the development of control intimacy between performer and instrument. Control intimacy has been described by Fels (2004) as the perceived match of the behavior of an instrument and a performer's control of that instrument, a concept that is connected to the notion of ergoticity (Luciani, Florens, Couroussé, & Castet, 2009): the preservation of energy through both digital and physical components of a system, maintaining an ecologically valid causal link between action and sound to foster an embodied engagement with the instrument (Essl & O'Modhrain, 2006). ...
Article
Full-text available
Asynchrony between tactile and auditory feedback (action-sound latency) when playing a musical instrument is widely recognized as disruptive to musical performance. In this paper we present a study that assesses the effects of delayed auditory feedback on the timing accuracy and judgments of instrument quality for two groups of participants: professional percussionists and non-percussionist amateur musicians. The amounts of delay tested in this study are relatively small in comparison to similar studies of auditory delays in a musical context (0 ms, 10 ms, 10 ms ± 3 ms, 20 ms). We found that both groups rated the zero latency condition as higher quality for a series of quality measures in comparison to 10 ms ± 3 ms and 20 ms latency, but did not show a significant difference in rating between 10 ms latency and zero latency. Professional percussionists were more aware of the latency conditions and showed less variation of timing under the latency conditions, although this ability decreased as the temporal demands of the task increased. We compare our findings from each group and discuss them in relation to latency in interactive digital systems more generally and to experimentally similar work on sensorimotor control and rhythmic performance.
... Musical Expression (NIMEs) (Fels, 2004) have been explored that share a common nature based on real-time musical interaction but are characterized as being highly computationally demanding. It is thus argued that designing technologies for a music performance space can inform technology design in other artistic domains (Buxton, 1997). ...
Conference Paper
Full-text available
Date/Time: 27 November 2017, 09:00am - 10:45am
Venue: Nile 3
Location: Bangkok Int'l Trade & Exhibition Centre (BITEC)
Summary: This course introduces the field of musical interface design and implementation. Participants will learn key aspects of the theory and practice of designing original interactive music technology, with case studies including augmented and sensor-based instruments, sonographic instruments, mobile, and networked music-making.
Intended Audience: The course is intended to have a broad appeal to interaction designers, game designers, artists, and academic and industry researchers who have a general interest in interaction techniques for multi-modal sonic and live audio-visual expression.
Organizer: Dr. Michael Lyons, Professor, Image Arts and Sciences, Ritsumeikan University. Ph.D. Physics, University of British Columbia. Michael has worked in computational neuroscience, pattern recognition, cognitive science, and interactive arts. He was a Research Fellow at Caltech (1992/3), and a Lecturer and Research Assistant Professor at the University of Southern California (1994/96). From 1996-2007 he was a Senior Research Scientist at the Advanced Telecommunications Research International Labs in Kyoto, Japan. He joined the new College of Image Arts and Sciences, Ritsumeikan University, as Full Professor, in 2007. Michael co-founded the New Interfaces for Musical Expression conference.
Slides from a course given by Michael Lyons at SIGGRAPH Asia 2017 in Bangkok, Thailand. Co-author Sidney Fels contributed to the content.
Conference Paper
Full-text available
We describe a simple, computationally light, real-time system for tracking the lower face and extracting information about the shape of the open mouth from a video sequence. The system allows unencumbered control of audio synthesis modules by action of the mouth. We report work in progress to use the mouth controller to interact with a physical model of sound production by the avian syrinx.
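As a loose, present-day re-imagining of this kind of mouth controller (not the authors' system), the sketch below estimates a rough "mouth openness" value from a grayscale camera frame using OpenCV's bundled Haar face detector and a dark-pixel heuristic over the lower face region. The thresholds, region proportions, and mapping are assumptions; the resulting 0..1 value would then drive a synthesis parameter, for example over OSC.

```python
# Illustrative sketch: crude mouth-openness estimate from a webcam (not the paper's method).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_openness(gray_frame) -> float:
    """Return a rough 0..1 estimate of how open the mouth is."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = faces[0]
    # Lower third of the face, central half of its width, roughly covers the mouth.
    lower = gray_frame[y + 2 * h // 3 : y + h, x + w // 4 : x + 3 * w // 4]
    # An open mouth appears as a dark cavity; count dark pixels in that region.
    dark = (lower < 60).sum()
    return min(1.0, dark / (0.25 * lower.size + 1e-9))

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(f"mouth openness ~ {mouth_openness(gray):.2f}")  # would drive a synth parameter
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```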
Article
A new application of the well-known process of frequency modulation is shown to result in a surprising control of audio spectra. The technique provides a means of great simplicity to control the spectral components and their evolution in time. Such dynamic spectra are diverse in their subjective impressions and include sounds both known and unknown.
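To make the technique concrete, here is a minimal, non-real-time Python sketch of simple FM synthesis in the spirit described above: a carrier whose phase is modulated by a single sinusoid, with the modulation index swept over time so the spectrum evolves during the note. The specific frequencies, envelope, and filename are illustrative choices, not values from the paper.

```python
# Simple two-operator FM tone written to a WAV file (illustrative sketch).
import numpy as np
import wave

sr = 44100
t = np.arange(0, 2.0, 1 / sr)          # 2 seconds of samples
fc, fm = 440.0, 220.0                  # carrier and modulator in a 2:1 ratio
index = np.linspace(8.0, 0.0, t.size)  # modulation index decays over the note
signal = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
signal *= np.exp(-2.0 * t)             # simple amplitude envelope

with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                  # 16-bit samples
    f.setframerate(sr)
    f.writeframes((signal * 0.8 * 32767).astype(np.int16).tobytes())
```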
Article
In the past, the synthesis of singing has been investigated from different points of view. These methods include the playback of digitally recorded phonemes, control of parameters that make up the music, and manipulation of assorted parameters, but all of these systems and techniques have their drawbacks. In order to address these shortcomings, a digital waveguide filter was used to develop a physically parameterized model of the human vocal tract. On the basis of this model, two voice synthesis systems were developed. One is a real-time digital signal processing interface program called SPASM. The graphical features of this software not only display the shape of the vocal tract model but also allow the user to experiment by changing parameters such as pitch and vibrato. The other is a text-driven software synthesis program; simulation is the key feature of this software, achieved through C-language function calls.
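The vocal tract model above is built from digital waveguides. As a much simpler illustration of the same traveling-wave, delay-line idea (a Karplus-Strong plucked string rather than the SPASM vocal tract itself), the Python sketch below uses a delay line whose length sets the pitch, re-circulated through a damped averaging (lowpass) filter; all parameter values are arbitrary.

```python
# Karplus-Strong plucked string: the simplest one-dimensional waveguide (illustrative sketch).
import numpy as np

def pluck(freq=220.0, sr=44100, duration=1.5, damping=0.996):
    n = int(sr / freq)                      # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)      # noise burst "excites" the string
    out = np.empty(int(sr * duration))
    for i in range(out.size):
        out[i] = line[0]
        # Averaging adjacent samples acts as the lowpass reflection filter.
        new_sample = damping * 0.5 * (line[0] + line[1])
        line = np.roll(line, -1)
        line[-1] = new_sample
    return out

samples = pluck()  # one plucked note, ready to be written to a file or played back
```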
Article
Granular synthesis is a unique method of achieving complex sounds by generating high densities of small grains, on the order of 10-20 ms in duration. Current digital signal processing (DSP) hardware offers the potential for real-time implementation of this technique by dividing the responsibility for calculation between the DSP and various levels of controllers. This type of implementation can be regarded as an instance of real-time composition, and it is therefore suggestive of a trend towards systems that combine the complexity associated with studio composition with the spontaneity of live performance. This article describes a real-time implementation of granular synthesis and signal processing.
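The article's system is a real-time DSP implementation; purely as an offline illustration of the core idea (with all parameters, including the source signal and grain density, chosen arbitrarily), the Python sketch below scatters short Hann-windowed grains of 10-20 ms from a source signal into an output buffer.

```python
# Offline granular synthesis: dense clouds of short windowed grains (illustrative sketch).
import numpy as np

sr = 44100
source = np.sin(2 * np.pi * 330 * np.arange(0, 1.0, 1 / sr))  # any source signal works
out = np.zeros(2 * sr)                                         # 2 seconds of output
rng = np.random.default_rng(0)

for _ in range(4000):                          # grain density of ~2000 grains per second
    dur = int(rng.uniform(0.010, 0.020) * sr)  # 10-20 ms grain
    src_pos = rng.integers(0, source.size - dur)
    out_pos = rng.integers(0, out.size - dur)
    grain = source[src_pos:src_pos + dur] * np.hanning(dur)  # windowed to avoid clicks
    out[out_pos:out_pos + dur] += grain

out /= np.max(np.abs(out))                     # normalize to avoid clipping
```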