TOWARDS THE INTERNET OF MUSICAL THINGS
Luca Turchet
Department of Network
and Systems Engineering
KTH Royal Institute of Technology
turchet@kth.se
Carlo Fischione
Department of Network
and Systems Engineering
KTH Royal Institute of Technology
carlofi@kth.se
Mathieu Barthet
Centre for Digital Music
Queen Mary University of London
m.barthet@qmul.ac.uk
ABSTRACT
In this paper we propose to extend the concept of the Internet of Things to the musical domain, leading to a subfield coined as the Internet of Musical Things (IoMUT). IoMUT refers to the network of computing devices embedded in physical objects (Musical Things) dedicated to the production and/or reception of musical content. Musical Things, such as smart musical instruments or smart devices, are connected by an infrastructure that enables multidirectional communication, both locally and remotely. The IoMUT digital ecosystem gathers interoperable devices and services that connect performers and audiences to support performer-performer and audience-performer interactions that were not possible before. The paper presents the main concepts of IoMUT and discusses the related implications and challenges.
1. INTRODUCTION
Recent years have seen a substantial increase in smart de-
vices and appliances in the home, office and other envi-
ronments that connect wirelessly through local networks
and the Internet. This is the manifestation of the so-called
Internet of Things (IoT), an umbrella term encompassing
the augmentation of everyday physical objects using in-
formation and communication technologies. In the Inter-
net of Things, Things refer to embedded systems that are
connected to the Internet, which are able to interact with
each other and cooperate with their neighbours to reach
common goals [1]. The core technology enabling the IoT
consists of wireless sensor networks (WSNs) [2]. These
are networks of tiny autonomous sensor and actuator nodes
that can be embedded in any physical object for control and
monitoring via wireless transmission. To date, however,
the application of IoT technologies in musical contexts has
received little attention compared to other domains such as
consumer electronics, healthcare, smart cities, and geospa-
tial analysis [3].
In this position paper we propose to extend the concept of IoT to the musical domain, leading to a subfield that we coin as the Internet of Musical Things 1 (IoMUT). Through its technological infrastructure, the IoMUT enables an ecosystem of interoperable devices connecting performers and audiences, to support novel performer-performer, audience-performer and audience-audience interactions.

1 The term "Internet of Musical Things" (or "Internet of Music Things") [4] has previously been employed in the context of specific semantic audio applications as part of the EPSRC FAST-IMPACt project (www.semanticaudio.ac.uk), or as a challenge to develop wearable instruments in hack sessions. However, to the best of our knowledge, what it entails has not been formalised in the wider context of audience/performer interactions, which is the aim of the initiative described in this work.

Copyright: © 2017 Luca Turchet et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Section 2 examines works and technologies related to the
envisioned IoMUT. In Section 3, we argue for the prospect
of a holistic integration of these technologies to create the
envisioned IoMUT. Sections 4 and 5 discuss the impli-
cations and the current major challenges to establish the
IoMUT, and Section 6 concludes the paper.
2. RELATED WORK
In this section we review key related works on which our
IoMUT vision is founded.
2.1 IoT technologies
The design of wireless sensor networks (WSNs) has been the object of much research, both in academia and industry (e.g., [5]). This has resulted in the definition of new communication protocols for WSNs, especially for low data rates and low power consumption, such as IEEE 802.15.4 [6], Zigbee 2, and ROLL 3. Very recently, researchers have been investigating the integration of WSNs with future wireless cellular networks, the so-called 5G networks. The state of the art of this activity is Narrowband IoT (NB-IoT) [7].

2 www.zigbee.org
3 www.ietf.org/dyn/wg/charter/roll-charter.html
Unfortunately, most of these protocols are not well suited to the networking of musical instruments. While the most cutting-edge networks (e.g., the fifth generation of cellular wireless networks) will deliver very high data rates, they will still exhibit communication delays of the order of 25 ms [7]. The interconnection of musical instruments poses stringent requirements in terms of end-to-end latency to transmit and receive messages (which will have to be of the order of milliseconds [8, 9]) and of reliability, i.e., the probability of successful message reception (with bit error probabilities of the order of 10^-10). Such requirements arise not only in the application of IoT to the networked music domain, but also in emerging classes of services, such as telepresence, virtual reality, and
mission-critical control. This is pushing the development of the emerging paradigm of the "Tactile Internet" [10] and of millimeter-wave (mmWave) communications [11, 12]. The Tactile Internet refers to an Internet network where the communication delay between a transmitter and a receiver would be so low that even information associated with human touch, vision, and audition could be transmitted back and forth in real time, making remote interaction experiences, virtual or human, effective, high-quality and realistic. This novel area focuses on the problem of designing communication networks, both wireless and wired, capable of ensuring ultra-low-latency communications, with end-to-end delays of the order of a few milliseconds. The technological vision of the Tactile Internet is that the round-trip time to send information from a source to a destination and back experiences latencies below 5 ms. An important aspect of the Tactile Internet vision is reliable and low-latency wireless communication. One of the emerging technologies for wireless communication is mmWave. This technology uses wireless frequencies within the range of 10 to 300 GHz and offers data rates of gigabits per second over short distances. These frequencies make possible the design of small antennas that can be easily embedded in musical instruments (as proposed in the Smart Instruments concept [13]). Moreover, the high data rates will enable the transmission of multimodal content in high resolution.
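To make the latency constraints above concrete, the following is a minimal back-of-the-envelope sketch of a one-way latency budget for a networked performance, assuming typical values (48 kHz sampling, 64-sample audio buffers, fibre propagation at roughly two thirds of the speed of light, and a couple of milliseconds of network overhead); the figures are illustrative assumptions, not measurements from the cited works.

```python
# Illustrative end-to-end latency budget for a networked music interaction.
# All values are assumptions made for this example, not measurements.

SAMPLE_RATE_HZ = 48_000        # assumed audio sampling rate
BUFFER_SAMPLES = 64            # assumed audio I/O buffer size
FIBRE_SPEED_KM_PER_MS = 200.0  # ~2/3 of the speed of light in optical fibre

def buffering_delay_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One blocking period of the audio interface, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over optical fibre, in milliseconds."""
    return distance_km / FIBRE_SPEED_KM_PER_MS

def one_way_budget_ms(distance_km: float, network_overhead_ms: float = 2.0) -> float:
    """Sender buffering + propagation + assumed network overhead + receiver buffering."""
    return (2 * buffering_delay_ms(BUFFER_SAMPLES, SAMPLE_RATE_HZ)
            + propagation_delay_ms(distance_km)
            + network_overhead_ms)

if __name__ == "__main__":
    for km in (1, 100, 1000):
        print(f"{km:>5} km: ~{one_way_budget_ms(km):.2f} ms one-way")
    # Even with small buffers (~1.3 ms each), 1000 km adds ~5 ms of propagation
    # alone, already consuming the entire Tactile Internet round-trip target.
```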
2.2 Digital ecosystems
Digital ecosystems are the result of recent developments
of digital network infrastructure inheriting from principles
of ecological systems [14]. They are collaborative envi-
ronments where species/agents form a coalition to reach
specific goals. Value is created by making connections
through collective ("swarm") intelligence and by promoting collaboration. The concepts underlying digital ecosystems align well with the goals of the proposed Internet of Musical Things, where multiple actors, smart devices and intelligent services interact to enrich musical experiences.
2.3 Networked music performance systems
Networked music performance (NMP) systems were pro-
posed to enable collaborative music creation over a com-
puter network and have been the object of scientific and
artistic investigations [9,15–17]. A notable example is the
ReacTable [18], a tangible interface consisting of a table
capable of tracking objects that are moved on its surface
to control the sonic output. The ReacTable allows multiple
performers to simultaneously interact with objects either
placed on the same table or on several networked tables in
geographically remote locations.
Networked collaborative music creation can occur over a Wide Area Network (WAN) or a Local Area Network (LAN), and in particular over a wireless one (WLAN) [17], and different methods have been proposed for each of these
configurations. In [9], the authors provide a comprehen-
sive overview of hardware and software technologies en-
abling NMP, including low-latency codecs, frameworks,
protocols, as well as perceptually relevant aspects.
2.4 Participatory live music performance systems
Within interactive arts, participatory live music performan-
ce (PLMP) systems capitalising on information and com-
munication technologies have emerged to actively engage
audiences in the music creation process [19]. These sys-
tems disrupt the traditional unidirectional chain of musi-
cal communication from performers to audience members
who are “passive” from the creative point of view (see e.g.,
[19–22]). Interaction techniques for technology-mediated
audience participation have been proposed exploiting a
wide range of media and sensors, from mobile devices [19–
21, 23, 24] to tangible interfaces [25] such as light sticks
[26] (see [19] for a review and classification framework).
Most PLMP systems require the audience to use a sin-
gle type of device and application. Nevertheless, different
types of devices could be exploited simultaneously to en-
rich interaction possibilities. To date, audience creative
participation has mainly been based on manual controls or
gestures using smartphones (e.g., screen touch, tilt). Ex-
pressive modalities could be increased by tracking physio-
logical parameters (e.g., electrodermal activity, heart rate)
[27, 28] at the individual and collective levels using de-
vices specifically designed for this purpose, or by tracking
more complex audience behaviors and body gestures. Fur-
thermore, means of interaction in current PLMP systems
typically rely on the auditory or visual modalities, while
the sense of touch has scarcely been explored to create
more engaging musical experiences.
2.5 Smart Instruments
Recently, a new class of musical instruments has been pro-
posed, the Smart Instruments [13]. In addition to sensor
and actuator enhancements provided in the so-called aug-
mented instruments [29, 30], Smart Instruments are char-
acterised by embedded computational intelligence, a sound
processing and synthesis engine, bidirectional wireless con-
nectivity, an embedded sound delivery system, and a sys-
tem for feedback to the player. Smart Instruments bring
together separate strands of augmented instruments, net-
worked music and Internet of Things technology, offering direct point-to-point communication with each other and with other portable sensor-enabled devices, without the need for a central mediator such as a laptop. Interoperability
is a key feature of Smart Instruments, which are capable
of directly exchanging musically relevant information with
one another and communicating with a diverse network of
external devices, including wearable technology, mobile
phones, virtual reality headsets and large-scale concert hall
audio and lighting systems.
The company MIND Music Labs 4 has recently developed the Sensus Smart Guitar [13, 31] which, to the best of our knowledge, is the first musical instrument to encompass all of the above features of a Smart Instrument. Such an instrument is based on a conventional electroa-
coustic guitar that is augmented with IoT technologies. It
involves several sensors embedded in various parts of the
instrument, which allow for the tracking of a variety of
4 www.mindmusiclabs.com
gestures of the performer. These are used to modulate the
instrument’s sound thanks to an embedded platform for
digital audio effects. Acoustical sounds are produced by
the instrument itself by means of an actuation system that
transforms the instrument’s resonating wooden body into
a loudspeaker. Furthermore, the Sensus Smart Guitar is
equipped with bidirectional wireless connectivity, which
makes possible the transmission and reception of different
types of data from the instrument to a variety of smart de-
vices and vice versa.
2.6 Smart wearables
The last decade has witnessed a substantial increase in the
prevalence of wearable sensor systems, including electronic
wristbands, watches and small sensor tokens clipped to a
belt or held in a pocket. Many such devices (e.g., Fitbit 5) target the personal health and fitness sectors. They
typically include inertial measurement units (IMUs) for
capturing body movement and sensors for physiological
data (e.g., body temperature, galvanic skin response, heart
rate). Such devices, here referred to as smart wearables,
include wireless communication options to link to mobile
phones or computers. In some cases, a small display, spe-
aker or tactile actuator may be included. A distinguishing
characteristic of wearable devices is their unobtrusiveness:
they are designed to be worn during everyday activity and
to passively collect data without regular intervention by the
user. Such features make these devices suitable for tracking and collecting the body movements and physiological responses of audience members during live concerts. To date, however, this challenge has scarcely been addressed.
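As a rough illustration of how such a wearable could expose audience data to an IoMUT ecosystem, the sketch below streams timestamped IMU and heart-rate readings as JSON datagrams over UDP using only the Python standard library. The sensor-reading functions, message fields, device identifier and collector address are assumptions made for the example, not an existing device interface.

```python
import json
import random
import socket
import time

# Hypothetical collector for audience data; address and port are assumptions.
COLLECTOR = ("192.168.1.50", 9001)

def read_imu() -> dict:
    """Placeholder for a real IMU driver; returns fake accelerometer data."""
    return {"ax": random.uniform(-1, 1), "ay": random.uniform(-1, 1), "az": random.uniform(-1, 1)}

def read_heart_rate() -> float:
    """Placeholder for a real optical heart-rate sensor."""
    return random.uniform(60.0, 120.0)

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        message = {
            "device_id": "wearable-42",    # hypothetical identifier
            "timestamp": time.time(),       # seconds since the epoch
            "imu": read_imu(),
            "heart_rate_bpm": read_heart_rate(),
        }
        sock.sendto(json.dumps(message).encode("utf-8"), COLLECTOR)
        time.sleep(0.02)                    # ~50 Hz update rate (assumed)

if __name__ == "__main__":
    main()
```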
Moreover, to date, the use of wearable devices exploiting the tactile channel in musical applications has been rather limited. Notable exceptions are Rhythm'n'shoes, a wearable shoe-based audio-tactile interface equipped with bidirectional wireless transmission [32], and Mood Glove,
a glove designed to amplify the emotions expressed by mu-
sic in film through haptic sensations [33].
2.7 Virtual reality, augmented reality, and 360° videos
The last two decades have seen an increase in both academic and industrial research in the fields of virtual reality (VR) and augmented reality (AR) for musical applications.
Several virtual musical instruments have been developed
(for a recent review see [34]), while musical instruments
augmented with sensors, such as the Sensus Smart Guitar,
have been used to interactively control virtual reality sce-
narios displayed on head-mounted-displays (HMD) [31].
AR has been used to enhance performance stages for aug-
mented concert experiences, as well as for participatory
performance applications. Mazzanti et al. proposed the Augmented Stage [35], an interactive space for both performers and audience members, where AR techniques are
used to superimpose a performance stage with a virtual en-
vironment, populated with interactive elements. Specta-
tors contribute to the visual and sonic outcome of the per-
formance by manipulating virtual objects via their mobile
5www.fitbit.com
phones. Berthaut et al. proposed Reflets, a mixed-reality
environment that allows one to display virtual content on
stage, such as 3D virtual musical interfaces or visual aug-
mentations of instruments and performers [36]. Poupyrev et al. proposed Augmented Groove, a musical interface for collaborative jamming where AR, 3D interfaces, as well as physical, tangible interaction are used for conducting multimedia musical performances [37].
Immersive virtual environments have been proposed as a
means to provide new forms of musical interactions. For
instance, Berthaut et al. proposed the 3D reactive wid-
gets, graphical elements that enable efficient and simulta-
neous control and visualisation of musical processes, along
with Piivert, an input device developed to manipulate such
widgets, and several techniques for 3D musical interac-
tion [38, 39].
The growing availability of 360° videos has recently opened new opportunities for the entertainment industry: musical content can be delivered through VR devices offering experiences that are unprecedented in terms of immersion and presence. Recent examples include Orchestra VR, a 360° 3D performance featuring the opening of Beethoven's Fifth Symphony performed by the Los Angeles Philharmonic Orchestra, accessible via an app for various VR headsets 6; Paul McCartney's 360° cinematic concert experience app, which allows users to experience recorded concerts with 360° video and 3D audio using Google's Cardboard HMD; Los Angeles radio station KCRW, which launched a VR app for "intimate and immersive musical performances" 7; and FOVE's Eye Play The Piano project, which allows disabled children to play a real acoustic piano using eye tracking technologies embedded in HMDs 8.
3. THE INTERNET OF MUSICAL THINGS
The proposed Internet of Musical Things (IoMUT) relates to the network of physical objects (Musical Things) dedicated to the production of, interaction with, or experience of musical content. Musical Things embed electronics, sensors, data forwarding and processing software, and network connectivity enabling the collection and exchange of data for musical purposes. A Musical Thing can take
the form of a Smart Instrument, a Smart Wearable, or any
other smart device utilised to control, generate, or track re-
sponses to music content. For instance, a Smart Wearable
can track simple movements, complex gestures, as well as
physiological parameters, but can also provide feedback
leveraging the senses of audition, touch, and vision.
The IoMUT arises from (but is not limited to) the holis-
tic integration of the current and future technologies men-
tioned in Section 2. The IoMUT is based on a technolog-
ical infrastructure that supports multidirectional wireless
communication between Musical Things, both locally and
remotely. Within the IoMUT, different types of devices
for performers and audience are exploited simultaneously
to enrich interaction possibilities. This multiplies affor-
dances and ways to track performers’ and audience mem-
6 www.laphil.com/orchestravr
7 www.kcrw.com/vr
8 www.eyeplaythepiano.com/en
bers’ creative controls or responses. The technological in-
frastructure of the IoMUT consists of hardware and soft-
ware (such as sensors, actuators, devices, networks, pro-
tocols, APIs, platforms, clouds, services), but differently
from the IoT, these are specific to the musical case. In
particular, for the most typical case of real-time applica-
tions, the infrastructure ensures communications with low
latency, high reliability, high quality, and synchronization
between connected devices.
Such an infrastructure enables an ecosystem of interop-
erable devices connecting performers with each other, as
well as with audiences. Figure 1 shows a conceptual dia-
gram of the different components that are interconnected
in our vision of the IoMUT ecosystem. As can be seen in the diagram, the interactions between the human actors (performers and audience members) are mediated by Musical Things. Such interactions can be either co-located (see blue arrows), when the human actors are in the same physical space (e.g., concert hall, public space), or remote, when they take place in different physical spaces that are connected by a network (see black arrows).
Co-located interactions can be based on point-to-point communications between a Musical Thing in possession of a performer and a Musical Thing in possession of an audience member (see the blue dashed arrows), but also between one or more Musical Things of the performers and one or more Musical Things of the audience as a whole, and vice versa (see the blue solid line arrow). An example of the latter case could be that of one or more Smart Instruments affecting the lighting system of a concert hall. Regarding remote interactions, these can occur not
only between audience members/performers present at the
concert venue and remote audience members/performers
(see the solid black arrows), but also between remote audi-
ence members/performers (see the black dashed arrows).
The communication between Musical Things is achieved
through APIs (application programming interfaces, indi-
cated in Figure 1 with the small red rectangles), which
we propose could be based on a unified API specifica-
tion (the IoMUT API specification). The interactions men-
tioned above, based on the exchange of multimodal cre-
ative content, are made possible thanks to Services (in-
dicated with the green areas). For instance, these can be
services for creative content analysis (such as multi-sensor
data fusion [40], music information retrieval [41]), services
for creative content mapping (between analysis and de-
vices), or services for creative content synchronization (be-
tween devices). In particular, the implementation of novel
forms of interactions that leverage different sensory modal-
ities makes the definition of Multimodal Mapping Strate-
gies necessary. These strategies consist of transforming, in real time, the sensed data into control data for perceptual feedback (haptic, auditory, visual).
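To ground the idea of a Multimodal Mapping Strategy, here is a minimal sketch of a hypothetical service that maps incoming sensed data (an audience member's heart rate and a performer's vibrato rate, assumed to arrive as JSON datagrams) to control data for a wearable haptic actuator and a stage light. All message fields, value ranges and network addresses are illustrative assumptions, not part of a defined IoMUT API.

```python
import json
import socket

LISTEN_ADDR = ("0.0.0.0", 9001)    # where Musical Things send sensed data (assumed)
HAPTIC_ADDR = ("10.0.0.21", 9010)  # audience wearable actuator (assumed)
LIGHT_ADDR = ("10.0.0.30", 9020)   # stage lighting controller (assumed)

def scale(value: float, lo: float, hi: float) -> float:
    """Clamp and normalise a sensed value into the 0..1 control range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def map_message(msg: dict) -> list:
    """Turn one sensed-data message into zero or more control messages."""
    out = []
    if "heart_rate_bpm" in msg:
        # Faster pulse -> warmer stage light (illustrative mapping choice).
        out.append((LIGHT_ADDR, {"param": "warmth",
                                 "value": scale(msg["heart_rate_bpm"], 60, 140)}))
    if "vibrato_rate_hz" in msg:
        # Performer's vibrato rate -> vibration intensity on the wearable.
        out.append((HAPTIC_ADDR, {"param": "vibration",
                                  "value": scale(msg["vibrato_rate_hz"], 0, 8)}))
    return out

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        data, _ = sock.recvfrom(4096)
        for addr, control in map_message(json.loads(data)):
            sock.sendto(json.dumps(control).encode("utf-8"), addr)

if __name__ == "__main__":
    main()
```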
4. IMPLICATIONS
Thanks to the IoMUT it is possible to reimagine live music performance and music teaching by providing a technological ecosystem that multiplies the possibilities of interaction between audiences, performers, students and teachers, as well as their instruments and machines. This has the potential to revolutionise the way we experience, compose and learn music, and even the way we record it, by adding other modalities to audio. In particular, IoMUT has the po-
tential to make NMP and PLMP more engaging and more
expressive, because it uses a radically novel approach to
address the fundamental obstacles of state-of-the-art meth-
ods, which hinder efficient, meaningful and expressive in-
teractions between performers and between performers and
audiences.
The IoMUT ecosystem can support new performer-performer and audience-performer interactions that were not possible before. Examples of use cases include novel forms
of: jamming (e.g., using apps running on smartphones to
control the sound engine of a smart instrument); enhanced
concert experiences (e.g., audience members in possession
of haptic feedback smart wearables “feel” the vibrato of
a smart violin or the rhythm of a smart drum; the emo-
tional response of audience is used to control the timbre
of Smart Instruments or the behavior of stage equipment
such as projections, smoke machines, lights); remote re-
hearsals (point-to-point audio streaming between Smart In-
struments).
To date, no human-computer interaction system for musical applications enables the many interaction pathways and mapping strategies we envision in the IoMUT: one-to-one, one-to-many, many-to-one, many-to-many, in both co-located and remote situations. In contrast to traditional acoustic instruments, the IoMUT framework makes it possible to establish "composed electronic instruments" where the control interface and the process of sound production are decoupled [42]. New situations of "performative agency" [42]
can be envisioned by letting audience members and the
intelligence derived within the IoMUT digital ecosystem
influence the outcome of specific musical performances.
By applying the IoT to music we envision moving from the traditional musical chain (i.e., composers writing musical content for performers, who deliver it to a unique and "creatively passive" audience) to a musical mesh where the possibilities of interaction are countless. We envision both
common (co-located participatory music performance in
a concert hall) and extreme scenarios (massive open on-
line music performance gathering thousands or hundreds
of thousands of participants in a virtual environment).
By combining such an IoMUT musical mesh model with VR/AR applications it is possible to enable new forms of music learning, co-creation, and immersive and augmented concert experiences. For instance, a violin player could see, through an AR head-mounted display, semantic and visual information about what another performer is playing, or be able to follow the score without having to look at a music stand. An audience member could virtually experience walking on stage or being in the "skin" of a performer. A whole
audience could affect the lighting effects on stage based on
physiological responses sensed with wireless smart wrist-
bands. Such smart wristbands could also be used to under-
stand audiences’ affective responses. An audience could
engage in the music creation process at specific times in a
performance as prepared in a composer's score. A concertgoer could have access to "augmented programme notes" guiding and preparing them prior to the concert experience, letting them learn more about historical and compositional aspects and listen to different renderings interactively, and, during the concert, letting them see the evolution of the score as the music is being played or additional information about the soloist.

[Figure 1. Block diagram of the IoMUT ecosystem. Co-located and remote performers and audience members interact through their Musical Things (Smart Instruments and Smart Devices) and through Services. Legend: red rectangles denote APIs; green areas, Services; blue dashed arrows, co-located interactions between individual actors; blue solid arrows, co-located interactions as a whole; black dashed arrows, interactions between remote actors; black solid arrows, interactions between co-located and remote actors.]
In a different vein, the IoMUT has the potential to gener-
ate new business models that could exploit the information
collected by Musical Things in many ways. For instance,
such information could be used to understand customer be-
haviour, to deliver specific services, to improve products
and concert experiences, and to identify and intercept the
so-called “business moments” (defined by Gartner Inc. 9).
5. CURRENT CHALLENGES
The envisioned IoMUT poses both technological and artis-
tic or pedagogical challenges. Regarding the technological
challenges, it is necessary that the connectivity features of
the envisioned Musical Things go far beyond the state-of-
the-art technologies available today for the music domain.
9 www.gartner.com/newsroom/id/2602820
Between a sensor that acquires a measurement of a specific
auditory, physiological, or gestural phenomenon, and the
receiver that reacts to that reading over a network, there is
a chain of networking and information processing compo-
nents, which must be appropriately addressed in order to
enable acceptable musical interactions over the network.
At present, NMP systems suffer from transmission issues of latency, jitter, synchronization, and audio quality: these hinder the real-time interactions that are essential to collaborative music creation [9]. It is also important
to note that, for optimal NMP experiences, several aspects of musical interaction must be taken into account besides the efficient transmission of audio content. Indeed, during
co-located musical interactions musicians rely on several
modalities in addition to the sounds generated by their in-
struments, which include for instance the visual feedback
from gestures of other performers, related tactile sensa-
tions, or the sound reverberation of the space [43]. How-
ever, providing realistic performance conditions over a net-
work represents a significant engineering challenge due to
the extremely strict requirements in terms of network la-
tency and multimodal content quality, which are needed to
achieve a high-quality interaction experience [44].
The most important technological challenges that are specific to the IoMUT, compared with the more general ones of the IoT, are the requirements of very short latency, high reliability, high audio/multimodal content quality, and synchronization to be ensured for musical communication. This implies the creation of a technological infrastructure that is capable of transmitting multimodal content, and in particular audio, from one musician to one or more others, not only in hi-fi quality but also with a negligible amount of latency, which enables performers to play in a synchronous way. Current IoT scientific methods and technologies do not satisfy these tight constraints needed for the real-time transmission of audio/multimodal content, both at short and at large distances [9, 17]. The envisioned Tactile Internet [10] is expected to solve such issues, at least in part.
Establishing the Tactile Internet vision will, however, require the redesign of networking protocols, from the physical layer to the transport layer. For example, at the physical layer, messages will inevitably have to be short to ensure the desired low latencies, and this will pose restrictions on the data rates. At the routing layer, the protocols will have to be optimized for low delays rather than for high throughput of information. Possible avenues for future research include: the identification of all the components of a high-performance network infrastructure that is physically capable of delivering low latency across the radio as well as the core network segments; the proposal of new distributed routing decision methods capable of minimizing delay by design; and the investigation of new dynamic control and management techniques based on optimization theory, capable of configuring and allocating the proper data plane resources across different domains to build low-latency end-to-end services.
The IoMUT digital ecosystem can benefit from ongoing
work in the semantic web. Metadata related to multimodal
creative content can be represented using vocabulary de-
fined in ontologies with associated properties. Such on-
tologies enable the retrieval of linked data within the eco-
system driven by the needs of specific creative music
services (e.g., a service providing vibrato of notes played
by a guitar player to enact events in the Musical Thing of
an audience member). Alignment and mapping/translation
techniques can be developed to enable “semantic informa-
tion integration” [14] within the IoMUT ecosystem.
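As a rough illustration of how such semantically described creative content could be queried within the ecosystem, the sketch below represents a hypothetical vibrato event as RDF triples and retrieves it with a SPARQL query, assuming the rdflib Python package and an invented iomut: vocabulary. The ontology terms and URIs are illustrative only, not an existing IoMUT ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

# Invented vocabulary for the example; a real deployment would rely on a
# published ontology agreed upon by the Musical Things in the ecosystem.
IOMUT = Namespace("http://example.org/iomut#")

g = Graph()
g.bind("iomut", IOMUT)

event = URIRef("http://example.org/events/vibrato-001")
g.add((event, IOMUT.performedBy, URIRef("http://example.org/performers/guitarist-1")))
g.add((event, IOMUT.vibratoRateHz, Literal(6.2, datatype=XSD.float)))
g.add((event, IOMUT.timestamp, Literal("2017-07-05T20:31:04Z", datatype=XSD.dateTime)))

# A creative music service could retrieve vibrato events like this one to
# drive haptic feedback on an audience member's Musical Thing.
results = g.query("""
    PREFIX iomut: <http://example.org/iomut#>
    SELECT ?event ?rate WHERE {
        ?event iomut:performedBy <http://example.org/performers/guitarist-1> ;
               iomut:vibratoRateHz ?rate .
    }
""")
for row in results:
    print(row.event, float(row.rate))
```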
An important aspect of the IoMUT regards the intercon-
nection of different types of devices. Such devices target
performers or audiences (both co-located and remote), and
are used to generate, track, and/or interpret multimodal
musical content. This poses several other technological
challenges. These include the need for ad-hoc protocols
and interchange formats for musically relevant information
that have to be common to the different Musical Things,
as well as the definition of common APIs specifically de-
signed for IoMUT applications.
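As an illustration of what a common interchange format for musically relevant information might look like, the sketch below defines a hypothetical message envelope and its JSON serialization using only the Python standard library. The field names and the idea of a shared "IoMUT message" envelope are assumptions made for the example, not a proposed standard.

```python
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class IoMUTMessage:
    """Hypothetical envelope shared by all Musical Things (illustrative only)."""
    device_id: str                          # which Musical Thing produced the message
    role: str                               # e.g. "smart-instrument", "smart-wearable"
    content_type: str                       # e.g. "gesture", "physiological", "control"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

def encode(message: IoMUTMessage) -> bytes:
    """Serialize a message for transport (UDP, WebSocket, etc.)."""
    return json.dumps(asdict(message)).encode("utf-8")

def decode(data: bytes) -> IoMUTMessage:
    """Parse a received datagram back into a message object."""
    return IoMUTMessage(**json.loads(data))

# Example: a smart violin announcing its current vibrato rate.
msg = IoMUTMessage(device_id="smart-violin-7", role="smart-instrument",
                   content_type="gesture", payload={"vibrato_rate_hz": 5.8})
assert decode(encode(msg)).payload["vibrato_rate_hz"] == 5.8
```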
Wearable systems present many opportunities for novel
forms of musical interaction, especially involving multi-
ple sensory modalities. Related design challenges concern
the optimization for musical qualities of the sensor and ac-
tuators capabilities of these devices (e.g., temporal preci-
sion, low latency, synchronicity of audio, visual, and tac-
tile modalities). Another related challenge is how to effec-
tively use multiple sensory modalities in PLMP systems.
In particular the haptic one leveraged by Smart Wearables
could have a high impact potential on the musical experi-
ence of audience.
A major challenge for the Multimodal Mapping Strategies approach consists in determining mappings flexible enough to allow for musical participation that is expressive and meaningful to both experts and novices. These mappings could be based on features extracted in real time from sensor data and musical audio analysis. Multi-sensor data fusion techniques [40], which explicitly account for the diversity in acquired data (e.g., in relation to sampling rates, dimensionality, range, and origin), could be exploited for this purpose.
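A minimal sketch of the kind of fusion this paragraph alludes to: two hypothetical sensor streams with different sampling rates and ranges are normalised, resampled to a common control rate by linear interpolation, and combined with a weighted average into a single expressive control value. The weights, rates and ranges are arbitrary assumptions, not values taken from [40].

```python
from bisect import bisect_left

def normalise(samples, lo, hi):
    """Map raw (timestamp, value) samples into the 0..1 range."""
    return [(t, max(0.0, min(1.0, (v - lo) / (hi - lo)))) for t, v in samples]

def value_at(samples, t):
    """Linearly interpolate a sorted stream of (timestamp, value) pairs at time t."""
    times = [ts for ts, _ in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1]
    if i >= len(samples):
        return samples[-1][1]
    (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def fuse(streams_with_weights, t):
    """Weighted average of several normalised streams evaluated at time t."""
    total_weight = sum(w for _, w in streams_with_weights)
    return sum(w * value_at(s, t) for s, w in streams_with_weights) / total_weight

# Hypothetical inputs: electrodermal activity at ~4 Hz, accelerometer energy at ~50 Hz.
eda = normalise([(0.00, 2.1), (0.25, 2.4), (0.50, 3.0)], lo=1.0, hi=5.0)
accel = normalise([(i * 0.02, 0.1 * i) for i in range(26)], lo=0.0, hi=3.0)

# One fused control value at the renderer's own rate (e.g., to drive a light or haptic cue).
print(fuse([(eda, 0.7), (accel, 0.3)], t=0.30))
```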
Moreover, the IoMUT demands new analytic approaches:
new analytic tools and algorithms are needed to process
large amounts of music-related data in order to retrieve in-
formation useful to understand and exploit user behaviours.
Regarding the non-technological challenges posed by the
IoMUT, from the artistic perspective, previous attempts to integrate audiences into performances have not completely removed the barriers inhibiting musical interactivity between performers and audiences. For a performance to
be truly interactive, each member of the audience should
be as individually empowered as the performers on stage.
To achieve this, it is necessary to reimagine musical per-
formances and invent new compositional paradigms that
can catalyse, encompass and incorporate multiple and di-
verse contributions into a coherent and engaging whole.
This poses several challenges: how can we compose music
that gives individual freedom to several (potentially hun-
dreds or thousands) of performers so that they feel em-
powered without compromising the quality of the perfor-
mance? How do we manage all of these inputs from individuals who have different backgrounds, sensibilities and skills, and leverage them so that the result is satisfying for all and each person can still recognize his or her own individual contributions? How do musi-
cians compose and rehearse for a concert where they do
not fully control the end result? In short, how do we use
technology to integrate each individual expression into an
evolving whole that binds people together?
A framework such as the IoMUT, and what it entails for artistic and pedagogical agendas, will need to be assessed.
This could pave the way for novel research on audience re-
ception, interactive arts, education and aesthetics. Such
research would for instance help to reflect on the roles of
constraints, agency and identities in the participatory arts.
Finally, issues related to the security and privacy of information should also be addressed, especially if such a system were to be deployed for the masses.
6. CONCLUSIONS
IoMUT relates to wireless networks of Musical Things that
allow for interconnection of and interaction between per-
formers, audiences, and their smart devices. Using IoMUT
we proposed a model transforming the traditional linear
composer-performer-audience musical chain into a musi-
cal mesh interconnecting co-located and/or remote perfor-
mers and audiences. Many opportunities are enabled by
such a model that multiplies possibilities of interaction and
communication between audiences, performers, their in-
struments and machines. On the other hand, the IoMUT
poses both technological and non-technological challenges
that we expect will be faced in upcoming years by both
academic and industrial research.
Acknowledgments
The work of Luca Turchet is supported by an MSCA Individual Fellowship of the European Union's Horizon 2020 research and innovation programme, under grant agreement No. 749561. The work of Carlo Fischione is supported
by the TNG SRA ICT project “TouCHES” - Tactile Cy-
berphysical Networks, funded by the Swedish Research
Council. Mathieu Barthet acknowledges support from the
EU H2020 Audio Commons grant (688382).
7. REFERENCES
[1] L. Atzori, A. Iera, and G. Morabito, “The internet of
things: a survey,” Computer networks, vol. 54, no. 15,
pp. 2787–2805, 2010.
[2] W. Dargie and C. Poellabauer, Fundamentals of wire-
less sensor networks: theory and practice. John Wiley
& Sons, 2010.
[3] E. Borgia, “The Internet of Things vision: Key fea-
tures, applications and open issues,” Computer Com-
munications, vol. 54, pp. 1–31, 2014.
[4] A. Hazzard, S. Benford, A. Chamberlain, C. Green-
halgh, and H. Kwon, “Musical intersections across the
digital and physical,” in Digital Music Research Net-
work Abstracts (DMRN+9), 2014.
[5] A. Willig, “Recent and emerging topics in wireless in-
dustrial communication,” IEEE Transactions on Indus-
trial Informatics, vol. 4, no. 2, pp. 102–124, 2008.
[6] IEEE Std 802.15.4-2006, Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs), IEEE, September 2006.
[7] S. Landström, J. Bergström, E. Westerberg, and D. Hammarwall, “NB-IoT: a sustainable technology for connecting billions of devices,” Ericsson Technology Review, vol. 93, no. 3, pp. 1–12, 2016.
[8] C. Rottondi, M. Buccoli, M. Zanoni, D. Garao, G. Ver-
ticale, and A. Sarti, “Feature-based analysis of the
effects of packet delay on networked musical inter-
actions,” Journal of the Audio Engineering Society,
vol. 63, no. 11, pp. 864–875, 2015.
[9] C. Rottondi, C. Chafe, C. Allocchio, and A. Sarti, “An overview on networked music performance technologies,” IEEE Access, 2016.
[10] G. P. Fettweis, “The Tactile Internet: applications
and challenges,” IEEE Vehicular Technology Maga-
zine, vol. 9, no. 1, pp. 64–70, 2014.
[11] H. Shokri-Ghadikolaei, C. Fischione, G. Fodor,
P. Popovski, and M. Zorzi, “Millimeter wave cellular
networks: A MAC layer perspective, IEEE Transac-
tions on Communications, vol. 63, no. 10, pp. 3437–
3458, 2015.
[12] H. Shokri-Ghadikolaei, C. Fischione, P. Popovski, and
M. Zorzi, “Design aspects of short-range millimeter-
wave networks: A MAC layer perspective, IEEE Net-
work, vol. 30, no. 3, pp. 88–96, 2016.
[13] L. Turchet, A. McPherson, and C. Fischione, “Smart Instruments: Towards an Ecosystem of Interoperable Devices Connecting Performers and Audiences,” in Proceedings of the Sound and Music Computing Conference, 2016, pp. 498–505.
[14] H. Boley and E. Chang, “Digital ecosystems: Princi-
ples and semantics,” in Inaugural IEEE International
Conference on Digital Ecosystems and Technologies,
2007.
[15] ´
A. Barbosa, “Displaced soundscapes: A survey of
network systems for music and sonic art creation,
Leonardo Music Journal, vol. 13, pp. 53–59, 2003.
[16] G. Weinberg, “Interconnected musical networks: To-
ward a theoretical framework, Computer Music Jour-
nal, vol. 29, no. 2, pp. 23–39, 2005.
[17] L. Gabrielli and S. Squartini, Wireless Networked Mu-
sic Performance. Springer, 2016.
[18] S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner, “The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces,” in Proceedings of the 1st International Conference on Tangible and Embedded Interaction, 2007, pp. 139–146.
[19] Y. Wu, L. Zhang, N. Bryan-Kinns, and M. Barthet, “Open symphony: Creative participation for audiences of live music performances,” IEEE Multimedia Magazine, pp. 48–62, 2017.
[20] G. Fazekas, M. Barthet, and M. B. Sandler, Novel
Methods in Facilitating Audience and Performer In-
teraction Using the Mood Conductor Framework, ser.
Lecture Notes in Computer Science. Springer-Verlag,
2013, vol. 8905, pp. 122–147.
[21] L. Zhang, Y. Wu, and M. Barthet, “A web application
for audience participation in live music performance:
The open symphony use case, in Proceedings of the
international conference on New Interfaces for Musi-
cal Expression, 2016, pp. 170–175.
[22] A. Clément, F. Ribeiro, R. Rodrigues, and R. Penha, “Bridging the gap between performers and the audience using networked smartphones: the a.bel system,” in Proceedings of the International Conference on Live Interfaces, 2016.
[23] A. Tanaka, “Mobile music making,” in Proceedings of
the international conference on New Interfaces for Mu-
sical Expression, 2004, pp. 154–156.
[24] N. Weitzner, J. Freeman, S. Garrett, and Y.-L. Chen, “massMobile – an audience participation framework,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2012, pp. 21–23.
[25] B. Bengler and N. Bryan-Kinns, “Designing collabora-
tive musical experiences for broad audiences, in Pro-
ceedings of the 9th ACM Conference on Creativity &
Cognition. ACM, 2013, pp. 234–242.
[26] J. Freeman, “Large audience participation, technology,
and orchestral performance,” in Proceedings of the In-
ternational Computer Music Conference, 2005.
[27] D. J. Baker and D. Müllensiefen, “Hearing Wagner: Physiological responses to Richard Wagner's Der Ring des Nibelungen,” in Proc. Int. Conference on Music Perception and Cognition, 2014.
[28] A. Tanaka, “Musical performance practice on sensor-
based instruments,” Trends in Gestural Control of Mu-
sic, vol. 13, pp. 389–405, 2000.
[29] T. Machover and J. Chung, “Hyperinstruments: Musi-
cally intelligent and interactive performance and cre-
ativity systems, in Proceedings of the International
Computer Music Conference, 1989.
[30] E. Miranda and M. Wanderley, New digital musical
instruments: control and interaction beyond the key-
board. AR Editions, Inc., 2006, vol. 21.
[31] L. Turchet, M. Benincaso, and C. Fischione, “Exam-
ples of use cases with smart instruments,” in Proceed-
ings of AudioMostly Conference, 2017 (In press).
[32] S. Papetti, M. Civolani, and F. Fontana,
“Rhythm’n’shoes: a wearable foot tapping inter-
face with audio-tactile feedback, in Proceedings of
the International Conference on New Interfaces for
Musical Expression, 2011, pp. 473–476.
[33] A. Mazzoni and N. Bryan-Kinns, “Mood glove: A hap-
tic wearable prototype system to enhance mood music
in film,” Entertainment Computing, vol. 17, pp. 9–17,
2016.
[34] S. Serafin, C. Erkut, J. Kojs, N. Nilsson, and R. Nor-
dahl, “Virtual reality musical instruments: State of the
art, design principles, and future directions,” Computer
Music Journal, vol. 40, no. 3, pp. 22–40, 2016.
[35] D. Mazzanti, V. Zappi, D. Caldwell, and A. Brogni, “Augmented stage for participatory performances,” in Proceedings of the International Conference on New Interfaces for Musical Expression, 2014, pp. 29–34.
[36] F. Berthaut, D. Plasencia, M. Hachet, and S. Subra-
manian, “Reflets: Combining and revealing spaces for
musical performances,” in Proceedings of the interna-
tional conference on New Interfaces for Musical Ex-
pression, 2015.
[37] I. Poupyrev, R. Berry, J. Kurumisawa, K. Nakao,
M. Billinghurst, C. Airola, H. Kato, T. Yonezawa, and
L. Baldwin, “Augmented groove: Collaborative jam-
ming in augmented reality, in ACM SIGGRAPH 2000
Conference Abstracts and Applications, 2000, p. 77.
[38] D. Berthaut, M. Desainte-Catherine, and M. Hachet,
“Interacting with 3d reactive widgets for musical per-
formance,” Journal of New Music Research, vol. 40,
no. 3, pp. 253–263, 2011.
[39] F. Berthaut and M. Hachet, “Spatial interfaces and in-
teractive 3d environments for immersive musical per-
formances,” IEEE Computer Graphics and Applica-
tions, vol. 36, no. 5, pp. 82–87, 2016.
[40] D. Hall and J. Llinas, “Multisensor data fusion,” in
Multisensor Data Fusion. CRC press, 2001.
[41] J. Burgoyne, I. Fujinaga, and J. Downie, “Music in-
formation retrieval, A New Companion to Digital Hu-
manities, pp. 213–228, 2016.
[42] O. Bown, A. Eldridge, and J. McCormack, “Under-
standing interaction in contemporary digital music:
from instruments to behavioural objects, Organised
Sound, vol. 14, no. 2, pp. 188–196, 2009.
[43] W. Woszczyk, J. Cooperstock, J. Roston, and
W. Martens, “Shake, rattle, and roll: Getting immersed
in multisensory, interactive music via broadband net-
works, Journal of the Audio Engineering Society,
vol. 53, no. 4, pp. 336–344, 2005.
[44] M. Slater, “Grand challenges in virtual environments,”
Frontiers in Robotics and AI, vol. 1, p. 3, 2014.
The power of interactive 3D graphics, immersive displays, and spatial interfaces is still under-explored in domains where the main target is to enhance creativity and emotional experiences. This article presents a set of work the attempts to extent the frontiers of music creation as well as the experience of audiences attending to digital performances. The goal is to connect sounds to interactive 3D graphics that musicians can interact with and the audience can observe.