Figure 4 - uploaded by Kerstin Bunte
Diagram of the implemented software components for the Relevance-based Interactive Sonification System.

Source publication
Article
Full-text available
This paper presents a novel approach for the interactive optimization of sonification parameters. In a closed loop, the system automatically generates modified versions of an initial (or previously selected) sonification via gradient ascent or evolutionary algorithms. The human listener directs the optimization process by providing relevance fee...
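The closed loop described in the abstract can be sketched as a gradient-ascent update on a sonification parameter vector. This is a minimal illustration, not the authors' implementation: the function names are assumptions, and `relevance_fn` stands in for the human listener's relevance feedback, which in the real system arrives interactively.

```python
import numpy as np

def optimize(params, relevance_fn, step=0.1, eps=1e-3, iters=20):
    """Sketch: finite-difference gradient ascent on a relevance
    score over sonification parameters (hypothetical names)."""
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(len(p)):
            d = np.zeros_like(p)
            d[i] = eps
            # central difference estimate of d(relevance)/d(p_i)
            grad[i] = (relevance_fn(p + d) - relevance_fn(p - d)) / (2 * eps)
        p = p + step * grad  # ascend toward higher relevance
    return p

# toy relevance surrogate: prefers parameters near (0.5, 0.5)
best = optimize([0.0, 1.0], lambda p: -np.sum((p - 0.5) ** 2))
```

In the paper's setting the score is not a closed-form function; each gradient or candidate evaluation corresponds to the listener rating or selecting a sonification.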

Contexts in source publication

Context 1
... system consists mainly of three SuperCollider classes: MasterControl, SonControl and SonOptParMap, illustrated in Fig. 4, which use a self-built Octave function library. The start script allows interactive textual specification of the data set to be used, the synthesizer code (as SC Synth) and various parameters of the optimization system. From here, a MasterControl object is instantiated with data set X, the number of descendants k per iteration, ...
Context 2
... demonstrate basic operation of our new approach we first use a 2-dimensional benchmark data set (see Fig. 6.1), which consists of two toppled, somewhat overlapping classes, and a single-parameter sonification, where the onset of granular sound events is the only parameter. Specifically, for sound synthesis we use the very simple synthesizer code Out.ar(0, Pan2.ar(Blip.ar(440, 4), 0, EnvGen.kr(Env.perc(0.001, ...

Similar publications

Conference Paper
Full-text available
Mobile devices have been used in soundscape installations and performances over the past decade or longer, often to emphasize social interaction. Multichannel sonification has been found to successfully represent data describing kinematic phenomena. However, there are few if any examples where these two approaches are combined. The Locust Wrath pro...
Conference Paper
Full-text available
Auditory displays are promising for informing operators about hazards or objects in the environment. However, it remains to be investigated how to map distance information to a sound dimension. In this research, three sonification approaches were tested: Beep Repetition Rate (BRR) in which beep time and inter-beep time were a linear function of dis...
Conference Paper
Full-text available
Sonifications are auditory displays that can help operators monitor safety critical data. However, there are very few guidelines that describe the design of effective sonifications. Two parameters that may impact the effectiveness of a sonification are discriminability and perceived urgency. We designed and evaluated four sonifications which varied...

Citations

... For that reason we proposed and implemented a method that veils all parameter details from the participants and leaves them solely the task of selecting one of 4 new candidate sounds derived from the starting sound (or keeping the previous sound). The method is akin to evolutionary optimization and has been introduced and used by the authors before in [8] for the refinement of sonification mappings. The variations are obtained by modifying the initial parameter vector by means of random mutations. ...
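The candidate-generation step described in this excerpt can be sketched as random mutation of the current parameter vector. All names, the Gaussian mutation model, and the mutation scale `sigma` are assumptions for illustration, not details from the cited implementation.

```python
import random

def propose_candidates(params, k=4, sigma=0.1, rng=None):
    """Generate k mutated copies of the current sonification
    parameter vector by adding Gaussian noise (sketch)."""
    rng = rng or random.Random()
    return [[x + rng.gauss(0.0, sigma) for x in params] for _ in range(k)]

def interactive_step(params, choose, k=4, sigma=0.1, rng=None):
    """One iteration: the listener either keeps the current sound
    (choose returns None) or selects one of k candidates by index."""
    candidates = propose_candidates(params, k, sigma, rng)
    idx = choose(candidates)
    return params if idx is None else candidates[idx]

# example: a stand-in chooser that always picks the first candidate;
# in the real system a human picks after hearing each sonification
current = [0.5, 0.2, 0.8]
current = interactive_step(current, lambda cands: 0)
```

Because the listener only ever compares rendered sounds, all parameter details stay hidden, matching the veiling described above.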
Conference Paper
This paper presents a novel approach for using sound to externalize emotional states so that they become an object for communication and reflection both for the users themselves and for interaction with other users such as peers, parents or therapists. We present an abstract, vocal, and physiology-based sound synthesis model whose sound spaces each cover various emotional associations. The key idea in our approach is to use an evolutionary optimization approach to enable users to find emotional prototypes which are then in turn fed into a kernel-regression-based mapping to allow users to navigate the sound space via a low-dimensional interface, which can be controlled in a playful way via tablet interactions. The method is intended to be used for supporting people with autism spectrum disorder.
... The number of publications covering the subjects is rapidly increasing each year (comp. e.g. [1–7] and references therein) and they concern various fields of applications, including crucially important medical areas. ...
... There are many open issues indicated in the recent research approaches worth focusing on. The number of publications covering the subjects is rapidly increasing each year (comp. e.g. [1–7] and references therein) and they concern various fields of applications, including crucially important medical areas. The main arguments in the recent developments refer to the fact that humans are adapted for interacting with their physical environment using all their senses, and that, in particular, through exploratory interaction with sound quite a different perspective may be gained that is not obvious in visual rendering. ...
Article
Full-text available
A procedure of image data sonification has been described as an alternative to the traditional graphical visualization approach. The state of the art and the essential features of the method in various data analyses have been presented. Original interface designs have been introduced and potential applications in medical practice have been illustrated and discussed.
Conference Paper
A sonification is a rendering of audio in response to data, and is used in instances where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how the end users will use the system. This makes it an ideal candidate for end-user development where the user plays a role in the creation of the design. We present a model for sonification that utilizes user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution we utilize soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.