Figure 2 - uploaded by Annika Neidhardt
Conference room with a size of 10.3 m × 5.7 m × 3.1 m and all its furniture. At positions 1-5 the room acoustics were captured with a KEMAR HATS at an azimuthal resolution of 5° for the auralization in the experiment. The source was kept at the same distance and angle relative to each of the listening positions.

Source publication
Conference Paper
Full-text available
Virtual auditory environments are of increasing interest in science, but also for industrial applications. With tracking devices, the auditory scene can be explored interactively. Appropriate room simulation algorithms are essential for the plausibility and naturalness of a dynamic scene, as well as for orientation within it. It is well known, tha...

Contexts in source publication

Context 1
... conference room with a size of 10.31 m × 5.76 m × 3.07 m was chosen for the experiment. It has a volume of V = 182.3 m³ and a broadband reverberation time of T60 = 0.65 s. Thus, the critical distance is about r_crit = 0.95 m. Fig. 2 provides an overview of the furniture and its arrangement within the room, as well as the different listening positions where the sound field was captured. Table 1 contains the exact coordinates of the measured positions within the room. Signal processing and reproduction setup: The experiment was carried out in a listening laboratory ...
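The stated critical distance follows from the common approximation r_crit ≈ 0.057·√(V/T60). A minimal Python sketch (not part of the publication, added only to make the arithmetic explicit) reproduces the reported value:

```python
# Worked check of the critical distance reported above, using the common
# approximation r_crit ≈ 0.057 * sqrt(V / T60) (diffuse field, omnidirectional
# source). Not from the paper itself; added only to make the arithmetic explicit.
import math

V = 182.3    # room volume in m^3 (10.31 m x 5.76 m x 3.07 m)
T60 = 0.65   # broadband reverberation time in s

r_crit = 0.057 * math.sqrt(V / T60)
print(f"critical distance ≈ {r_crit:.2f} m")  # ≈ 0.95 m, matching the value above
```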
Context 2
... extra appointment was set up one or two days before the actual test. First, photos of the different measurement positions and the room were shown to the participant. A schematic picture of the room and all five positions (Fig. 2) was available to the participant throughout the whole experiment. Then the participants started listening to each of the positions in comparison to each other. They were asked to explore the different room acoustics and find differences between the positions. In the end, the participant could choose the two positions he or she found easiest to ...


Citations

... Identification was better for the omnidirectional source, and there was no effect of static vs dynamic synthesis. In Neidhardt (2016), emphasis was put on training. Participants were presented with pictures and a map of a meeting room along with dynamically rendered binaural sound. ...
... All in all, prior work shows the existence of echolocation as an expert ability on the one hand, but suggests that for novice listeners without special training, associating the position in a room with the acoustics based on a map (Neidhardt, 2016; Neidhardt et al., 2016; Shinn-Cunningham and Ram, 2003) or 360° pictures presented on an HMD (Klein et al., 2017) is extremely hard. However, all experiments with novice listeners were done using headphone rendering, without individualized binaural synthesis, and they did not offer natural six degrees of freedom (6DoF) movements of the listener. ...
Article
Full-text available
A common aim in virtual reality room acoustics simulation is accurate listener position dependent rendering. However, it is unclear whether a mismatch between the acoustics and visual representation of a room influences the experience or is even noticeable. Here, we ask if listeners without any special experience in echolocation are able to identify their position in a room based on the acoustics alone. In a first test, direct comparison between acoustic recordings from the different positions in the room revealed clearly audible differences, which subjects described with various acoustic attributes. The design of the subsequent experiment allows participants to move around and explore the sound within different zones in this room while switching between visual renderings of the zones in a head-mounted display. The results show that identification was only possible in some special cases. In about 74% of all trials, listeners were not able to determine where they were in the room. The results imply that audible position dependent room acoustic rendering in virtual reality may not be noticeable under certain conditions, which highlights the importance of evaluation paradigm choice when assessing virtual acoustics.
... After a short training, performance was slightly above chance, and an improvement was observed in the course of the test. In [3], participants were presented with pictures and a map of a meeting room along with dynamically rendered binaural sound. In a training session, they were allowed to add items to a single choice comparison one-by-one. ...
... All in all, the prior work suggests that associating the position in a room with the acoustics based on a map [1,2,3] or 360° pictures presented on an HMD [4] is extremely hard, unless extensive training is performed. However, all these experiments were done using headphone rendering, without individualized binaural synthesis, and without offering 6DoF movement. ...
... The room condition and the positions were selected carefully. The dimensions (8.7 m × 6.18 m × 3.5 m) are in a similar range as the rooms used in [1,3]. However, the variable acoustics room "Arni" [6] used here allows for greater variability within the room. ...
Conference Paper
Full-text available
Knowing how well listeners can perform self-localization based on room reflections is important for designing acoustic rendering with 6 Degrees-of-Freedom (6DoF). In contrast to earlier work on echolocation with self-produced sounds, we study to what extent self-localization is possible using external sounds. To assess this, we present a novel experimental design that uses a virtual blindfold technique: participants listen to sound sources in a room while at the same time they are visually presented with a 3D-scanned model of the room on a head-mounted display. Whilst staying at one position in the real room, they can switch between different positions in the virtual representation. Their task is to identify the perspective that matches their physical location. Results of the pilot experiment, conducted in a small room with irregularly distributed absorption, indicate that identifying one’s position in the room solely based on the room acoustics is difficult. The new virtual blindfold design allows for assessing self-localization in diverse scenarios and permits conclusions about the need for 6DoF acoustic rendering.
... Based on these findings, Neidhardt (2016) and Klein et al. (2017a) investigated the influence of head movements and training on the performance in position-identification tasks in a room. It was shown that targeted head movements had no significant effect on this kind of task. ...
Chapter
Full-text available
It is pointed out that beyond reproducing the physically correct sound pressure at the eardrums, more effects play a significant role in the quality of the auditory illusion. In some cases, these can dominate perception and even overcome physical deviations. Perceptual effects like the room-divergence effect, additional visual influences, personalization, pose and position tracking as well as adaptation processes are discussed. These effects are described individually, and the interconnections between them are highlighted. With the results from experiments performed by the authors, the perceptual effects can be quantified. Furthermore, concepts are proposed to optimize reproduction systems with regard to those effects. One example could be a system that adapts to varying listening situations as well as individual listening habits, experience and preference.
... Neidhardt [4] provided photographs of the different positions and the illustration shown in Fig. 2 for the participant to get an idea of the room and the potential listening perspectives. ...
Conference Paper
Full-text available
A data set was created to study the position-dependent perception of room acoustics in the case of a small conference room. Binaural room impulse responses (BRIRs) were measured with a KEMAR 45BA head-and-torso simulator at 5 different potential listening positions. To achieve comparable direct sound conditions, the Genelec 1030A two-way loudspeaker was always placed at a distance of 2.5 m to the front. In one scenario, the loudspeaker was directed towards the listening position. In the other scenario, it was turned by 180° to realize an indirect reproduction with low direct sound energy. Furthermore, an mh acoustics Eigenmike was placed at each of the five listening positions to record 32-channel directional room impulse responses (DRIRs). Omnidirectional room impulse responses (RIRs) were measured at the position where the center of the head had been placed. Additionally, 360° visual footage was captured with a GoPro Omni spherical camera array to provide audiovisual impressions of the listening situation at the five positions. The data set is documented in detail and is freely available for download.
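As an illustration of how such a data set might be indexed by a dynamic renderer (the file naming scheme and helper function below are assumptions for this sketch, not the documented layout of the data set), the BRIRs can be addressed by listening position, source-orientation scenario, and head azimuth quantized to the 5° measurement grid:

```python
# Hypothetical indexing sketch for a BRIR set measured at 5 listening positions
# with 5° azimuthal resolution and two source-orientation scenarios. The file
# naming scheme is an assumption, not the documented structure of the data set.
import os

AZIMUTH_STEP = 5  # degrees, as in the measurement

def brir_filename(position, scenario, head_azimuth_deg, root="brirs"):
    """Map (position 1-5, 'direct'/'indirect', tracked head azimuth) to a file.

    The head azimuth is wrapped to [0, 360) and rounded to the nearest measured
    5° step, so a tracker reading of 42.7° selects the 45° BRIR.
    """
    az = round((head_azimuth_deg % 360) / AZIMUTH_STEP) * AZIMUTH_STEP % 360
    return os.path.join(root, f"pos{position}_{scenario}_az{az:03d}.wav")

print(brir_filename(3, "direct", 42.7))  # -> brirs/pos3_direct_az045.wav
```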
... In contrast, several studies have shown that, at least in small rooms, distinguishing or identifying a certain listening position only by its room acoustical characteristics is rather difficult and often not possible [27,28,14]. This is valid in particular if the listeners do not know the room, are not experienced listeners, and if the direct sound energy is high compared to the reverberation. ...
... The results of several studies suggest a limited sensitivity of humans to variations in the spatial and temporal pattern of early reflections [27,28,40]. In particular, the small changes in the reflection pattern in the case of room convergence, when the variation of the listening position is rather small, seem to be negligible [29], at least with regard to plausibility. ...
Conference Paper
Full-text available
An auditory augmented reality approach is presented which adds virtual audio objects to a real acoustic environment. A dynamic binaural synthesis system is used to reproduce the ear signals. The binaural room impulse responses (BRIRs) used are synthesized from spatially sparse measured BRIRs of the listening room. The auditory augmented reality system is part of the research project Personalized Auditory Reality, which addresses a manipulable auditory environment. Besides the technical realization, one of the challenges of the project is the generation of a plausible auditory perception. This work presents the BRIR synthesis approach and the auditory scene exploration it enables. The viability of the system is evaluated with respect to the quality features localization, externalization, and overall quality of the spatial auditory perception.
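For context, the following is a minimal sketch of the dynamic binaural synthesis principle itself: each mono block is convolved with the BRIR selected for the current tracked head orientation, and a short crossfade masks the switch between BRIRs. The block size, the linear crossfade, and the omission of proper overlap-add are simplifications of this sketch, not properties of the presented system.

```python
# Minimal dynamic binaural synthesis sketch with BRIR switching. A real renderer
# would use overlap-add or partitioned convolution; the convolution tail is
# discarded here for brevity.
import numpy as np
from scipy.signal import fftconvolve

def render_block(block, brir_old, brir_new):
    """Convolve one mono block (shape: n) with two-channel BRIRs
    (shape: taps x 2) and crossfade from the old to the new BRIR to avoid
    audible artifacts when the tracked head orientation changes."""
    n = len(block)
    fade_in = np.linspace(0.0, 1.0, n)
    out = np.zeros((n, 2))
    for ch in range(2):  # 0 = left ear, 1 = right ear
        old = fftconvolve(block, brir_old[:, ch])[:n]  # tail discarded for brevity
        new = fftconvolve(block, brir_new[:, ch])[:n]
        out[:, ch] = (1.0 - fade_in) * old + fade_in * new
    return out
```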
... The BRIR measurements for this study were conducted in a small conference room (10.3 m × 5.7 m × 3.1 m) with a head-and-torso simulator KEMAR 45BA at 4 different positions with an azimuth resolution of 5° [17]. For all positions, the source was placed in the same relation to the respective listening position to minimize differences in the direct sound. ...
Conference Paper
Virtual acoustic environments can provide a plausible reproduction of real acoustic scenes. Since the perceived quality is based on expectations and previous sound exposure, a reliable measurement of the listening experience is difficult. Listeners are able to learn how to interpret spatial cues and room reflections for certain tasks. To discuss the relevance of auditory adaptation effects, this publication summarizes a series of listening experiments which show adaptation processes affecting localization accuracy, externalization, and the ability of a listener to identify their own position in a virtual acoustic environment.
... The temporal and spatial structure of all reflections changes with the position as well. It has been shown in the past that the sensitivity to changes in the reflection pattern is limited [15,16]. However, a lack of adaptation might affect the plausibility of the walk-through. ...
Conference Paper
In this paper, the interactive approaching motion towards a virtual loudspeaker created with dynamic binaural synthesis is investigated. A realization based on a given set of measured binaural room impulse responses (BRIRs) was rated as plausible by all participants in a previous experiment. In this study, the same BRIR data are systematically simplified to investigate the consequences for perception. This is of interest in the context of position-dynamic reproduction, related interpolation and extrapolation approaches, as well as attempts at parameterization. The potential of inaudible data simplification is highly related to human sensitivity to position-dependent changes in room acoustics. The results suggest a high potential for simplification, while some kinds of BRIR impairment clearly affect plausibility.
... In [12], the ability of humans to tell their position in a room was studied, which can be considered an even more complicated task than telling the distance to nearby surfaces. Dynamic binaural reproduction of room impulse responses allowing listener head movements was utilized in the study. ...
Article
The audibility of the acoustic effect of a reflective surface close to a subject is studied with measurements and psychoacoustic listening tests. The effect is measured with a reflective surface in anechoic and reverberant conditions using a binaural dummy head, and the results are compared with results from a simple image-source model of the situation. The effects caused by first reflection and interaction between the subject and the surface are shown. A listening experiment is conducted where the ability of subjects to perceive the distance of a surface is measured. Some of the listeners are shown to conduct the task reliably, and others provide either a random response or consistently reversed responses. This shows that the acoustic cues of the distance of the object are audible but do not always lead to correct perception of proximity to a surface.
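To make the geometry behind such an image-source model concrete, the following toy calculation (all distances are assumed for illustration and are not taken from the article) shows the extra delay and level difference of the first reflection from a nearby surface:

```python
# Illustrative image-source calculation (all distances are assumed, not taken
# from the article): extra delay and level difference of the first reflection
# from a wall behind the listener, with source, listener and wall on one axis.
import math

c = 343.0      # speed of sound in m/s
d_src = 2.0    # direct source-listener distance in m (assumed)
d_wall = 0.5   # listener-wall distance in m (assumed)

d_refl = d_src + 2.0 * d_wall                      # path via the image source
extra_delay_ms = (d_refl - d_src) / c * 1e3
level_diff_db = 20.0 * math.log10(d_src / d_refl)  # 1/r distance law only

print(f"extra delay of first reflection ≈ {extra_delay_ms:.2f} ms")  # ≈ 2.92 ms
print(f"level relative to direct sound ≈ {level_diff_db:.1f} dB")    # ≈ -3.5 dB
```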
... In another experiment by Neidhardt [19] a training approach was applied for a similar task. First, the participant could compare directly between items in order to find audible differences and to try to remember them. ...
... In this experiment, the BRIR data studied in [19] was used. The measurements were conducted in a small conference room (10.3 m × 5.7 m × 3.1 m) ...
... The measurements were conducted in a small conference room (10.3 m × 5.7 m × 3.1 m) with a head-and-torso simulator KEMAR 45BA at 5 different positions with an azimuth resolution of 5° [19]. Fig. 1 shows the spatial arrangement during the measurement. ...
Conference Paper
This paper presents an investigation of the training effect on the perception of position-dependent room acoustics. Listeners are trained to distinguish the acoustics at different listening positions and to detect mismatches between the visual and the acoustical representations. In virtual acoustic environments, simplified representations of room acoustics are often used. This works well when fictitious or unknown rooms are auralized, but may be critical for real rooms. The results show that ten out of 20 participants could significantly increase their accuracy in choosing the correct combinations after training. The publication investigates the underlying processes of the adaptation effect and the reasons for the individual differences. The relevance of these findings for acoustic virtual/augmented reality applications is discussed.
Thesis
Full-text available
The goal of technical evolutions in the context of entertainment electronics is to improve the user experience by providing visuals and acoustics in the best possible way. With modern virtual and augmented reality devices and applications, the goal of a reproduction indistinguishable from reality has become more tangible. When the listener is no longer able to distinguish artificial sound sources from real ones, the term auditory illusion is used. In order to achieve such illusions, different technical challenges need to be mastered. However, the assumption that an exact replica of the ear signals leads to the same perceptions as in the corresponding real-life situation is not correct. Fundamental mechanisms of human perception, such as the integration of cues from different modalities and the dependency on expectations and experience, add another layer of complexity. These expectations can change depending on prior sound exposure. In the context of spatial hearing, this means that listeners are probably able to learn how to interpret spatial cues. Such mechanisms and their effect on the perceived quality of spatial sound reproduction systems are the scope of this work. Perceptual studies investigate the learning of spatial localization cues and adaptation mechanisms related to room acoustic perception. Quality deficits due to mismatched ear signals are measured, and it is shown how quality ratings can change depending on training. The results suggest that learning and adaptation processes are a key factor for the establishment of an auditory illusion. The practical relevance of such effects and their underlying principles are discussed.