Article

The Influence of Vibrations on Musical Experience


Abstract

The coupled perception of sound and vibration is a well-known phenomenon during live pop or organ concerts. However, even during a symphonic concert in a concert hall, sound can excite perceivable vibrations on the surface of the body. This study analyzes the influence of audio-induced vibrations on the perceived quality of the concert experience. To this end, sound and seat vibrations are controlled separately in an audio reproduction scenario. Because the correlation between sound and vibration is naturally strong, vibrations are generated from audio recordings using various approaches. Different parameters during this process (frequency and intensity modifications) are examined in relation to their perceptual consequences using psychophysical experiments. It can be concluded that vibrations play a significant role in the perception of music.
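One common way to derive a seat-vibration signal from an audio recording, in the spirit of the generation approaches the abstract describes, is to low-pass filter the audio and scale it. The following is a minimal sketch assuming a Butterworth filter with an illustrative 100 Hz cutoff and unity gain; these are not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def derive_seat_vibration(audio, fs, cutoff_hz=100.0, gain=1.0):
    """Derive a seat-vibration drive signal from an audio track by
    low-pass filtering and scaling. Cutoff and gain are illustrative
    assumptions, not the study's actual parameters."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    vib = sosfiltfilt(sos, audio)  # zero-phase filtering keeps time alignment
    return gain * vib

# Usage: 1 s of a 40 Hz tone plus a 1 kHz tone; only the bass survives
fs = 8000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 1000 * t)
vib = derive_seat_vibration(audio, fs)
```

Zero-phase filtering (`sosfiltfilt`) is used here because time alignment between sound and vibration matters perceptually; a real-time renderer would instead accept the latency of a causal filter.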


... The emotional experience when listening to music or viewing videos is an antecedent of the value assigned to the content. Thus far, various studies have sought to enhance the emotional experiences evoked by music and videos by presenting haptic stimuli to the viewer's torso [1][2][3][4][5]. For example, presenting vibration stimuli to the upper body while viewing a horror movie increased the sense of fear [1]. ...
... The second method transmitted a sound source one octave lower than the original sound to the vibrator. Previous studies used a method to reduce the frequency of sounds to convert them into vibrational stimuli [3,14]. In a preliminary test, we used vibrations based on sounds lowered by one and two octaves, as well as vibrations one octave higher than the original sound stimuli. ...
... After experiencing each stimulus, participants answered a questionnaire. They rated the degree of relaxation they felt while listening to the music using a 9-point Likert scale (1–9). A score of 1 indicated that they did not feel relaxed during the stimulus period, while higher scores indicated greater relaxation. ...
Article
When people listen to music, vibratory stimuli that are synchronized with sounds enhance the affective effects of music. We investigated the effects of vibratory stimuli on relaxation. Vibratory stimuli were delivered to the cymba concha, an auricular part, concentrating on the vagus nerves. Five vibration patterns were prepared based on music that evoked relaxation. A user study involving nine participants found that they felt more relaxed under the condition in which a vibration stimulus was generated from sounds lowered by one octave than under the condition with no vibratory stimuli applied. This effect was more prominent for one of the two musical sounds tested.
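The octave-lowering used above to turn sounds into vibrotactile stimuli can be approximated by resampling: stretching the signal to twice its length halves every frequency when it is played back at the original rate. This is a minimal sketch of the general idea, not the cited authors' actual pipeline, which may use a dedicated pitch shifter.

```python
import numpy as np
from scipy.signal import resample

def octave_down(audio):
    """Crude octave-down transform for vibrotactile rendering:
    FFT-based resampling to twice the length halves all frequencies
    at the original playback rate (and doubles the duration)."""
    return resample(audio, 2 * len(audio))

# Usage: a 200 Hz tone becomes a 100 Hz vibration signal
fs = 4000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)
vib = octave_down(tone)  # 100 Hz when played back at fs
```

Note the doubled duration; a production pitch shifter (e.g. phase vocoder) would preserve timing, which matters for synchrony with the music.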
... Emotional changes induced by sounds and videos may influence the judgment of their value. Haptic stimuli to the human body affect emotions while experiencing audiovisual content [1][2][3][4][5][6][7][8][9][10]. For example, vibrotactile stimuli delivered through chairs to the torso are synchronized with movie soundtracks to improve emotional impressions [6,7]. ...
... Haptic stimuli to the human body affect emotions while experiencing audiovisual content [1][2][3][4][5][6][7][8][9][10]. For example, vibrotactile stimuli delivered through chairs to the torso are synchronized with movie soundtracks to improve emotional impressions [6,7]. Vibratory stimuli are typically applied to the torso, and stimulating the ears is expected to produce similar effects [10]. ...
... Extant studies adopted vibratory stimuli that gave rise to arousing or unpleasant feelings [1][2][3][4][5][6][7][8][9][10]. Vibratory stimuli alone may not effectively induce pleasant and non-arousing feelings. ...
Article
We investigated whether stroking stimuli near the external auditory meatus while listening to sounds enhances pleasant feelings. Previous studies have demonstrated that vibratory stimuli manipulate emotions evoked by watching videos or listening to sounds; however, none have investigated the effects of stroking stimuli near the external auditory meatus through which vagus nerves extend. Participants were exposed to three types of sounds with and without the stroking stimuli to their ears, and selected one stimulus condition that induced intense feelings. When the sound stimuli were presented with the stroking stimuli to the ear, joyful and pleasant feelings were intensified, whereas depressing feelings weakened. Stroking stimulation of the outer ears in combination with sound stimulation was suggested to induce positive feelings.
... Another potential factor linking bass and movement is music's ability to stimulate the tactile sense. Moving sound sources cause vibrations in air molecules and objects, which can displace eardrums and mechanoreceptors on the body (Merchel & Altinsoy, 2014;Verrillo, 1992). Consequently, loud music can be heard through the ears and felt as vibrations on the skin. ...
... Tactile and auditory rhythms are integrated to produce a coherent rhythmic percept (Huang et al., 2012; Roy, Lagarde, Dotov, & Dalla Bella, 2017) and recruit a common beat-detection network in the brain (Araneda, Renier, Ebner-Karestinos, Dricot, & De Volder, 2017). In a music-perception study, seat vibrations (low-pass filtered from the music) increased judgments of the overall quality of music (Merchel & Altinsoy, 2014). Although people can clearly use tactile information in music perception, it remains unclear whether tactile stimulation increases musical engagement and enjoyment and contributes to the bass-movement connection. ...
... Multisensory encoding is thought to minimize variance in the perceptual estimate of a stimulus, and accordingly can improve performance in a variety of tasks (Elliott, Wing, & Welchman, 2010;Ernst & Bülthoff, 2004). Auditory-tactile multisensory stimulation (compared with unimodal stimulation) can improve movement timing in gait (Roy et al., 2017), spatiotemporal variability in arm movements (Zelic, Mottet, & Lagarde, 2012), and quality judgments of music (Merchel & Altinsoy, 2014). Auditory and tactile rhythmic information are integrated into a coherent multisensory percept (Huang et al., 2012) and are coded in a common brain network (Araneda et al., 2017). ...
Article
Full-text available
Music is both heard and felt: tactile sensation is especially pronounced for bass frequencies. Although bass frequencies have been associated with enhanced bodily movement, time perception, and groove (the musical quality that compels movement), the underlying mechanism remains unclear. In 2 experiments, we presented high-groove music to auditory and tactile senses and examined whether tactile sensation affected body movement and ratings of enjoyment and groove. In Experiment 1, participants (N = 22) sat in a parked car and listened to music clips over sound-isolating earphones (auditory-only condition), and over earphones plus a subwoofer that stimulated the body (auditory-tactile condition). Experiment 2 (N = 18) also presented music in auditory-only and auditory-tactile conditions, but used a vibrotactile backpack to stimulate the body and included 2 loudness levels. Participants tapped their finger with each clip, rated each clip, and, in Experiment 1, we additionally video-recorded spontaneous body movement. Results showed that the auditory-tactile condition yielded more forceful tapping, more spontaneous body movement, and higher ratings of groove and enjoyment. Loudness had a small, but significant, effect on ratings. In sum, findings suggest that bass felt in the body produces a multimodal auditory-tactile percept that promotes movement through the close connection between tactile and motor systems. We discuss links to embodied aesthetics and applications of tactile stimulation to boost rhythmic movement and reduce hearing damage. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
... Haptic stimuli to the human body affect emotions when experiencing audiovisual content (Branje et al., 2014;Fukushima et al., 2014;Giroux et al., 2019;Hasegawa et al., 2019;Karafotias et al., 2017;Lemmens et al., 2009;Makioka et al., 2022;Merchel & Altinsoy, 2014;Okazaki et al., 2015;Sakurai et al., 2014;Tara et al., 2023). For example, vibrotactile stimuli delivered through chairs to the torsos are synchronized with movie soundtracks to improve emotional impressions (Giroux et al., 2019;Merchel & Altinsoy, 2014). ...
... Haptic stimuli to the human body affect emotions when experiencing audiovisual content (Branje et al., 2014;Fukushima et al., 2014;Giroux et al., 2019;Hasegawa et al., 2019;Karafotias et al., 2017;Lemmens et al., 2009;Makioka et al., 2022;Merchel & Altinsoy, 2014;Okazaki et al., 2015;Sakurai et al., 2014;Tara et al., 2023). For example, vibrotactile stimuli delivered through chairs to the torsos are synchronized with movie soundtracks to improve emotional impressions (Giroux et al., 2019;Merchel & Altinsoy, 2014). Furthermore, vibratory stimuli to the epigastric fossa region increase excitement and fear experienced while watching sports and horror videos, respectively (Makioka et al., 2022;Tara et al., 2023). ...
Article
We investigated the effects of stroking stimuli near the external auditory meatus on the emotions aroused by the experience of listening to sounds. Previous studies demonstrated that vibratory stimuli to the upper body manipulate emotions while watching videos or listening to sounds; however, none examined the effects of stroking stimuli near the external auditory meatus, through which the vagus nerve extends. This study investigated the effects of stroking stimuli to the outer ear on emotions for each of six emotionally evocative sounds that arouse pleasant or unpleasant feelings. They comprised the sounds of rain, shampoo, insects’ buzz, and horrifying winds. When the sound stimuli were presented with stroking stimuli at the ear, participants’ emotions mostly changed to be more pleasant and sleepy, that is, less arousing. These findings are expected to help in the development of haptic interfaces for more emotionally appealing content.
... Vibration also plays a significant role in the perception of music, as noted by Merchel and Altinsoy [7]: "The coupled perception of sound and vibration is a well-known phenomenon during live pop or organ concerts. However, even during a symphonic concert in a concert hall, sound can excite perceivable vibrations on the surface of the body". ...
... The low-frequency content of music, such as that often produced by percussion and some bass and keyboard instruments, is capable of producing mechanosensation. Music in a concert hall, when heard without sound reinforcement, contains the Infra-sub content [7], which adds to the overall listening experience. However, when sound reinforcement is employed, there will be a limited frequency range reproduced by the public address speaker system. ...
Conference Paper
The purpose of this study is to investigate whether audience perception at large arena shows and music festivals could be improved by the addition of Infra-sub. Infra-sub refers to low-frequency audio content below 50 Hz, and its presence might help provide a more involving and engaging audience experience at lower sound pressure levels. This paper investigates whether an increase in low-frequency content (below 50 Hz), both in terms of magnitude and frequency range, affects a listener’s preferred listening level. The study was conducted in real-life situations at a number of European indoor arenas using a large-format line array system with low-frequency extension. It was shown that preferred listening levels were lower when low-frequency content (Infra-sub) was increased. This result suggests that increasing Infra-sub content allows the environmental impact of large arena concerts and music festivals to be reduced whilst maintaining a positive listening experience. Although the study had some limitations in sample size and range of participants, it still underlines the beneficial social, environmental and health implications of the use of Infra-sub, stemming from an overall reduction in sound pressure level at arena concerts and open-air music festivals.
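One simple way to quantify how much Infra-sub a programme contains is the fraction of its spectral energy below 50 Hz. The following is a hedged sketch assuming a plain FFT-based estimate; it is not the measurement chain used in the study.

```python
import numpy as np

def infra_sub_fraction(audio, fs, cutoff_hz=50.0):
    """Fraction of signal energy below cutoff_hz (the 'Infra-sub' band
    in the paper's terminology). Illustrative FFT-based estimate."""
    spec = np.abs(np.fft.rfft(audio)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    return spec[freqs < cutoff_hz].sum() / spec.sum()

# Usage: a mix of equal-amplitude 30 Hz and 400 Hz tones
fs = 2000
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 400 * t)
frac = infra_sub_fraction(mix, fs)  # roughly half the energy is Infra-sub
```

A practical analyser would window the signal and average over frames rather than take a single full-length FFT.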
... A possible breeding ground for multisensory integration is robust structural correlations between stimulus attributes of different sensory modalities. Vibrotactile and acoustic stimuli are often structurally correlated, because bass frequencies directly affect the quality of the music experience via whole-body vibration at high sound levels [8]; imagine concerts of pop/rock music or the sound of low organ pipes. It has long been known that vibrotactile and auditory perception share important qualitative features, such as frequency-dependent detection thresholds, frequency-based masking, energy integration over time, and temporal gap detection [9]. ...
Article
Full-text available
Sound is sensed by the ear but can also be felt on the skin, by means of vibrotactile stimulation. Only little research has addressed perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether the spatial segregation of vibrations improves vibrotactile stimulation. Forty-one listeners were presented with vibrotactile stimuli via a chair’s surfaces (left and right arm rests, back rest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly by a mono-rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi- and mono vibration conditions robustly enhanced the nature of the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of ‘musical engagement’, encompassing the sense of being a part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intensive musical experiences.
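The study's two congruent conditions can be illustrated with a small routing function: "multi" drives each chair surface from its own instrument track, while "mono" drives all four surfaces from the summed mix. The track-to-surface mapping below is an assumption for illustration, not the study's actual assignment.

```python
import numpy as np

def render_vibrations(tracks, mode="multi"):
    """Route vibration signals to a chair's four surfaces.
    'multi': one instrument track per surface.
    'mono' : the averaged mix on every surface."""
    tracks = np.asarray(tracks)            # shape: (4, n_samples)
    if mode == "multi":
        return tracks
    mono = tracks.sum(axis=0) / len(tracks)
    return np.tile(mono, (4, 1))           # same signal everywhere

# Usage: four synthetic "instrument" tracks at different frequencies
fs = 1000
t = np.arange(fs) / fs
tracks = [np.sin(2 * np.pi * f * t) for f in (50, 80, 120, 200)]
multi = render_vibrations(tracks, "multi")
mono = render_vibrations(tracks, "mono")
```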
... One possible breeding ground for cross-modal correspondences is robust structural correlations between stimulus attributes of different sensory modalities. Vibrotactile and acoustic stimuli are often structurally correlated, because bass frequencies directly affect the quality of the music experience via whole-body vibration at high sound levels [7]; imagine concerts of pop/rock music or the sound of low organ pipes. It has long been known that vibrotactile and auditory perception share important qualitative features, such as frequency-dependent detection thresholds, frequency-based masking, energy integration over time, and temporal gap detection [8]. ...
Preprint
Full-text available
Sound is sensed by the ear but can also be felt on the skin, by means of vibrotactile stimulation. Although previous studies have addressed audio-tactile perception for music, the underlying perceptual dimensions of vibrotactile enhancement of music listening remain poorly understood. Here, 41 listeners were presented with vibrotactile stimuli via a chair’s four surfaces (left and right arm rests, back rest, seat surface) in addition to music presented over headphones. Vibrations were driven by different tracks of the music for each surface (multi condition) or conjointly by a mono-rendering (mono condition), in addition to incongruent and headphones-only conditions. Listeners evaluated excerpts of popular music on six scales, probing valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi- and mono vibration conditions robustly enhanced the nature of the musical experience compared to headphones alone, but did not differ significantly from each other. They particularly enhanced the key latent dimension of ‘musical engagement’, encompassing the sense of being a part of the music, arousal, and groove. These findings highlight the specific role of vibrotactile cues in creating intensive musical experiences.
... Furthermore, high-level low-frequency pure tones can cause human body surface vibration which can be felt in the chest and the abdomen (Takahashi et al. 2005). Vibrations are an extremely important aspect of live musical experiences: they play an important role in both the perception and the enjoyment of music (Merchel and Altinsoy 2014). In fact, evidence suggests that listening to recorded music without vibrotactile feedback is less perceptually rich and immersive (Ideguchi and Muranaka 2007). ...
Article
Full-text available
Today, some of the most widely attended concerts are in virtual reality (VR). For example, the videogame Fortnite recently attracted 12.3 million viewers sitting in homes all over the world to a VR Travis Scott rap concert. As such VR concerts become increasingly ubiquitous, we are presented with an opportunity to design more immersive virtual experiences by augmenting VR with other multisensory technologies. Given that sound is a multi-modal phenomenon that can be experienced sonically and vibrationally, we investigated the importance of haptic feedback to musical experiences using a combination of qualitative and empirical methodologies. Study 1 was a qualitative study demonstrating that, unlike their live counterparts, current VR concerts make it harder for audiences to form a connection with artists and their music. Furthermore, VR concerts lack multisensory feedback and are perceived as less authentic than live concert experiences. Participants also identified a variety of different kinds of touch that they receive at live concerts and suggested that ideal VR concerts would replicate physical touch and thermal feedback from the audience, emotional touch, and vibrations from the music. Specifically, users advocated for the use of haptic devices to increase the immersiveness of VR concert experiences. Study 2 isolated the role of touch in the music listening experience and empirically investigated the impact of haptic music players (HMPs) on the audio-only listening experience. An empirical, between-subjects study was run with participants either receiving vibrotactile feedback via an HMP (haptics condition) or no vibrotactile feedback (control) while listening to music. Results indicated that listening to music while receiving vibrotactile feedback increased participants’ sense of empathy, parasocial bond, and loyalty towards the artist, while also decreasing participants’ feelings of loneliness. The connection between haptics condition and these dependent variables was mediated by the feeling of social presence. Study 2 thus provides initial evidence that HMPs may be used to meet people’s need for connection, multisensory immersion, and complex forms of touch in VR concerts as identified in Study 1.
... Music perception is a further area that has not yet been fully explored for improving cochlear implant users' listening experience. Evidence suggests that vibration can influence the quality of live concert experiences (Merchel and Altinsoy, 2014) and the ability to synchronise dancing to music (Tranchant et al., 2017; Shibasaki et al., 2016). Tactile stimulation devices have also shown improvements in timbre and instrument discrimination (Russo et al., 2012). ...
Thesis
Hearing aid and cochlear implant users struggle to understand speech in noisy places, such as classrooms and busy workplaces, with their performance typically being significantly worse than for normal-hearing listeners. This thesis details development of two new methods for improvement of speech-in-noise performance outcomes. The first addresses shortcomings in current techniques for assessing speech-in-noise performance and the second proposes a new intervention to improve performance. Chapters 3 and 4 present modifications to a new electrophysiological assessment method, using the temporal response function (TRF), for prediction of speech-in-noise performance. The TRF offers information not provided by behavioural speech-in-noise measures (the gold standard for speech-in-noise research and clinical assessment), which may be used for automated intervention fitting and further analysis of the mechanisms of speech-in-noise performance. Alterations to methodology for applying the TRF are proposed, which may provide the groundwork for further development of the TRF as a method for assessing speech-in-noise performance. Chapters 5 and 6 investigate the efficacy of a new intervention to improve speech-in-noise performance in cochlear implant users by providing missing sound-information through tactile stimulation on the wrists. This section focuses on developing and testing initial prototype devices that could rapidly be adapted for real-world use. These prototypes represent the first step towards the realisation of a wearable device, with accompanying results demonstrating the potential for their use in improving speech-in-noise performance. This thesis highlights two techniques that could be further developed for assessing and enhancing speech-in-noise performance, and outlines future steps to be taken for the realisation and combination of these techniques for improved treatment of the hearing impaired.
... Tactile vibrations can enhance music listening experiences (Merchel and Altinsoy, 2014). This phenomenon might not surprise anyone who enjoys loud concerts that can elicit vibrations perceivable with the tactile system (Merchel and Altinsoy, 2013). ...
Article
Full-text available
Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal. Therefore, understanding which of these audio-tactile congruences are important is crucial. Participants were presented with a simple sine wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against the incongruent stimuli based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the biggest negative effect on ratings for the melody used. Future vibrotactile music-enhancement devices can use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
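A baseline coding strategy with time alignment and intensity congruence, as suggested above, can be sketched by modulating a fixed-frequency vibrotactile carrier with the audio's amplitude envelope. The 250 Hz carrier below is an assumed value for illustration, not one taken from the study.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_matched_vibration(audio, fs, carrier_hz=250.0):
    """Render a vibrotactile signal whose intensity tracks the audio:
    the Hilbert-magnitude envelope modulates a fixed carrier, keeping
    the two modalities time-aligned and intensity-congruent.
    carrier_hz is an illustrative assumption."""
    env = np.abs(hilbert(audio))           # amplitude envelope
    t = np.arange(len(audio)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

# Usage: a tone that ramps up in loudness yields a ramping vibration
fs = 4000
t = np.arange(fs) / fs
audio = np.linspace(0.0, 1.0, fs) * np.sin(2 * np.pi * 100 * t)
vib = envelope_matched_vibration(audio, fs)
```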
... The bodily listening experience significantly influences the perception of music and enjoyment of music performance. This kind of listening is provided to the audience through in-air sound vibrations that are perceived both on the surface of the body (Merchel and Altinsoy, 2014) and inside the body (Holmes, 2017). While embodied listening and felt musical experiences support hearing audiences' music performance appreciation, they play a significant role in D/HoH individuals' understanding of music performance. ...
Article
Full-text available
Inclusive musical instruments benefit from incorporating wearable interfaces into digital musical instrument design, creating opportunities for bodily felt experiences and movement-based interactions. In this article, we discuss the evolution of our inclusive design approach behind the design and performance practices of three wearable musical instruments. By focusing on the embodied, somatic, and tacit dimensions of movement-based musical interaction, we evaluate these case studies, combining third- and first-person perspectives. The design and implementation of the wearable sensing, utilizing additive manufacturing techniques, are discussed for each instrument and its performer in specific cases of musical expression. This article further discusses how our approach integrates music performance as a crucial step into design and evaluation, utilizing these performance practices and such collaborative settings for improved diversity and inclusion. Finally, we examine how our design approach evolves from user-centered design to more participatory practices, offering people with diverse abilities a shared music performance space.
... Vibration plays a significant role in the perception of music, as noted by Merchel and Altinsoy [51]; ...
Thesis
Full-text available
The purpose of this study is to investigate whether audience perception at large arena shows and music festivals can be improved by the addition of Infra-sub. Infra-sub refers to low-frequency audio content below 50 Hz, and its presence might help provide a more involving and engaging audience experience at lower sound pressure levels. Historical aspects of the development of sound reinforcement technology within the live music industry are outlined. Concerns about the environmental effects of loud music, its social impact and associated health implications, as well as the increase in legislation around its presentation, are all considered. A review of current knowledge regarding human perception of low-frequency sound, both aurally and through other forms of mechanosensation, is also presented. This thesis investigates whether an increase in low-frequency content (below 50 Hz), both in terms of magnitude and frequency range, affects a listener’s preferred listening level. The study was conducted in real-life situations at a number of European indoor arenas using a large-format line array system with low-frequency extension. The research shows that preferred listening levels were lower when low-frequency content (Infra-sub) was increased. The implication of this result is that increasing Infra-sub content allows the environmental impact of large arena concerts and music festivals to be reduced whilst maintaining a positive listening experience. The study had some limitations in sample size and range of participants. Future research will study the individual effects of range extension and increased sound pressure level at low frequencies, and their perception by the listener. Nevertheless, this study underlines the beneficial social, environmental and health implications of the use of Infra-sub, stemming from the overall reduction in sound pressure level at arena concerts and open-air music festivals.
... At the largest scale, these include systems used for delivering whole-body vibration, such as those used at Deaf Raves, where music containing a lot of lowfrequency energy is played at a high intensity. There is evidence that whole-body low-frequency vibration, which is also common during live pop or organ concerts, can play a significant role in the quality of the concert experience (Merchel and Altinsoy, 2014). There is also evidence that vibrating floors can improve the synchronization of dancing to music for hearing-impaired listeners (Shibasaki et al., 2016;Tranchant et al., 2017). ...
Article
Full-text available
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
... Mapping sound onto the tactile domain has been explored in a variety of projects for applications such as enhancing multi-sensory experiences in virtual reality (VR) environments [6] and improving the affective quality of digital touch [7], and as a means of sensory substitution for people with hearing loss [8]. It has been shown that the addition of full-body vibration to a musical auditory experience increases the perceived volume of music [9] and improves the overall quality of experience for listeners [10]. There is also evidence to show that people are capable of matching audible frequencies to those presented as haptic vibrations [11] and that frequency perception of both auditory and tactile stimulation occurs in the same region of the sensory cortex [12]. ...
Article
Full-text available
We present and evaluate the concept of FeelMusic and evaluate an implementation of it. It is an augmentation of music through the haptic translation of core musical elements. Music and touch are intrinsic modes of affective communication that are physically sensed. By projecting musical features such as rhythm and melody into the haptic domain, we can explore and enrich this embodied sensation; hence, we investigated audio-tactile mappings that successfully render emotive qualities. We began by investigating the affective qualities of vibrotactile stimuli through a psychophysical study with 20 participants using the circumplex model of affect. We found positive correlations between vibration frequency and arousal across participants, but correlations with valence were specific to the individual. We then developed novel FeelMusic mappings by translating key features of music samples and implementing them with “Pump-and-Vibe”, a wearable interface utilising fluidic actuation and vibration to generate dynamic haptic sensations. We conducted a preliminary investigation to evaluate the FeelMusic mappings by gathering 20 participants’ responses to the musical, tactile and combined stimuli, using valence ratings and descriptive words from Hevner’s adjective circle to measure affect. These mappings, and new tactile compositions, validated that FeelMusic interfaces have the potential to enrich musical experiences and be a means of affective communication in their own right. FeelMusic is a tangible realisation of the expression “feel the music”, enriching our musical experiences.
... For example, Lindeman et al. [6] designed and implemented a wearable suit made of multiple individually addressable vibrotactile actuators placed on the upper body of a user. Recently, Merchel and Altinsoy [7] used vertical vibrations to explore the influence of vibrotactile feedback on music perception and enjoyment. ...
Chapter
Full-text available
We introduce a novel experimental system to explore the role of vibrotactile haptic feedback in Virtual Reality (VR) to induce the self-motion illusion. Self-motion (also called vection) has been mostly studied through visual and auditory stimuli, and little is known about how the illusion can be modulated by the addition of vibrotactile feedback. Our study focuses on whole-body haptic feedback in which the vibration is dynamically generated from the sound signal of the Virtual Environment (VE). We performed a preliminary study and found that audio and haptic modalities generally increase the intensity of vection over a visual-only stimulus. We observe higher ratings of self-motion intensity when the vibrotactile stimulus is added to the virtual scene. We also analyzed data obtained with the igroup presence questionnaire (IPQ), which shows that haptic feedback has a generally positive effect on presence in the virtual environment, and conducted a qualitative survey that revealed interesting and often overlooked aspects, such as the implications of using a joystick to collect data in perception studies and the concept of vection in relation to people’s experience and cognitive interpretation of self-motion.
... The fields of music and haptics are tightly connected in numerous ways. Auditory-tactile experience of music, especially the multimodal perception of attending a concert, has been stressed [25,26]. Vibrotactile feedback can be added to restore intimacy in instrumental interaction with a DMI or to enable persons with hearing impairment to experience music [8]. ...
Conference Paper
Full-text available
This paper presents a study on the composition of haptic music for a multisensory installation and how composers could be aided by a preparatory workshop focusing on the perception of whole-body vibrations prior to such a composition task. Five students from a Master's program in Music Production were asked to create haptic music for the installation Sound Forest. The students were exposed to a set of different sounds producing whole-body vibrations through a wooden platform and asked to describe the perceived sensations for each sound. Results suggested that the workshop helped the composers successfully complete the composition task and that awareness of the haptic possibilities of the multisensory installation could be improved through training. Moreover, the sounds used as stimuli provided a relatively wide range of perceived sensations, ranging from pleasant to unpleasant. Considerable intra-subject differences motivate future large-scale studies on the perception of whole-body vibrations in artistic music practice.
... Consequently, the design of HFVK stimuli largely remains an artistic process (Petersen 2019), thereby limiting the optimization of HFVK design and the expansion of HFVK technology beyond mere entertainment. HFVK stimulation in AV contexts has been reported to increase the perceived loudness and physiologically arousing nature of music, and improve memorability of AV content (Merchel and Altinsoy 2014, Giroux et al 2019, Pauna et al 2019), clearly suggesting effects on cognition and brain activity, and the potential to measure these effects. Studies to identify brain areas and activities affected by HFVK stimulation during AV experiences could be a valuable first step in building a mechanism for how HFVK technology enhances AV immersion. ...
Article
Full-text available
Objective: High-fidelity vibrokinetic (HFVK) technology is widely used to enhance the immersiveness of audiovisual (AV) entertainment experiences. However, despite evidence that HFVK technology does subjectively enhance AV immersion, the underlying mechanism has not been clarified. Neurophysiological studies could provide important evidence to illuminate this mechanism, thereby benefiting HFVK stimulus design, and facilitating expansion of HFVK technology. Approach: We conducted a between-subjects (VK, N = 11; Control, N = 9) study to measure the effect of HFVK stimulation through an HFVK seat on EEG cortical activity during an AV cinematic experience. Subjective appreciation of the experience was assessed and incorporated into statistical models exploring the effects of HFVK stimulation across cortical brain areas. We separately analyzed alpha-band (8-12 Hz) and theta-band (5-7 Hz) activities as indices of engagement and sensory processing, respectively. We also performed theta-band (5-7 Hz) coherence analyses using cortical seed areas identified from the theta activity analysis. Main results: The right fusiform gyrus, inferotemporal gyrus, and supramarginal gyrus, known for emotion, AV-spatial, and vestibular processing, were identified as seeds from theta analyses. Coherence from these areas was uniformly enhanced in HFVK subjects in right motor areas, albeit predominantly in those who were appreciative. Meanwhile, compared to control subjects, HFVK subjects exhibited uniform interhemispheric decoherence with the left insula, which is important for self-processing. Significance: The results collectively point to sustained decoherence between sensory and self-processing as a potential mechanism for how HFVK increases immersion, but also suggest that coordination of emotional, spatial, and vestibular processing hubs with the motor system may be required for appreciation of the HFVK-enhanced experience.
Overall, this study offers the first ever demonstration that HFVK stimulation has a real and sustained effect on brain activity during a cinematic experience.
... Just think of the auditory and vibrotactile feedback of a button on a touch screen, or vibrotactile feedback of electronic music instruments, or bimodal devices for guidance of blind persons. For example, the authors developed and optimized systems for multimodal reproduction of music [64,65,67]. To this end, a vibration actuator was coupled to a surface in contact with the listener, e.g., an electrodynamic shaker mounted in a backpack, integrated in clothing or attached below a seat or floor. ...
Article
Full-text available
In this paper, the psychophysical abilities and limitations of the auditory and vibrotactile modality will be discussed. A direct comparison reveals similarities and differences. The knowledge of those is the basis for the design of perceptually optimized auditory-tactile human–machine interfaces or multimodal music applications. Literature data and own results for psychophysical characteristics are summarized. An overview of the absolute perception thresholds of both modalities is given. The main factors which influence these thresholds are discussed: age, energy integration, masking and adaptation. Subsequently, the differential sensitivity (discrimination of intensity, frequency, temporal aspects and location) for suprathreshold signals is compared.
... In the discrete haptic condition, the signal was delivered as brief vibrational pulses. The continuous haptic condition consisted of the bass frequency of the auditory signal, generated by dropping the soundscape by an octave, a technique successfully used to pair music and haptic vibrations for live concert performances (Merchel & Altinsoy, 2014). Discrete and continuous haptic vibrations have been used for simple reaction studies and are both shown to be good indicators of a needed response (Salzer et al., 2014). ...
Article
Introduction Alarm fatigue and medical alarm mismanagement reduces the quality of patient care and creates stressful work environments for clinicians. Here, the feasibility of a novel “pre-alarm” system that utilizes multisensory integration of auditory and haptic stimuli is examined as a possible solution. Methods Three vital signs (heart rate, blood pressure, and blood oxygenation) were represented by three musically distinct sounds that were combined into soundscapes and progressed through five pre-alarm zones (very low to very high). Three haptic conditions were tested with the auditory stimulus to determine the best combination of auditory and haptic stimulation. Qualitative data was collected through surveys and the NASA TLX index. Results Alterations in frequency and timbre were most effective at transmitting information regarding changing vital sign zones with comparatively higher accuracy and quicker reaction time (RT), p <.01. The addition of haptic stimuli to the auditory soundscape caused no significant decline in study participant accuracy or RT. However, two weeks after training, participants performed the tasks significantly faster (p <.001) and felt the alarm monitoring task was significantly less cognitively demanding (p <.01), compared to the unisensory condition. Participants also felt more confident in identifying changing vital signs with the addition of haptic stimuli. Discussion The current study demonstrates that multisensory signals do not diminish the perception of transmitted information and suggest efficient training benefits over unimodal signals. Multisensory training may be beneficial over time compared to unisensory training due to a stronger consolidation effect. The potential integration of haptic input with existing auditory alarm systems and training is supported.
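The octave-drop mapping referenced in the citation context above (deriving a continuous haptic signal by lowering the sound an octave) can be sketched in a few lines of Python. This is a hypothetical illustration using naive time-stretch resampling, which halves every frequency but also doubles the duration; a live system such as those cited would instead need a duration-preserving pitch shifter (e.g. a phase vocoder):

```python
import numpy as np

def octave_down(x: np.ndarray) -> np.ndarray:
    """Shift a signal down one octave by stretching it to twice its
    length: all frequencies are halved, but playback time doubles."""
    n = len(x)
    new_t = np.linspace(0, n - 1, 2 * n)
    return np.interp(new_t, np.arange(n), x)

def dominant_freq(x: np.ndarray, fs: int) -> float:
    """Frequency of the largest FFT magnitude bin."""
    spec = np.abs(np.fft.rfft(x))
    return float(np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)])

fs = 8000
t = np.arange(fs) / fs                # 1 s of audio
tone = np.sin(2 * np.pi * 440 * t)    # 440 Hz audio tone
vib = octave_down(tone)               # ~220 Hz vibration drive signal
```

Running this on a 440 Hz tone yields a drive signal whose spectral peak sits near 220 Hz, i.e. one octave lower and closer to the frequency range where whole-body vibration is readily felt.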
... The Emoti-Chair [7] converted acoustic signals directly into vibration by attaching eight voice coils to a chair. Merchel and Altinsoy [12] vibrated the entire chair and examined the change in the music experience. By individually controlling the sound and vibration to be played back, they clarified that vibration affects the music experience. ...
Chapter
There are two conventional methods to experience music: listening physically (live) and listening through digital media. However, there are differences in the quality of the music experience between these listening methods. To improve the quality of music experience and entertainment when listening through digital media, we developed LIVEJACKET, a jacket capable of vibrotactile presentation and music playback. By simultaneously presenting vibration and sound from 22 speakers attached to a jacket, we created the sensation of being enveloped in sound. Wearers feel like they are singing, which can improve the quality of the music experience. We defined five music listening methods, ran experiments, and conducted a questionnaire survey on the music experience. Based on the results, we found that the proposed system can provide a music experience that cannot be obtained by listening to music through traditional digital methods.
... In this chapter, algorithms are described that were developed and evaluated to improve music-driven vibration generation, taking into account the above questions and complaints. The content is based on several papers [3,27,28] and the dissertation of the first author with the title 'Auditory-Tactile Music Perception' [23] with kind permission from Shaker Verlag. ...
Chapter
Full-text available
We listen to music not only with our ears. The whole body is present in a concert hall, during a rock event, or while enjoying music reproduction at home. This chapter discusses the influence of audio-induced vibrations at the skin on musical experience. To this end, sound and body vibrations were controlled separately in several psychophysical experiments. The multimodal perception of the resulting concert quality is evaluated, and the effect of frequency, intensity, and temporal variation of the vibration signal is discussed. It is shown that vibrations play a significant role in the perception of music. Amplifying certain vibrations in a concert venue or music reproduction system can improve the music experience. Knowledge about the psychophysical similarities and differences of the auditory and tactile modality help to develop perceptually optimized algorithms to generate music-related vibrations. These vibrations can be reproduced, e.g., using electrodynamic exciters mounted to the floor or seat. It is discussed that frequency shifting and intensity compression are important approaches for vibration generation.
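The chapter above names frequency shifting and intensity compression as key approaches for generating music-driven vibrations. A minimal sketch of the compression step, assuming a simple power-law mapping of the normalised magnitude (the exponent 0.5 is an illustrative choice, not a value taken from the study):

```python
import numpy as np

def compress_intensity(x: np.ndarray, exponent: float = 0.5) -> np.ndarray:
    """Reduce the dynamic range of a vibration drive signal by raising
    its normalised magnitude to a power below 1: quiet passages are
    boosted relative to loud ones while the peak level is preserved."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x.copy()
    return np.sign(x) * (np.abs(x) / peak) ** exponent * peak

samples = np.array([0.0, 0.25, -0.25, 1.0])
compressed = compress_intensity(samples)   # → [0.0, 0.5, -0.5, 1.0]
```

Because the tactile dynamic range between perception threshold and discomfort is much narrower than the auditory one, such a compressive mapping keeps soft musical passages perceivable without making loud ones unpleasant.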
... Bass, the lowest frequency range, although one that we are relatively insensitive to, is still the foundation of musical experience. Perception in this area is complex, as vibration of the surface of the skin and body parts can augment the normal function of the ear in contributing to this unique part of musical sensation (1). Especially in popular music, bass instruments define the pulse of the music and provide a temporal structure for dancing (2). ...
Conference Paper
Full-text available
Absorption is a difficult quantity to measure. The standards based on reverberation chamber measurements are limited to those above 100 Hz. Those based on commercially available impedance tubes are limited to 63 Hz and above. The Microflown impedance gun offers a newer approach to the measurement of absorption coefficients, but is limited to 300-8000 Hz in its commercial form. An extension to the commercial instrumentation to lower the operational range to 40 Hz is proposed. Measurements have been undertaken of dissipative type absorbers and panel absorbers to determine if this method can be used in the 40-100 Hz range. Results are reported and discussed along with the uncertainty in the measurement.
... Perception in this area is complex as vibration of the surface of the skin and body parts augments the normal function of the ear in contributing to this unique part of musical sensation. [1] Especially in popular music, bass instruments define the pulse of the music and provide a temporal structure for dancing. [2] In rooms for classical music performance, Barron notes that bass rise, that is to say a reverberation time increase as frequency lowers from around 500Hz, has been seen as desirable or at least tolerated in auditoria, especially in the USA. ...
Thesis
Full-text available
Absorption is a difficult quantity to measure. The standards based on reverberation chamber measurements are limited to those above 100 Hz. Those based on commercially available impedance tubes are limited to 63 Hz and above. The Microflown impedance gun offers a newer approach to the measurement of absorption coefficients, but is limited to 300-8000 Hz in its commercial form. An extension to the commercial instrumentation to lower the operational range to 40 Hz is proposed. Measurements have been undertaken of dissipative type absorbers and panel absorbers to determine if this method can be used in the 40-100 Hz range. Results are reported and discussed along with the uncertainty in the measurement.
Conference Paper
Despite the fact that spatio-temporal patterns of vibration, characterized as rhythmic compositions of tactile content, have exhibited an ability to elicit specific emotional responses and enhance the emotion conveyed by music, limited research has explored their underlying mechanism in regulating emotional states within the pre-sleep context. Aiming to investigate whether synergistic spatio-temporal tactile displaying of music can facilitate relaxation before sleep, we developed 16 vibration patterns and an audio-tactile prototype for presenting an ambient experience in a pre-sleep scenario. The stress-reducing effects were further evaluated and compared via a user experiment. The results showed that the spatio-temporal tactile display of music significantly reduced stress and positively influenced users’ emotional states before sleep. Furthermore, our study highlights the therapeutic potential of incorporating quantitative and adjustable spatio-temporal parameters correlated with subjective psychophysical perceptions in the audio-tactile experience for stress management.
Article
Cochlear implant (CI) users often report being unsatisfied by music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that musical stimuli were given higher preference ratings by normal-hearing listeners when concurrent vibrotactile stimulation was congruent with the corresponding auditory signal in intensity and timing than when it was incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals related to intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100, based on preference. It was shown that almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users had no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile music enjoyment enhancement could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.
Conference Paper
Song signing is a method practiced by people who are d/Deaf and non-d/Deaf individuals to visually represent music and make music accessible through sign language and body movements. Although there is growing interest in song signing, there is a lack of understanding on what d/Deaf people value about song signing and how to make song signing productions that they would consider acceptable. We conducted semi-structured interviews with 12 d/Deaf participants to gain a deeper understanding of what they value in music and song signing. We then interviewed 14 song signers to understand their experiences and processes in creating song signing performances. From this study, we identify three complex, interrelated layers of the song signing creation process and discuss how they can be supported and completed to potentially bridge the cultural divide between the d/Deaf and non-d/Deaf audiences and guide more culturally responsive creation of music.
Thesis
Full-text available
How can vibrotactile stimuli be used to create a technology-mediated somatic learning experience? This question motivates this practice-based research, which explores how the Feldenkrais Method and cognate neuroscience research can be applied to technology design. Supported by somaesthetic philosophy, soma-based design theories, and a critical acknowledgement of the socially-inflected body, the research develops a systematic method grounded in first- and third-person accounts of embodied experience to inform the creation and evaluation of design of Haplós, a wearable, user-customisable, remote-controlled technology that plays methodically composed vibrotactile patterns on the skin in order to facilitate body awareness—the major outcome of this research and a significant contribution to soma-based creative work. The research also contributes to design theory and somatic practice by developing the notion of a somatic learning affordance, which emerged during course of the research and which describes the capacity of a material object to facilitate somatic learning. Two interdisciplinary collaborations involving Haplós contribute to additional fields and disciplines. In partnership with experimental psychologists, Haplós was used in a randomised controlled study that contributes to cognitive psychology by showing that vibrotactile compositions can reduce, with statistical significance, intrusive food-related thoughts. Haplós was also used in Bisensorial, an award-winning, collaboratively developed proof-of-concept of a neuroadaptive vibroacoustic therapeutic device that uses music and vibrotactile stimuli to induce desired mental states. Finally, this research contributes to cognitive science and embodied philosophy by advancing a neuroscientific understanding of vibrotactile somaesthetics, a novel extension of somaesthetic philosophy.
Article
Selecting an appropriate listening test design for concert hall research depends on several factors, including listening test method and participant critical-listening experience. Although expert listeners afford more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0–7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded similar results and reliability as the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.
Article
Full-text available
We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
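The MHC's translation of audio onto multiple discrete vibrotactile channels amounts to splitting the signal into adjacent frequency bands, one per voice coil. A minimal sketch using FFT masking, with purely illustrative band edges (the MHC's actual channel filters are not specified here):

```python
import numpy as np

def split_into_channels(x, fs, edges):
    """Split a signal into adjacent frequency bands via FFT masking,
    one band per vibrotactile actuator channel."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        out.append(np.fft.irfft(spec * mask, n=len(x)))
    return out

fs = 2000
t = np.arange(fs) / fs
music = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
channels = split_into_channels(music, fs, edges=[20, 80, 160, 320])
```

With a 50 Hz and a 200 Hz component in the input, the energy lands in the first and third channels respectively, so each actuator renders a distinct section of the music rather than the masked combined signal.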
Conference Paper
Full-text available
What if the traditional relationship between touch and music was essentially turned upside down, making the tactile sensation the aesthetic end? This paper presents a novel coupling of haptics technology and music, introducing the notion of tactile composition or aesthetic composition for the sense of touch. A system that facilitates the composition and perception of intricate, musically structured spatio-temporal patterns of vibration on the surface of the body is described. Relevant work from disciplines including sensory substitution, electronic musical instrument design, simulation design, entertainment technology, and visual music is considered. The psychophysical parameter space for our sense of touch is summarized and the building blocks of a compositional language for touch are explored. A series of concerts held for the skin and ears is described, as well as some of the lessons learned along the way. In conclusion, some potential evolutionary branches of tactile composition are posited.
Article
Full-text available
Difference thresholds for seated subjects exposed to whole-body vertical sinusoidal vibration have been determined at two vibration magnitudes [0.1 and 0.5 m/s² root mean square (r.m.s.)] and at two frequencies (5 and 20 Hz). For 12 subjects, difference thresholds were determined using the up-and-down transformed response method based on two-interval forced-choice tracking. At both frequencies, the difference thresholds increased by a factor of five when the magnitude of the vibration increased from 0.1 to 0.5 m/s² r.m.s. The median relative difference thresholds, Weber fractions (ΔI/I), expressed as percentages, were about 10% and did not differ significantly between the two vibration magnitudes or the two frequencies. It is concluded that for the conditions investigated the difference thresholds for whole-body vibration are approximately consistent with Weber's Law. A vibration magnitude will need to be reduced by more than about 10% for the change to be detectable by human subjects; vibration measurements will be required to detect reductions of less than 10%.
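The roughly 10% Weber fraction reported above yields a simple rule of thumb for whether a change in whole-body vibration magnitude will be detectable:

```python
def just_noticeable_change(magnitude: float, weber_fraction: float = 0.10) -> float:
    """Smallest detectable change under Weber's law (ΔI = k · I),
    with k ≈ 0.10 as the median value reported for whole-body vibration."""
    return weber_fraction * magnitude

# A 0.5 m/s² r.m.s. seat vibration must change by more than ~0.05 m/s²
# before seated subjects can reliably detect the difference.
print(just_noticeable_change(0.5))  # 0.05
```

The same proportional rule applies at both magnitudes and frequencies tested, which is what makes the thresholds "approximately consistent with Weber's Law".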
Article
Full-text available
A series of studies was carried out in which it was shown that the amount of time that must intervene between 2 events in order for S to report correctly which of these 2 events preceded the other is approximately 20 msec. This holds over all modalities studied. "Whereas the time between successive stimuli that is necessary for the stimuli to be perceived as successive rather than simultaneous may depend upon the particular sense modality employed, the temporal separation that is required for the judgment of perceived temporal order is much longer and is independent of the sense modality employed."
Article
Threshold shifts for the detection of vibrotactile test stimuli were determined as a function of the intensity of a masker. The masking stimulus, narrow‐band noise centered at 275 Hz, and the test stimulus, a 15‐, 50‐, 80‐, or 300‐Hz sinusoidal burst, were applied to the same site on the thenar eminence of the hand. The intensity of the masker was varied over a range of 0 to 52 dB SL. The results support the hypothesis that the detection of vibrotactile stimuli is mediated by at least two receptor systems which do not mask each other.
Article
Sound and vibrations are often perceived via the auditory and tactile senses simultaneously, e.g., in a car or train. During a rock concert, the body vibrates with the rhythm of the music. Even in a concert hall or a church, sound can excite vibrations in the ground or seats. These vibrations might not be perceived separately because they integrate with the other sensory modalities into one multi-modal perception. This paper discusses the relation between sound and vibration for frequencies up to 1 kHz in an opera house and a church. Therefore, the transfer function between sound pressure and acceleration was measured at different exemplary listening positions. A dodecahedron loudspeaker on the stage was used as a sound source. Accelerometers on the ground, seat and arm rest measured the resulting vibrations. It was found that vibrations were excited over a broad frequency range via airborne sound. The transfer function was measured using various sound pressure levels. Thereby, no dependence on level was found. The acceleration level at the seat corresponds approximately to the sound pressure level and is independent of the receiver position. Stronger differences were measured for vibrations on the ground.
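The transfer function between sound pressure and seat acceleration described above can be estimated from simultaneous recordings with the standard H1 method (cross-spectrum divided by auto-spectrum). A sketch on simulated data, assuming a frequency-independent gain purely to keep the example verifiable:

```python
import numpy as np
from scipy import signal

def transfer_function(p, a, fs, nperseg=1024):
    """H1 estimate of the pressure-to-acceleration transfer function:
    cross-spectrum S_pa divided by the auto-spectrum S_pp."""
    f, S_pa = signal.csd(p, a, fs=fs, nperseg=nperseg)
    _, S_pp = signal.welch(p, fs=fs, nperseg=nperseg)
    return f, S_pa / S_pp

fs = 2000
rng = np.random.default_rng(0)
p = rng.standard_normal(8 * fs)   # simulated sound pressure at the seat
a = 0.5 * p                       # acceleration: flat gain for illustration
f, H = transfer_function(p, a, fs)
```

For this flat-gain simulation, |H(f)| recovers 0.5 at every frequency; with real measurements, e.g. using a dodecahedron loudspeaker as the source, the magnitude would vary with the resonances of the seat and floor.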
Article
Producing a formal means for ranking the quality of concert halls requires that the subjective assessment by experts and listeners be matched to acoustic parameters. Using previous results from Ando and Beranek, the authors propose a statistical scheme to obtain a function that fits the objective experimental data from 17 performance halls, which vary in uses, sizes, building types, and acoustics. Of the many possible acoustic parameters available, it appears that most of them can be reduced to LEV (listener envelopment), RTmid (mid frequency reverberation time), and LFCE4 (early lateral energy fraction). The obtained model based on these parameters, allows for designing and improving performance spaces.
Article
It is currently assumed that the same frequency weightings, derived from studies of vibration discomfort, can be used to evaluate the severity of vibration at all vibration magnitudes from the threshold of vibration perception to the vibration magnitudes associated with risks to health. This experimental study determined equivalent comfort contours for the whole-body vibration of seated subjects over the frequency range 2–315 Hz in each of the three orthogonal axes (fore-and-aft, lateral and vertical). The contours were determined at vibration magnitudes from the threshold of perception to levels associated with severe discomfort and risks to health. At frequencies greater than 10 Hz, thresholds for the perception of vertical vibration were lower than thresholds for fore-and-aft and lateral vibration. At frequencies less than 4 Hz, thresholds for vertical vibration were higher than thresholds for fore-and-aft and lateral vibration. The rate of growth of sensation with increasing vibration magnitude was highly dependent on the frequency and axis of vibration. Consequently, the shapes of the equivalent comfort contours depended on vibration magnitude. At medium and high vibration magnitudes, the equivalent comfort contours were reasonably consistent with the frequency weightings for vibration discomfort in current standards (i.e. Wb and Wd). At low vibration magnitudes, the contours indicate that relative to lower frequencies the standards underestimate sensitivity at frequencies greater than about 30 Hz. The results imply that no single linear frequency weighting can provide accurate predictions of discomfort caused by a wide range of magnitudes of whole-body vibration.
Article
Oscillatory motions of handles, seats, and floors produce complex patterns of sensations in the body, with the detection of these motions dependent on the sensitivity of the body to the applied vibration. This study examined the effect of input location (the hand, the seat, and the foot) and vibration frequency (8–315 Hz at the hand and foot; 2–315 Hz at the seat) on absolute thresholds for the perception of vibration in each of three axes (fore-and-aft, lateral, and vertical). Perception thresholds were determined with 96 males aged 20–29 years divided into eight groups of 12 subjects; each group received vibration at either the hand, the seat, or the foot in one of the three axes (one group experienced both lateral and vertical vibration at the hand). A frequency dependence in the thresholds was apparent for each of the three directions at each of the three locations; U-shaped acceleration threshold contours at frequencies greater than 80 Hz suggest that the same psychophysical channel mediates high-frequency thresholds at the hand, the seat, and the foot. Among the nine axes, sensitivity was greatest for vertical vibration at the seat at frequencies between 8 and 80 Hz, whereas sensitivity was greatest for vertical vibration at the hand at frequencies greater than 100 Hz. Absolute thresholds for the perception of vibration at the hand, the seat, and the foot are not consistent with the relevant frequency weightings in current standards.
Article
The validity of a previously introduced psychophysical method of numerical magnitude balance is discussed. The method is applied to loudness scaling at low sound frequencies. The results are shown to agree with loudness‐matching data obtained in several investigations.
Article
Previous research has revealed the existence of perceptual mechanisms that compensate for slight temporal asynchronies between auditory and visual signals. We investigated whether temporal recalibration would also occur between auditory and tactile stimuli. Participants were exposed to streams of brief auditory and tactile stimuli presented in synchrony, or else with the auditory stimulus leading by 75ms. After the exposure phase, the participants made temporal order judgments regarding pairs of auditory and tactile events occurring at varying stimulus onset asynchronies. The results showed that the minimal interval necessary to correctly resolve audiotactile temporal order was larger after exposure to the desynchronized streams than after exposure to the synchronous streams. This suggests the existence of a mechanism to compensate for audiotactile asynchronies that results in a widening of the temporal window for multisensory integration.