Fig 4
Layout of the homing task. (A) Design of the polygons used in the homing task. (B) Example polygon depicting the homing error (the error between the start point and the actual end point of the participants' walk); H = Homing point, the point from which participants had to home to the starting point on their own; S = Start point; red dot = actual end point of participants' path; grey dashed line H-S = ideal homing segment. (C) Pre- to post-training comparison of homing errors of belt-wearing participants in the belt-on condition. For visualization, polygons were superimposed and mirrored and/or rotated so that all of them end up below the dashed line, which represents the optimal path from homing point (H) to starting point (S). An inward error can be observed in the position of the ellipse underneath the homing trajectory. (D) Effect of belt use (error in the belt-off minus the belt-on condition; positive numbers indicate a reduction of homing error) on homing error, comparing pre- to post-measurement for the belt-wearing and control groups. Error bars indicate SEM.
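For reference, the homing error in panel (B) is simply the distance between the start point S and the participant's actual end point, and panel (D) plots the belt-off minus belt-on difference. A minimal sketch of these two quantities (coordinates and values are illustrative, not data from the study):

```python
import math

def homing_error(start, end):
    """Euclidean distance between the start point S and the
    participant's actual end point (the red dot in panel B)."""
    return math.hypot(end[0] - start[0], end[1] - start[1])

# Illustrative coordinates in metres, not taken from the study.
S = (0.0, 0.0)               # start point
end_belt_on = (0.8, 0.5)     # actual end point in a belt-on trial
end_belt_off = (1.4, 0.9)    # actual end point in a belt-off trial

# Panel (D): belt-off minus belt-on error; positive values indicate
# that wearing the belt reduced the homing error.
effect_of_belt = homing_error(S, end_belt_off) - homing_error(S, end_belt_on)
print(round(effect_of_belt, 2))
```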

Source publication
Article
Full-text available
Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between own behavior and resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor con...

Citations

... Does the precision increase depend on the unusual visual cue that was used in the previous study? Previous results on the use and combination of new signals in navigation and sense-of-direction tasks are mixed (König et al., 2016; Nagel et al., 2005; Weisberg et al., 2018), so it is important to make sure that this previous result can be robustly replicated in variations of the basic task. As a whole, the study is designed to examine if perception and decision-making adapt to new sensory skills in key multisensory functions and mechanisms. ...
Article
Full-text available
It is clear that people can learn a new sensory skill: a new way of mapping sensory inputs onto world states. It remains unclear how flexibly a new sensory skill can become embedded in multisensory perception and decision-making. To address this, we trained typically sighted participants (N = 12) to use a new echo-like auditory cue to distance in a virtual world, together with a noisy visual cue. Using model-based analyses, we tested for key markers of efficient multisensory perception and decision-making with the new skill. We found that 12 of 14 participants learned to judge distance using the novel auditory cue. Their use of this new sensory skill showed three key features: (a) It enhanced the speed of timed decisions; (b) it largely resisted interference from a simultaneous digit span task; and (c) it integrated with vision in a Bayes-like manner to improve precision. We also show some limits following this relatively short training: Precision benefits were lower than the Bayes-optimal prediction, and there was no forced fusion of signals. We conclude that people already embed new sensory skills in flexible multisensory perception and decision-making after a short training period. A key application of these insights is to the development of sensory augmentation systems that can enhance human perceptual abilities in novel ways. The limitations we reveal (sub-optimality, lack of fusion) provide a foundation for further investigations of the limits of these abilities and their brain basis.
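The Bayes-optimal benchmark mentioned in this abstract is the standard reliability-weighted combination of two cues, in which the fused variance is never larger than that of the better single cue. A minimal sketch with made-up numbers (the means and variances below are illustrative, not values from the study):

```python
def bayes_optimal_fusion(mu_v, var_v, mu_a, var_a):
    """Reliability-weighted fusion of a visual and an auditory distance
    estimate; returns the combined estimate and its variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)    # weight on the visual cue
    w_a = 1 - w_v                                   # weight on the auditory cue
    mu_fused = w_v * mu_v + w_a * mu_a
    var_fused = (var_v * var_a) / (var_v + var_a)   # never exceeds the smaller variance
    return mu_fused, var_fused

# Illustrative values, not data from the study: noisy vision, sharper audio cue.
print(bayes_optimal_fusion(mu_v=3.0, var_v=0.50, mu_a=3.4, var_a=0.25))
```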
... Previous research on the feelSpace belt found that users reported an altered perception of space and a subjectively improved spatial navigation ability after wearing and training with the belt for an extended duration of six weeks [41]. Moreover, sleep-electroencephalography (EEG) has shown procedural learning to be increased during the first week of training with the device [43]. Further investigations using functional magnetic resonance imaging (fMRI) have shown differential activations in sensory and higher motor brain regions during a virtual path integration task [43]. ...
... Moreover, sleep-electroencephalography (EEG) has shown procedural learning to be increased during the first week of training with the device [43]. Further investigations using functional magnetic resonance imaging (fMRI) have shown differential activations in sensory and higher motor brain regions during a virtual path integration task [43]. Based on these subjective and physiological reports, the feelSpace belt appears to be a useful tool to study how humans attain a novel sense of cardinal directions. ...
... Following this line of research, we are interested in the behavioral effects induced by the feelSpace belt. König and colleagues [43] reported that no behavioral effect of the feelSpace belt could be found in a complex homing paradigm. A potential explanation is that the complex homing paradigm was conducted on a very small scale (19-22 m) [43] and could therefore have been solved by exclusively using the vestibular system [44,45]. ...
Article
Full-text available
Sensory augmentation provides novel opportunities to broaden our knowledge of human perception through external sensors that record and transmit information beyond natural perception. To assess whether such augmented senses affect the acquisition of spatial knowledge during navigation, we trained a group of 27 participants for six weeks with an augmented sense for cardinal directions called the feelSpace belt. Then, we recruited a control group that did not receive the augmented sense and the corresponding training. All 53 participants first explored the Westbrook virtual reality environment for two and a half hours spread over five sessions before assessing their spatial knowledge in four immersive virtual reality tasks measuring cardinal, route, and survey knowledge. We found that the belt group acquired significantly more accurate cardinal and survey knowledge, which was measured in pointing accuracy, distance, and rotation estimates. Interestingly, the augmented sense also positively affected route knowledge, although to a lesser degree. Finally, the belt group reported a significant increase in the use of spatial strategies after training, while the groups’ ratings were comparable at baseline. The results suggest that six weeks of training with the feelSpace belt led to improved survey and route knowledge acquisition. Moreover, the findings of our study could inform the development of assistive technologies for individuals with visual or navigational impairments, which may lead to enhanced navigation skills and quality of life.
... While vibro-tactile cues can be an effective replacement for visual alerts, they are not necessarily effective in terms of replacing visual direction cues. Tactile cues have been adopted quite frequently to direct navigation, e.g., using vests [242], gloves [410] or a belt [218]. 3D selection methods using tactile cues are studied for non-directional feedback [10] and directional feedback [265]. ...
Thesis
Full-text available
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
... Vibrotactile interfaces have also been used to alert users to an event or sudden change in task condition (Sklar and Sarter 1999;Ferris and Sarter 2011;Li et al. 2012). Although vibrotactile sensory interfaces have been used to train body movements by providing feedback of spatial orientation or by stimulating the moving limb in response to limb configuration, these prior studies have largely been technology demonstrations or feasibility case studies that investigated short-term use of this type of sensory augmentation (Lieberman and Breazeal 2007;Kapur et al. 2009;Weber et al. 2011;De Santis et al. 2014;Afzal et al. 2015; Bark et al. 2015;König et al. 2016). Cuppone et al. (2016) investigated proprioceptive learning of wrist movements in the absence of vision. ...
... Cues were only provided when errors exceeded certain limits; unfortunately, discrete on/off alarms are not well-suited for augmenting continuous (i.e., moment-by-moment) sensorimotor control of limb posture and movement. There have been very few examinations of the extent to which extended training with continuous, graded vibrotactile cues can facilitate learning of the sensorimotor relationships needed to establish novel closed-loop feedback control of a moving limb (see König et al. 2016 for a study of sensory augmentation training in the context of spatial navigation). ...
Article
Full-text available
Prior studies have shown that the accuracy and efficiency of reaching can be improved using novel sensory interfaces to apply task-specific vibrotactile feedback (VTF) during movement. However, those studies have typically evaluated performance after less than 1 h of training using VTF. Here, we tested the effects of extended training using a specific form of vibrotactile cues—supplemental kinesthetic VTF—on the accuracy and temporal efficiency of goal-directed reaching. Healthy young adults performed planar reaching with VTF encoding of the moving hand's instantaneous position, applied to the non-moving arm. We compared target capture errors and movement times before, during, and after approximately 10 h (20 sessions) of training on the VTF-guided reaching task. Initial performance of VTF-guided reaching showed that people were able to use supplemental VTF to improve reaching accuracy. Performance improvements were retained from one training session to the next. After 20 sessions of training, the accuracy and temporal efficiency of VTF-guided reaching were equivalent to or better than reaches performed with only proprioception. However, hand paths during VTF-guided reaching exhibited a persistent strategy where movements were decomposed into discrete sub-movements along the cardinal axes of the VTF display. We also used a dual-task condition to assess the extent to which performance gains in VTF-guided reaching resist dual-task interference. Dual-tasking capability improved over the 20 sessions, such that the primary VTF-guided reaching and a secondary choice reaction time task were performed with increasing concurrency. Thus, VTF-guided reaching is a learnable skill in young adults, who can achieve levels of accuracy and temporal efficiency equaling or exceeding those observed during movements guided only by proprioception. Future studies are warranted to explore learnability in older adults and patients with proprioceptive deficits, who might benefit from using wearable sensory augmentation technologies to enhance control of arm movements.
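The abstract does not spell out the encoding itself, so the following is only a plausible sketch of supplemental kinesthetic VTF: the hand-to-target error is split along the two cardinal axes of the display and mapped onto four tactor intensities on the non-moving arm, which also hints at why users might decompose movements axis by axis. All names and parameters are assumptions, not the published mapping.

```python
def encode_hand_position(hand, target, gain=1.0, max_intensity=1.0):
    """Map hand-to-target error onto four tactor intensities, one per
    cardinal direction of the VTF display (assumed encoding)."""
    dx = target[0] - hand[0]
    dy = target[1] - hand[1]
    clamp = lambda v: min(max_intensity, max(0.0, gain * v))
    return {
        "right": clamp(dx), "left": clamp(-dx),
        "up": clamp(dy), "down": clamp(-dy),
    }

# Illustrative call: the hand is below and to the left of the target,
# so only the "right" and "up" tactors vibrate.
print(encode_hand_position(hand=(0.10, 0.05), target=(0.25, 0.30)))
```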
... This, with the intent to make formerly inaccessible signals accessible for further processing using existing pathways through which sensory information is accessed (Deroy and Auvray, 2012). Examples of sensory augmentation devices include the feelSpace belt that translates magnetic north sensor data to vibration motors worn around the waist (König et al., 2016), Neil Harbisson's device enabling the detection of colors beyond the normal human vision wavelengths (Leenes et al., 2017) and the "X-Ray Vision" device that enables users to recognize objects through walls (Avery et al., 2009;Raisamo et al., 2019). ...
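As a rough illustration of the heading-to-vibration mapping such a belt performs, the sketch below picks the waist motor closest to magnetic north from the wearer's compass heading; the motor count and indexing convention are assumptions, not the device's actual specification.

```python
def northward_motor(heading_deg, n_motors=16):
    """Return the index of the waist motor closest to magnetic north,
    given the wearer's compass heading in degrees (0 = facing north).
    Motor 0 is assumed to sit at the navel, indices increasing clockwise."""
    # Relative to the body midline, north lies at -heading.
    bearing_to_north = (-heading_deg) % 360
    return round(bearing_to_north / (360 / n_motors)) % n_motors

print(northward_motor(0))    # facing north -> front motor (index 0)
print(northward_motor(90))   # facing east  -> a motor on the left side of the waist
```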
Article
Full-text available
Deaf and hearing people can encounter challenges when communicating with one another in everyday situations. Although problems in verbal communication are often seen as the main cause, such challenges may also result from sensory differences between deaf and hearing people and their impact on individual understandings of the world. That is, challenges arising from a sensory gap. Proposals for innovative communication technologies to address this have been met with criticism by the deaf community. They are mostly designed to enhance deaf people's understanding of the verbal cues that hearing people rely on, but omit many critical sensory signals that deaf people rely on to understand (others in) their environment and to which hearing people are not tuned. In this perspective paper, sensory augmentation, i.e., technologically extending people's sensory capabilities, is put forward as a way to bridge this sensory gap: (1) by tuning to the signals deaf people rely on more strongly but are commonly missed by hearing people, and vice versa, and (2) by sensory augmentations that enable deaf and hearing people to sense signals that neither person is able to normally sense. Usability and user-acceptance challenges, however, lie ahead of realizing the alleged potential of sensory augmentation for bridging the sensory gap between deaf and hearing people. Addressing these requires a novel approach to how such technologies are designed. We contend this requires a situated design approach.
... The assumption that mastery of sensorimotor contingencies can be acquired via the observer's continuous interaction with the environment has been a basis for sensory rehabilitation and for empirical tests of the theory itself (e.g., Auvray et al., 2007;Auvray & Myin, 2009;Auvray, Philipona, et al., 2007;Bermejo et al., 2015;Blackmore, 2001;Bompas & O'Regan, 2006a, 2006b;Froese et al., 2012;Kärcher et al., 2012;König et al., 2016;Lenay & Steiner, 2010;Lenay & Stewart, 2012;Nagel et al., 2005;O'Regan & Noë, 2001, p. 1020). Exposure to new sensorimotor contingencies should lead the perceiver to experience a novel perceptual feel, provided that the user masters the novel sensorimotor contingencies, either spontaneously or through learning (Hurley & Noë, 2003;Kaspar et al., 2014;König et al., 2016;Myin & Degenaar, 2014;Nagel et al., 2005;Noë, 2004;O'Regan, 2011). ...
... The assumption that mastery of sensorimotor contingencies can be acquired via the observer's continuous interaction with the environment has been a basis for sensory rehabilitation and for empirical tests of the theory itself (e.g., Auvray et al., 2007;Auvray & Myin, 2009;Auvray, Philipona, et al., 2007;Bermejo et al., 2015;Blackmore, 2001;Bompas & O'Regan, 2006a, 2006b;Froese et al., 2012;Kärcher et al., 2012;König et al., 2016;Lenay & Steiner, 2010;Lenay & Stewart, 2012;Nagel et al., 2005;O'Regan & Noë, 2001, p. 1020). Exposure to new sensorimotor contingencies should lead the perceiver to experience a novel perceptual feel, provided that the user masters the novel sensorimotor contingencies, either spontaneously or through learning (Hurley & Noë, 2003;Kaspar et al., 2014;König et al., 2016;Myin & Degenaar, 2014;Nagel et al., 2005;Noë, 2004;O'Regan, 2011). Thus, a central prediction of sensorimotor theory is the potential to experientially expand the sensory apparatus. ...
... Similarly, beyond substitution, sensory augmentation research suggests that a perceiver may also develop novel kinds of perceptual feel from artificial sensorimotor contingencies provided by technical sensors that convey information not available to the natural apparatus, such as the direction of geomagnetic north obtained by a magnetic compass (e.g., Kaspar et al., 2014;König et al., 2016;Nagel et al., 2005;Schumann & O'Regan, 2017). ...
Article
Full-text available
This study investigated the potential for the development of novel perceptual experiences through sustained training with a sensory augmentation device. We developed (1) a new geomagnetic sensory augmentation device, the NaviEar, and (2) a battery of tests for automaticity in the use of the device. The NaviEar translates head direction toward north into continuous sound according to a “wind coding” principle. To facilitate automatization of use, its design is informed by considerations of the embodiment of spatial orientation and multi-sensory integration, and it uses a sensory coding scheme derived from means for auditory perception of wind direction that is common in sailing because it is easy to understand and use. The test battery assesses different effects of automaticity (interference, rigidity of responses, and dynamic integration) assuming that automaticity is a necessary criterion to show the emergence of perceptual feel, that is, an augmented experience with perceptual phenomenal quality. We measured performance in simple training tasks, administered the tests for automaticity, and assessed subjective reports through a questionnaire. Results suggest that the NaviEar is easy and comfortable to use and has a potential for applications in real-world situations. Despite high usability, however, a 5-day training with the NaviEar did not reach levels of automaticity that are indicative of perceptual feel. We propose that the test battery for automaticity may be used as a benchmark test for iterative research on perceptual experiences in sensory augmentation and sensory substitution.
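The abstract does not detail the wind-coding scheme, so the following is only an assumed illustration of the general idea: north is treated as a fixed "wind" source, and a continuous sound is panned and attenuated according to the angle between head direction and north. All parameters are invented for the sketch.

```python
import math

def wind_coding(head_to_north_deg, base_level=0.6):
    """Map the signed angle from the head's facing direction to north
    (positive = north is to the right) onto a simple stereo wind-like cue:
    interaural balance follows the bearing, overall level peaks when
    facing north. Purely illustrative, not the NaviEar's actual scheme."""
    theta = math.radians(head_to_north_deg)
    pan = math.sin(theta)                            # -1 = from the left, +1 = from the right
    level = base_level * (1 + math.cos(theta)) / 2   # loudest when facing north
    left = level * (1 - pan) / 2
    right = level * (1 + pan) / 2
    return left, right

print(wind_coding(0))    # facing north: balanced and loudest
print(wind_coding(90))   # north 90 degrees to the right: quieter cue, right ear only
```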
... Exploration is also crucial for educating attention (Gibson and Rader, 1979;Gibson, 1977;Szokolszky et al., 2019) and making sense of the output of an SSD (Froese et al., 2012;Froese and Ortiz-Garin, 2020). So, diminished exploratory activity might result in poor development of sensorimotor contingencies and a lack of SSD-supported adaptive behaviours (Kaspar et al., 2014;König et al., 2016;Lenay and Steiner, 2010). This in turn creates a recurrent motivational gap: if a user feels unsafe, inefficient, and constrained by the device and is disappointed with the SSD-mediated sensory experience (which does not sufficiently resemble their expectations concerning vision), their willingness to use the device for everyday activities decreases. ...
Article
Sensory substitution is thought to be a promising non-invasive assistive technology for people with complete loss of sight because it provides inaccessible visual information via a preserved modality. However, Sensory Substitution Devices (SSDs) are still rarely used by visually impaired persons, possibly due to a lack of structured and supervised training that could be offered alongside these devices. Here, we developed and evaluated a training program that supports the usage of a recently developed colour-to-sound SSD – the Colorophone. Following our recently proposed theoretical model of SSD development, we propose that this training should help people with complete loss of sight to learn how to efficiently use the device by developing relationships between the components of the user-environment-technology system. We applied systematic case studies combined with a mixed-method approach to evaluate the efficacy of this SSD training program. Five blind users underwent ca. 22 hours of training, divided into four main parts: identification of the users’ individual characteristics and adaptations; sensorimotor training with the device; semi-structured explorations with the device; and evaluation of the training. We demonstrated that this training allows users to successfully acquire a set of skills (i.e., master the sensorimotor contingencies required by the device, develop visual-like perceptual skills, as well as learn about colours) and progress along developmental trajectories (e.g., switch from serial to parallel information processing, recognize more complex colours, increase environment and task complexity). Importantly, we identified individual differences in learning strategies (i.e., sensorimotor vs. metacognitive strategy) that had an impact on the users’ training progress and required the training assistants (TAs) to apply different assistive strategies. Additionally, we described the crucial role of a (non-professional) training assistant in the training progress: this person facilitates the development of relationships between elements of the user-environment-technology system by supporting a metacognitive learning strategy, thereby reducing the risk of abandonment of the SSD. Our study shows the importance for SSD development of well-designed, tailored training, and it provides new insights into the process of SSD-related perceptual learning.
... For this approach to work, the system's user must learn a novel mapping between the state of the moving limb and changes in the synthesized feedback delivered by the sensory interface, and then use that map to guide subsequent actions [cf. König et al. (2016)]. Preliminary studies have demonstrated that within minutes of initial exposure to a novel sensory interface, neurologically intact individuals and some survivors of stroke can rapidly learn to use supplemental feedback of limb configuration and movement to improve the accuracy of reaching actions performed without visual guidance (Tzorakoleftherakis et al. 2015, 2016;Krueger et al. 2017;Risi et al. 2019;Ballardini et al. 2021). ...
... Vibrotactile sensory interfaces show particular promise for enhancing postural stabilization in vestibular patients and for providing information about grasp force and hand aperture to users of myoelectric forearm prostheses (Sienko et al. 2008;Lee et al. 2011, 2012;Witteveen et al. 2015). While others have also proposed vibrotactile sensory interfaces to train body motions by providing feedback of spatial orientation or by stimulating the moving limb in response to limb configuration, few have attempted to mitigate the effects of somatosensory impairment or loss on the control of contralesional arm and hand movements after neuromotor injury (Lieberman and Breazeal 2007;Kapur et al. 2009;Weber et al. 2011;Bark et al. 2015;König et al. 2016). Those that have attempted this were largely technology demonstrations or feasibility case studies (De Santis et al. 2014;Afzal et al. 2015;Hussain et al. 2015;Elangovan et al. 2019;Ballardini et al. 2021). ...
... Those that have attempted this were largely technology demonstrations or feasibility case studies (De Santis et al. 2014;Afzal et al. 2015;Hussain et al. 2015;Elangovan et al. 2019;Ballardini et al. 2021). None have examined the extent to which extended practice with such technology can facilitate learning of the novel sensorimotor relationships needed to establish novel closed-loop feedback control of a moving limb [but see König et al. (2016) for a training study of sensory augmentation in the context of spatial navigation]. ...
Preprint
Full-text available
Prior studies have shown that providing task-specific vibrotactile feedback (VTF) during reaching and stabilizing with the arm can immediately improve accuracy and efficiency. However, such studies typically evaluate performance after less than 1 hour of practice using VTF. Here we tested the effects of extended practice using supplemental kinesthetic VTF on goal-directed reaching with the arm. Healthy young adults performed a primary reaching task and a secondary choice reaction task individually and as a dual-task. The reaching task was performed under three feedback conditions: visual feedback, proprioceptive feedback, and with supplemental kinesthetic VTF applied to the non-moving arm. We compared performances before, during, and after approximately 10 hours of practice on the VTF-guided reaching task, distributed across 20 practice sessions. Upon initial exposure to VTF-guided reaching, participants were immediately able to use the VTF to improve reaching accuracy. Performance improvements were retained from one practice session to the next. After 10 hours of practice, the accuracy and temporal efficiency of VTF-guided reaching were equivalent to or better than reaching performed without vision or VTF. However, hand paths during VTF-guided reaching exhibited a persistent strategy whereby movements were decomposed into discrete sub-movements along the cardinal axes of the VTF interface. Dual-tasking capability also improved, such that the primary and secondary tasks were performed more concurrently after extended practice. Our results demonstrate that extended practice on VTF-guided reaching can yield performance improvements that accrue in a manner increasingly resistant to dual-task interference.
... Finally, even completely new senses can be best acquired by learning new sensorimotor contingencies through interaction. Using sensory augmentation devices allows for learning new SMCs and shows changes in the person's perception as well as in the representations in the brain (Kaspar et al., 2014;Kieliba et al., 2021;König et al., 2016;Nagel et al., 2005). ...
Thesis
Full-text available
The fields of biologically inspired artificial intelligence, neuroscience, and psychology have had exciting influences on each other over the past decades. Especially recently, with the increased popularity and success of artificial neural networks (ANNs), ANNs have enjoyed frequent use as models for brain function. However, there are still many disparities between the implementation, algorithms, and learning environment used for deep learning and those employed by the brain, which is reflected in their differing abilities. I first briefly introduce ANNs and survey the differences and similarities between them and the brain. I then make a case for designing the learning environment of ANNs to be more similar to that in which brains learn, namely by allowing them to actively interact with the world and decreasing the amount of external supervision. To implement this sensorimotor learning in an artificial agent, I use deep reinforcement learning, which I will also briefly introduce and compare to learning in the brain. In the research presented in this dissertation, I focus on testing the hypothesis that the learning environment matters and that learning in an embodied way leads to acquiring different representations of the world. We first tested this on human subjects, comparing spatial knowledge acquisition in virtual reality to learning from an interactive map. The corresponding two publications are complemented by a methods paper describing eye tracking in virtual reality as a helpful tool in this type of research. After demonstrating that subjects do indeed learn different spatial knowledge in the two conditions, we test whether this transfers to artificial agents. Two further publications show that an ANN learning through interaction learns significantly different representations of the sensory input than ANNs that learn without interaction. We also demonstrate that through end-to-end sensorimotor learning, an ANN can learn visually-guided motor control and navigation behavior in a complex 3D maze environment without any external supervision using curiosity as an intrinsic reward signal. The learned representations are sparse, encode meaningful, action-oriented information about the environment, and can perform few-shot object recognition despite not knowing any labeled data beforehand. Overall, I make a case for increasing the realism of the computational tasks ANNs need to solve (largely self-supervised, sensorimotor learning) to improve some of their shortcomings and make them better models of the brain.
... Among the most relevant, a previous study reported that SA coupled with navigation training was associated with changes in brain activity in the sensorimotor and navigation (hippocampus, caudate) brain regions [23]. However, it is unknown whether brain-related changes occur after balance training with SA. ...
... with SA devices, including sensory reweighting and context-specific adaptation [14,22]. Among the most relevant, a previous study reported that SA coupled with navigation training was associated with changes in brain activity in the sensorimotor and navigation (hippocampus, caudate) brain regions [23]. However, it is unknown whether brain-related changes occur after balance training with SA. ...
Article
Full-text available
Vibrotactile sensory augmentation (SA) decreases postural sway during real-time use; however, limited studies have investigated the long-term effects of training with SA. This study assessed the retention effects of long-term balance training with and without vibrotactile SA among community-dwelling healthy older adults, and explored brain-related changes due to training with SA. Sixteen participants were randomly assigned to the experimental group (EG) or control group (CG), and trained in their homes for eight weeks using smart-phone balance trainers. The EG received vibrotactile SA. Balance performance was assessed before, and one week, one month, and six months after training. Functional MRI (fMRI) was recorded before and one week after training for four participants who received vestibular stimulation. Both groups demonstrated significant improvement of SOT composite and MiniBESTest scores, and increased vestibular reliance. Only the EG maintained a minimal detectable change of 8 points in SOT scores six months post-training and greater improvements than the CG in MiniBESTest scores one month post-training. The fMRI results revealed a shift from activation in the vestibular cortex pre-training to increased activity in the brainstem and cerebellum post-training. These findings showed that additional balance improvements were maintained for up to six months post-training with vibrotactile SA for community-dwelling healthy older adults.