The virtual environment. The top panel shows a first-person view of one end of the environment. The bottom panel shows a top-down view of the environment. The starting position is the red and white bull's-eye and the two black rectangles represent support columns that were present in the real and virtual environments.  

Source publication
Article
Full-text available
Three experiments examine how the peripheral visual field (PVF) mediates the development of spatial representations. In Experiment 1 participants learned and were tested on statue locations in a virtual environment while their field-of-view (FOV) was restricted to 40°, 20°, 10°, or 0° (diameter). As FOV decreased, overall...
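The FOV restriction described in the abstract can be approximated in software as a circular mask over the rendered frame. The sketch below is only an illustration under an assumed pinhole projection (the `screen_fov_deg` parameter and image dimensions are hypothetical; the study itself used a head-mounted display whose optics are not described here):

```python
import numpy as np

def fov_mask(width, height, fov_deg, screen_fov_deg=90.0):
    """Boolean mask that is True inside a circular field of view.

    Assumes a simple pinhole projection in which the full screen
    width subtends screen_fov_deg of visual angle. fov_deg is a
    diameter, matching the 40/20/10/0 degree conditions above.
    """
    # focal length in pixels under the assumed projection
    f = (width / 2) / np.tan(np.radians(screen_fov_deg / 2))
    yy, xx = np.mgrid[0:height, 0:width]
    dx = xx - width / 2
    dy = yy - height / 2
    # eccentricity of each pixel in degrees of visual angle
    ecc = np.degrees(np.arctan(np.hypot(dx, dy) / f))
    return ecc <= fov_deg / 2

# Example: a 10-degree-diameter aperture on a 640x480 frame
mask = fov_mask(640, 480, fov_deg=10)
```

Pixels outside the mask would be blanked each frame, yielding the tunnel-vision conditions compared in Experiment 1.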

Context in source publication

Context 1
... immersive virtual replication of the laboratory (see Fig. 1) was created using 3D Studio Max software (Discreet, Montreal, Canada). The replication was used in order to give the participants a familiar sense of scale in the environment and to prevent the experimenter from having to interfere with the participant's walking patterns during the testing phase, as all of the walls and support ...

Similar publications

Article
Full-text available
The integration of the human brain with computers is an interesting new area of applied neuroscience, where one application is replacement of a person's real body by a virtual representation. Here we demonstrate that a virtual limb can be made to feel part of your body if appropriate multisensory correlations are provided. We report an illusion tha...
Article
Full-text available
The aim of the experiment was to study the adaptive capacities of children to perform drawing movements while being visually perturbed. Children aged 5-11 years and a group of adults drew diamonds via information provided through a computer screen. The screen display was either upright or rotated 180 degrees. Results showed that the absence of dire...
Article
Full-text available
It has been shown that people can learn to perform a variety of motor tasks in novel dynamic environments without visual feedback, highlighting the importance of proprioceptive feedback in motor learning. However, our results show that it is possible to learn a viscous curl force field without proprioceptive error to drive adaptation, by providing...
Article
Full-text available
The way in which the head is controlled in roll was investigated by dissociating the body axis and the gravito-inertial force orientation. Seated subjects (N = 8) were requested to align their head with their trunk, 30 degrees to the left, 30 degrees to the right or with the gravito-inertial vector, before, during (Per Rotation), after off-center r...
Article
Full-text available
It has been argued that when an observer moves, a contingent retinal-image motion of a stimulus would strengthen the perceived glossiness. This would be attributed to the veridical perception of three-dimensional structure by motion parallax. However, it has not been investigated whether the effect of motion parallax is more than that of retinal-im...

Citations

... The mental model of the required gaze movements to perceive objects on the blind side yielded a small variance within the sample and did not affect the scanning. In the past, researchers have found that spatial representation is affected by impairments of the peripheral field of view, resulting in more placement errors and a compression of space [63][64][65]. This could explain the increased difficulties of participants with HVFL to judge the spatial extent of their peripheral impairment. ...
Article
Full-text available
Objective It is currently still unknown why some drivers with visual field loss can compensate well for their visual impairment while others adopt ineffective strategies. This paper contributes to the methodological investigation of the associated top-down mechanisms and aims at validating a theoretical model on the requirements for successful compensation among drivers with homonymous visual field loss. Methods A driving simulator study was conducted with eight participants with homonymous visual field loss and eight participants with normal vision. Participants drove through an urban surrounding and experienced a baseline scenario and scenarios with visual precursors indicating increased likelihoods of crossing hazards. Novel measures for the assessment of the mental model of their visual abilities, the mental model of the driving scene and the perceived attention demand were developed and used to investigate the top-down mechanisms behind attention allocation and hazard avoidance. Results Participants with an overestimation of their visual field size tended to prioritize their seeing side over their blind side both in subjective and objective measures. The mental model of the driving scene showed close relations to the subjective and actual attention allocation. While participants with homonymous visual field loss were less anticipatory in their usage of the visual precursors and showed poorer performances compared to participants with normal vision, the results indicate a stronger reliance on top-down mechanism for drivers with visual impairments. A subjective focus on the seeing side or on near peripheries more frequently led to bad performances in terms of collisions with crossing cyclists. Conclusion The study yielded promising indicators for the potential of novel measures to elucidate top-down mechanisms in drivers with homonymous visual field loss. Furthermore, the results largely support the model of requirements for successful compensatory scanning. 
The findings highlight the importance of individualized interventions and driver assistance systems tailored to address these mechanisms.
... For example, many models of navigation emphasize visual landmarks (e.g., Chan et al. 2012; Chrastil and Warren 2015; Ekstrom 2015; Epstein and Vass 2014) and environmental geometry (Marchette et al. 2014; Mou and McNamara 2002) as providing frames of reference for spatial learning. Here, in addition to reduced acuity and contrast sensitivity, field of view should also play a role, as it should be more difficult to perceive the scale and shape of large-scale environmental geometry or encode global configurations when experienced in multiple restricted visual snapshots (Fortenbaugh et al. 2007, 2008; Kelly et al. 2008; Sturz et al. 2013). Importantly, landmark recognition, self-localization, and formation and use of long-term spatial knowledge all involve some amount of attentional resources (Lindberg and Gärling 1982), and low vision increases these attentional demands (Pigeon and Marin-Lamellet 2015). ...
... Whereas the study of local features has focused primarily on the interaction of vision with reduced acuity with surface, geometry, and lighting conditions, examination of global features has extended to simulated peripheral field loss. Reduced peripheral field of view impacts use of global features in spatial cognition in numerous ways, including distance estimation (Fortenbaugh et al. 2007, 2008), perception of global configurations of spatial layout (Yamamoto and Philbeck 2013), encoding and use of environmental geometry as a frame of reference (Kelly et al. 2008; Sturz et al. 2013), and increasing cognitive load (Barhorst-Cates et al. 2016). Legge et al. (2016a, b) measured the impact of low vision on both distance and direction estimates in a simple spatial updating task using a three-segment path completion task in seven different sized rooms (see Fig. 9). ...
... To explain these deficits in performance on spatial cognition tasks with simulated low vision, several studies have tested hypotheses related to perception (Fortenbaugh et al. 2007, 2008; Legge et al. 2016a, b; Rand et al. 2019), attentional demands (Rand et al. 2015), and environmental complexity. There is some support for perceptual distortions that could influence more global spatial tasks. ...
Article
Full-text available
People with visual impairment often rely on their residual vision when interacting with their spatial environments. The goal of visual accessibility is to design spaces that allow for safe travel for the large and growing population of people who have uncorrectable vision loss, enabling full participation in modern society. This paper defines the functional challenges in perception and spatial cognition with restricted visual information and reviews a body of empirical work on low vision perception of spaces on both local and global navigational scales. We evaluate how the results of this work can provide insights into the complex problem that architects face in the design of visually accessible spaces.
... There are a range of perceptual changes with reduced vision that may have impacted reorientation performance in our study, including impaired distance or shape estimates (Fortenbaugh et al., 2007, 2008) and impaired detection of environmental features (Bochsler et al., 2012, 2013). Because reorienting requires searching the environment for information, the slower response times may also point to challenges with performing visual search, which are increased when peripheral vision is limited (Senger et al., 2017). ...
... For instance, reduced FOV disrupts the ability to access a global spatial framework even of small-scale spatial layouts: more head and eye movements must be used to perceive the layout of objects when visual field is restricted (Yamamoto & Philbeck, 2013). One contribution to spatial memory error with peripheral field loss may be inaccurate distance perception (Fortenbaugh, Hicks, Hao, & Turano, 2007; Fortenbaugh, Hicks, & Turano, 2008; for the effects of severely blurred vision on distance estimates, see Rand, Barhorst-Cates, Kiris, Thompson, & Creem-Regehr, 2019). Distance perception and spatial memory with restricted FOV improve through active walking to targets as opposed to stationary viewing (Fortenbaugh et al., 2008). ...
... However, we did find the predicted and consistent effect of increased cognitive load with active search. Based on prior work examining the role of restricted peripheral field on spatial memory in room-size spaces (Fortenbaugh et al., 2007; Fortenbaugh et al., 2008; Yamamoto & Philbeck, 2013), we reasoned that the additional effort required to both search for targets and integrate multiple restricted viewpoints would reduce the cognitive resources available to accurately encode target locations. While the secondary auditory task results revealed the increased cognitive effort, we did not see the expected consequences on spatial memory. ...
Article
Spatial learning of real-world environments is impaired with severely restricted peripheral field of view (FOV). In prior research, the effects of restricted FOV on spatial learning have been studied using passive learning paradigms – learners walk along pre-defined paths and are told the location of targets to be remembered. Our research has shown that mobility demands and environmental complexity may contribute to impaired spatial learning with restricted FOV through attentional mechanisms. Here, we examine the role of active navigation, both in locomotion and in target search. First, we compared effects of active versus passive locomotion (walking with a physical guide versus being pushed in a wheelchair) on a task of pointing to remembered targets in participants with simulated 10° FOV. We found similar performance between active and passive locomotion conditions in both simpler (Experiment 1) and more complex (Experiment 2) spatial learning tasks. Experiment 3 required active search for named targets to remember while navigating, using both a mild and a severe FOV restriction. We observed no difference in pointing accuracy between the two FOV restrictions but an increase in attentional demands with severely restricted FOV. Experiment 4 compared active and passive search with severe FOV restriction, within subjects. We found no difference in pointing accuracy, but observed an increase in cognitive load in active versus passive search. Taken together, in the context of navigating with restricted FOV, neither locomotion method nor level of active search affected spatial learning. However, the greater cognitive demands could have counteracted the potential advantage of the active learning conditions.
... Prior work from our laboratory has shown that spatial learning during navigation is impaired with both simulated severely degraded acuity and contrast sensitivity (Rand, Creem-Regehr, & Thompson, 2015) and severely restricted peripheral FOV (Barhorst-Cates, Rand, & Creem-Regehr, 2016) in a real-world environment, and has attributed the deficit in learning partially to the attentional demands of monitoring to ensure safe mobility (Barhorst-Cates et al., 2016; Rand et al., 2015). Much of the prior work on low-vision spatial perception and navigation has used large, highly structured indoor hallways (Barhorst-Cates et al., 2016; Barhorst-Cates, Rand, & Creem-Regehr, 2017; Rand et al., 2015) or single-room environments (Fortenbaugh, Hicks, Hao, & Turano, 2007; Fortenbaugh, Hicks, & Turano, 2008; Legge, Gage, Baek, & Bochsler, 2016; Legge, Granquist, Baek, & Gage, 2016; Yamamoto & Philbeck, 2013). However, everyday navigation often occurs outside the context of straightforward hallways or rooms, and it is unknown how low vision affects spatial learning in more irregular spatial contexts. ...
... Room size judgments were impaired in the narrow FOV condition compared to normal vision, but spatial updating was not impaired with FOV restriction in either walking or wheelchair conditions. As such, a wide FOV does not seem necessary to complete simple spatial updating tasks, but it does affect other properties of environmental learning such as perceived scale of a room (Legge, Granquist, et al., 2016) and distance perception (Fortenbaugh et al., 2007, 2008). ...
... First, spatial learning with FOV restricted to 10° may be significantly impaired in the museum space because the environment cannot be viewed all at once. In more typically studied hallway or stationary-viewing environments, viewing with peripheral field loss requires increased head rotation and integration of multiple views, which results in worse performance (Barhorst-Cates et al., 2016; Fortenbaugh et al., 2007; Yamamoto & Philbeck, 2013). We suggest that navigation in a museum further increases these visual demands because of the open nature of the environment, affecting the overall intelligibility, or mutual visibility (Hillier, 2006), of the space. ...
Article
Full-text available
Background: Previous research has found that spatial learning while navigating in novel spaces is impaired with extremely restricted peripheral field of view (FOV) (remaining FOV of 4°, but not of 10°) in an indoor environment with long hallways and mostly orthogonal turns. Here we tested effects of restricted peripheral field on a similar real-world spatial learning task in an art museum, a more challenging environment for navigation because of valuable obstacles and unpredictable paths, in which participants were guided along paths through the museum and learned the locations of pieces of art. At the end of each path, participants pointed to the remembered landmarks. Throughout the spatial learning task, participants completed a concurrent auditory reaction time task to measure cognitive load. Results: Unlike the previous study in a typical hallway environment, spatial learning was impaired with a simulated 10° FOV compared to a wider 60° FOV, as indicated by greater average pointing error with restricted FOV. Reaction time to the secondary task also revealed slower responses, suggesting increased attentional demands. Conclusions: We suggest that the presence of a spatial learning deficit in the current experiment with this level of FOV restriction is due to the complex and unpredictable paths traveled in the museum environment. Our results also convey the importance of the study of low-vision spatial cognition in irregularly structured environments that are representative of many real-world settings, which may increase the difficulty of spatial learning while navigating.
... In visual-cognition research, central vision is generally considered to extend from 0° to 5° eccentricity (Hollingworth, Schrock, & Henderson, 2001; Rayner, 1998; Shimozaki, Chen, Abbey, & Eckstein, 2007; van Diepen, Wampers, & d'Ydewalle, 1998), including both the rod-free foveola from 0° to approximately 0.5°-1° eccentricity and the parafovea from approximately 0.5°-1° to 5° eccentricity, with everything beyond 5° being peripheral vision. Others in vision science consider central vision to extend from 0° to 10° eccentricity (Fortenbaugh, Hicks, Hao, & Turano, 2007; Schwartz, 2010), to the outer edge of the perifovea (Table 2), with peripheral vision beyond that edge. Most importantly, by any of these definitions the vast majority of the human visual field is in peripheral vision. ...
... Nevertheless, Larson and Loschky's results were produced using images of only 27° × 27° of visual angle. As noted earlier, some researchers consider central vision to span 0°-10°, with anything at ≥10° eccentricity being peripheral vision (Fortenbaugh et al., 2007; Schwartz, 2010). In that case, windows and scotomas of radius 10° would divide central and peripheral vision. ...
... Some retinal-neurophysiology researchers have defined peripheral vision as anything beyond the macula, namely >2.6°-3.6° (Quinn et al., 2019), while many visual-cognition researchers have defined it as anything beyond 5° eccentricity (Hollingworth et al., 2001; Larson & Loschky, 2009; Rayner, 1998; Shimozaki et al., 2007), and still other vision researchers have defined it as anything beyond 10° eccentricity (Fortenbaugh et al., 2007; Schwartz, 2010). We found that the critical radius producing equivalent performance in the window and scotoma conditions was 10°, or roughly the outer limit of the perifovea (Table 1). ...
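The competing central/peripheral boundaries quoted in these passages can be made explicit in a small helper. This is only an illustrative sketch; the 5° and 10° limits are the conventions named above, and the function itself is hypothetical:

```python
def classify_eccentricity(ecc_deg, central_limit_deg=5.0):
    """Label a retinal eccentricity (degrees) as central or peripheral.

    central_limit_deg = 5.0 follows the common visual-cognition
    convention; pass 10.0 for the convention that places the boundary
    at the outer edge of the perifovea.
    """
    if ecc_deg < 0:
        raise ValueError("eccentricity must be non-negative")
    return "central" if ecc_deg < central_limit_deg else "peripheral"

# The same 7-degree stimulus falls on opposite sides of the boundary
# depending on which convention is adopted:
classify_eccentricity(7.0)                          # 'peripheral'
classify_eccentricity(7.0, central_limit_deg=10.0)  # 'central'
```

The example makes concrete why, as the last passage notes, the choice of convention matters when interpreting window/scotoma radii near 10°.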
Article
Full-text available
We investigated the relative contributions of central versus peripheral vision in scene-gist recognition with panoramic 180° scenes. Experiment 1 used the window/scotoma paradigm of Larson and Loschky (2009). We replicated their findings that peripheral vision was more important for rapid scene categorization, while central vision was more efficient, but those effects were greatly magnified. For example, in comparing our critical radius (which produced equivalent performance with mutually exclusive central and peripheral image regions) to that of Larson and Loschky, our critical radius of 10° had a ratio of central to peripheral image area that was 10 times smaller. Importantly, we found different functional relationships between the radius of centrally versus peripherally presented imagery (or the proportion of centrally versus peripherally presented image area) and scene-categorization sensitivity. For central vision, stimulus discriminability was an inverse function of image radius, while for peripheral vision the relationship was essentially linear. In Experiment 2, we tested the photographic-bias hypothesis that the greater efficiency of central vision for rapid scene categorization was due to more diagnostic information in the center of photographs. We factorially compared the effects of the eccentricity from which imagery was sampled versus the eccentricity at which imagery was presented. The presentation eccentricity effect was roughly 3 times greater than the sampling eccentricity effect, showing that the central-vision efficiency advantage was primarily due to the greater sensitivity of central vision. We discuss our results in terms of the eccentricity-dependent neurophysiology of vision and discuss implications for computationally modeling rapid scene categorization.
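The window/scotoma manipulation described in this abstract amounts to a pair of complementary visibility masks over a per-pixel eccentricity map. The sketch below is a hypothetical illustration of that idea, not the authors' stimulus code:

```python
import numpy as np

def window_and_scotoma(ecc_map_deg, radius_deg):
    """Return complementary visibility masks for the two conditions.

    window:  only the central region (eccentricity <= radius) is visible.
    scotoma: only the peripheral region (eccentricity > radius) is visible.
    ecc_map_deg is a per-pixel eccentricity map in degrees of visual angle.
    """
    window = ecc_map_deg <= radius_deg
    return window, ~window

# Toy 1-D eccentricity profile from 0 to 20 degrees:
ecc = np.linspace(0.0, 20.0, 201)
win, scot = window_and_scotoma(ecc, radius_deg=10.0)
```

At the critical radius (10° in this study), the two conditions expose mutually exclusive image regions yet, per the abstract, yield equivalent scene-categorization performance.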
... However, it was important to validate that the model of visual impairment that we used, i.e., a restricted field of view within a VR helmet, was faithful. Different behavioral studies already showed that the model is reliable in spatial cognitive tasks [46,47] and spatial memory tasks [47,48]. Nevertheless, the model does not supplant an evaluation of the device with people with tunnel vision. ...
Article
Full-text available
The loss of peripheral vision is experienced by millions of people with glaucoma or retinitis pigmentosa, and has a major impact in everyday life, specifically to locate visual targets in the environment. In this study, we designed a wearable interface to render the location of specific targets with private and non-intrusive tactile cues. Three experimental studies were completed to design and evaluate the tactile code and the device. In the first study, four different tactile codes (single stimuli or trains of pulses rendered either in a Cartesian or a Polar coordinate system) were evaluated with a head pointing task. In the following studies, the most efficient code, trains of pulses with Cartesian coordinates, was used on a bracelet located on the wrist, and evaluated during a visual search task in a complex virtual environment. The second study included ten subjects with a simulated restrictive field of view (10°). The last study consisted of proof of a concept with one visually impaired subject with restricted peripheral vision due to glaucoma. The results show that the device significantly improved the visual search efficiency with a factor of three. Including object recognition algorithm to smart glass, the device could help to detect targets of interest either on demand or suggested by the device itself (e.g., potential obstacles), facilitating visual search, and more generally spatial awareness of the environment.
... Conversely, peripheral visual field loss excludes the use of covert visual attention, constraining affected individuals to increase their saccade rate to laboriously explore their environment. Affected individuals preserve functions related to the high spatial resolution of the residual central vision, such as face and small-object recognition, but exhibit impaired spatial orientation (Wittich et al. 2011) and scene perception (Fortenbaugh et al. 2007), altered postural control (Berencsi et al. 2005), and increased risk of object collision during locomotion (Turano et al. 1999, 2002) due to the limited coverage of the residual visual field. However, evidence supports a brain reorganisation consequent to the adjustment of these behaviours. ...
Article
Full-text available
Disorders that specifically affect central and peripheral vision constitute invaluable models to study how the human brain adapts to visual deafferentation. We explored cortical changes after the loss of central or peripheral vision. Cortical thickness (CoTks) and resting-state cortical entropy (rs-CoEn), as a surrogate for neural and synaptic complexity, were extracted in 12 Stargardt macular dystrophy, 12 retinitis pigmentosa (tunnel vision stage), and 14 normally sighted subjects. When compared to controls, both groups with visual loss exhibited decreased CoTks in dorsal area V3d. Peripheral visual field loss also showed a specific CoTks decrease in early visual cortex and ventral area V4, while central visual field loss in dorsal area V3A. Only central visual field loss exhibited increased CoEn in LO-2 area and FG1. Current results revealed biomarkers of brain plasticity within the dorsal and the ventral visual streams following central and peripheral visual field defects. Electronic supplementary material The online version of this article (10.1007/s00429-018-1700-7) contains supplementary material, which is available to authorized users.
... For judgments of obstacles (ramps and steps), Bochsler et al. (2013) found qualitatively similar effects of distance and target type for people with low vision and normal vision with simulated reduced acuity. Likewise, Fortenbaugh et al. (2007) found similar effects of peripheral field loss in clinical patients and restricted normally sighted participants, with respect to distance compression in spatial memory. However, recent work with locomotion in large rooms showed that real vision loss had little effect on the ability to judge room size and update spatial positions along simple paths (Legge et al. 2016b). ...
Article
Full-text available
Monitoring one’s safety during low vision navigation demands limited attentional resources which may impair spatial learning of the environment. In studies of younger adults, we have shown that these mobility monitoring demands can be alleviated, and spatial learning subsequently improved, via the presence of a physical guide during navigation. The present study extends work with younger adults to an older adult sample with simulated low vision. We test the effect of physical guidance on improving spatial learning as well as general age-related changes in navigation ability. Participants walked with and without a physical guide on novel real-world paths in an indoor environment and pointed to remembered target locations. They completed concurrent measures of cognitive load on the trials. Results demonstrate an improvement in learning under low vision conditions with a guide compared to walking without a guide. However, our measure of cognitive load did not vary between guidance conditions. We also conducted a cross-age comparison and found support for age-related declines in spatial learning generally and greater effects of physical guidance with increasing age.