Article

Updating egocentric representations in human navigation

Authors: Ranxiao Frances Wang, Elizabeth S. Spelke

Abstract

Seven experiments tested whether human navigation depends on enduring representations, or on momentary egocentric representations that are updated as one moves. Human subjects pointed to unseen targets, either while remaining oriented or after they had been disoriented by self-rotation. Disorientation reduced not only the absolute accuracy of pointing to all objects ('heading error') but also the relative accuracy of pointing to different objects ('configuration error'). A single light providing a directional cue reduced both heading and configuration errors if it was present throughout the experiment. If the light was present during learning and test but absent during the disorientation procedure, however, subjects showed low heading errors (indicating that they reoriented by the light) but high configuration errors (indicating that they failed to retrieve an accurate cognitive map of their surroundings). These findings provide evidence that object locations are represented egocentrically. Nevertheless, disorientation had little effect on the coherence of pointing to different room corners, suggesting both (a) that the disorientation effect on representations of object locations is not due to the experimental paradigm and (b) that room geometry is captured by an enduring representation. These findings cast doubt on the view that accurate navigation depends primarily on an enduring, observer-free cognitive map, for humans construct such a representation of extended surfaces but not of objects. Like insects, humans represent the egocentric distances and directions of objects and continuously update these representations as they move. The principal evolutionary advance in animal navigation may concern the number of unseen targets whose egocentric directions and distances can be represented and updated simultaneously, rather than a qualitative shift in navigation toward reliance on an allocentric map.
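The abstract's distinction between heading error and configuration error can be made concrete. Below is a minimal Python sketch of one plausible way to decompose pointing errors into a shared heading component (the circular mean of the signed errors) and a configuration component (the circular standard deviation of the residuals). The function name and this exact decomposition are our own illustration, not the analysis reported in the paper.

```python
import numpy as np

def heading_and_config_error(true_dirs, pointed_dirs):
    """Split pointing errors into a shared heading component and a
    residual configuration component (angles in degrees).

    Illustrative only: circular mean error vs. circular SD of the
    residuals is one plausible formalization, not necessarily the
    computation used in the original study.
    """
    err = np.deg2rad(np.asarray(pointed_dirs) - np.asarray(true_dirs))
    # Circular mean of the signed errors: the rotation shared by all targets.
    heading = np.angle(np.mean(np.exp(1j * err)))
    # Spread of the residual errors: distortion of the inter-object layout.
    resid = np.exp(1j * (err - heading))
    r = min(np.abs(np.mean(resid)), 1.0)  # clamp to avoid log(1 + eps)
    config = np.sqrt(-2.0 * np.log(r)) if r > 0 else np.inf  # circular SD
    return np.rad2deg(heading), np.rad2deg(config)

# A pure reorientation failure: every pointing is shifted by 20 degrees,
# so heading error is large but configuration error is near zero.
h, c = heading_and_config_error([0, 90, 180, 270], [20, 110, 200, 290])
```

On this toy input the decomposition captures exactly the dissociation the experiments exploit: a subject who reorients by the wrong amount shows a heading error with an intact configuration, whereas a degraded cognitive map shows the reverse.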


... The pen is on the table in front of us, about 30 degrees around our current facing orientation. We frequently use this type of representation to avoid collisions with objects and traverse our immediate, peripersonal space, as demonstrated by various studies of human spatial cognition [33][34][35]. ...
... These representations can then be updated as we walk through an environment, constituting the basis for path integration [22,37]. However, these representations diminish during disorientation [34][35][36] or in large-scale environments [38]. ...
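The updating described in the excerpt above can be sketched as a toy computation: each object is stored as an egocentric vector that is re-expressed after every translation and rotation of the observer. The frame conventions (x = ahead, y = left) and the function below are illustrative assumptions, not a model taken from the cited work.

```python
import numpy as np

def update_egocentric(objects, step, turn_deg):
    """Update stored egocentric object vectors after one self-motion step.

    `objects`: (N, 2) array of object positions relative to the observer
    (x = ahead, y = left). `step` is a forward translation, `turn_deg` a
    left turn. A toy model of the updating process the excerpt describes;
    the frame conventions are our own assumption.
    """
    v = np.asarray(objects, dtype=float) - np.array([step, 0.0])  # translate
    a = np.deg2rad(turn_deg)
    # Turning the body left by `a` rotates the world right by `a`
    # in body-centered coordinates.
    rot = np.array([[np.cos(a), np.sin(a)],
                    [-np.sin(a), np.cos(a)]])
    return v @ rot.T

# An object 2 m ahead; walk 1 m forward, then turn 90 degrees left:
# the object should now lie 1 m to the observer's right (y = -1).
new = update_egocentric([[2.0, 0.0]], step=1.0, turn_deg=90.0)
```

Chaining such updates over successive steps is path integration in miniature; noise accumulating in each step is one way to think about why these representations diminish during disorientation.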
... ATE aligns the two trajectories and then directly evaluates the absolute pose differences. This method is well-suited for assessing visual SLAM systems [34,39], but requires that absolute ground truth poses are available. Furthermore, the frame rate of the ORB-SLAM2 and the proposed system in both stereo and RGBD cases are presented to evaluate the real-time performance. ...
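For readers unfamiliar with the metric, ATE can be sketched in a few lines: rigidly align the estimated positions to the ground truth (a Kabsch/Umeyama fit without scale), then take the RMSE of the residual differences. This is a simplified illustration of the standard benchmark computation; real evaluations also time-associate poses and, for monocular systems, may estimate scale.

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute Trajectory Error between ground-truth and estimated
    positions, each of shape (N, 3): rigidly align the estimate to the
    ground truth (rotation + translation, no scale), then take the RMSE
    of the residual position differences. A simplified sketch of the
    standard SLAM benchmark metric.
    """
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    # Kabsch: best-fit rotation from the cross-covariance of the centered clouds.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = (R @ (est - mu_e).T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))

# A trajectory that is merely rotated and shifted has ATE near zero,
# since the alignment absorbs any global rigid transform.
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1.0]])
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
est = (Rz @ gt.T).T + np.array([5.0, -2.0, 1.0])
```

Because the alignment removes any global rigid offset, ATE measures only the drift and deformation of the estimated trajectory, which is why it requires absolute ground-truth poses.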
Article
Full-text available
The technological advances in computational systems have enabled very complex computer vision and machine learning approaches to perform efficiently and accurately. These new approaches can be considered a new set of tools to reshape the visual SLAM solutions. We present an investigation of the latest neuroscientific research that explains how the human brain can accurately navigate and map unknown environments. The accuracy suggests that human navigation is not affected by traditional visual odometry drifts resulting from tracking visual features. It utilises the geometrical structures of the surrounding objects within the navigated space. The identified objects and space geometrical shapes anchor the estimated space representation and mitigate the overall drift. Inspired by the human brain’s navigation techniques, this paper presents our efforts to incorporate two machine learning techniques into a VSLAM solution: semantic segmentation and layout estimation to imitate human abilities to map new environments. The proposed system benefits from the geometrical relations between the corner points of the cuboid environments to improve the accuracy of trajectory estimation. Moreover, the implemented SLAM solution semantically groups the map points and then tracks each group independently to limit the system drift. The implemented solution yielded higher trajectory accuracy and immunity to large pure rotations.
... Moreover, the research on animal models in this field has been severely deficient and lagging. Compared to previous studies, 18,19 we have put forward an innovative approach to construct an animal model of vestibular-induced SD through rotational stimuli combined with the blockade of visual information. This approach is based on prior research 12 and can potentially lead to advancements in the study of the mechanisms of SD. ...
... This group of mice was designed to be a simple SD group based on previous research. 12,18,19 The third group, called the double-axis vision (D + V) group, underwent bidirectional rotational motion without visual occlusion. Since spatial orientation abilities involve the integration of various sensory information in the brain, we designed this group to investigate whether the mice would still exhibit more complex SD (i.e., Coriolis illusion) or reduce the related behaviors after being stimulated by biaxial rotation under the premise of having good visual cue input. ...
Article
Full-text available
Spatial disorientation (SD) is the main contributor to flight safety risks, but research progress in animals has been limited, impeding a deeper understanding of the underlying mechanisms of SD. This study proposed a method for constructing and evaluating a vestibular SD mouse model, which adopted coupled rotational stimulation with visual occlusion. Physiological parameters were measured alongside behavioral indices to assess the model, and neuronal changes were observed through immunofluorescent staining. The evaluation of the model involved observing decreased colonic temperature and increased arterial blood pressure in mice exposed to SD, along with notable impairments in motor and cognitive function. Our investigation unveiled that vestibular SD stimulation elicited neuronal activation in spatially associated cerebral areas, such as the hippocampus. Furthermore, transcriptomic sequencing and bioinformatics analysis revealed the potential involvement of Slc17a6 in the mechanism of SD. These findings lay a foundation for further investigation into the molecular mechanisms underlying SD.
... In Experiment 1, we used the disorientation paradigm to compare the accuracy of the spatial representations of different navigators before and after disorientation. Research suggests that disorientation, such as spinning the subject around for a sufficient time, disturbs a sense of direction and orientation [37,42] and therefore disturbs the generation of egocentric representations. The disorientation paradigm has also been used in studies of spatial learning, which attempted to minimize the influence of path integration during spatial learning tasks by forcing the subjects to rely on allothetic instead of idiothetic cues (e.g., [43,44]). ...
... Altogether, our findings cast doubt on the conclusions from previous studies that, while learning a route or exploring a novel environment without navigational aids, individuals form only one specific type of survey-based representation, either orientation-specific and transient egocentric [31,[37][38][39] or orientation-free and enduring allocentric [15,45]. It is also unlikely that individuals form and later maintain both egocentric and allocentric representations of an environment, with the former being more precise and the latter coarser but enduring [55], or that they can flexibly switch between these representations according to the task requirement [55,77]. ...
Article
Full-text available
The goal of the current study was to show the existence of distinct types of survey-based environmental representations, egocentric and allocentric, and provide experimental evidence that they are formed by different types of navigational strategies, path integration and map-based navigation, respectively. After traversing an unfamiliar route, participants were either disoriented and asked to point to non-visible landmarks encountered on the route (Experiment 1) or presented with a secondary spatial working memory task while determining the spatial locations of objects on the route (Experiment 2). The results demonstrate a double dissociation between the navigational strategies underlying the formation of allocentric and egocentric survey-based representation. Specifically, only the individuals who generated egocentric survey-based representations of the route were affected by disorientation, suggesting they relied primarily on a path integration strategy combined with landmark/scene processing at each route segment. In contrast, only allocentric-survey mappers were affected by the secondary spatial working memory task, suggesting their use of map-based navigation. This research is the first to show that path integration, in conjunction with egocentric landmark processing, is a distinct standalone navigational strategy underpinning the formation of a unique type of environmental representation—the egocentric survey-based representation.
... Sense of personal perspective is crucial for understanding the attentional mechanisms underlying the perception of one's own or another's body (Lotze and Moseley 2007; Mou et al. 2004; Schwoebel et al. 2002; Wang and Spelke 2000). To the best of our knowledge, the present study is one of the first to examine cortical responses in HLJ with respect to personal perspective. ...
... However, presenting a stimulus as a body image causes activation at different levels of the central and peripheral nervous systems, including kinaesthetic and proprioceptive sensations (Lotze and Moseley 2007; Moseley and Flor 2012; Nico et al. 2004). On the other hand, it remains controversial whether egocentric coding schemes also include some allocentric representations (Wang and Spelke 2000). That is why more specific paradigms need to be developed to explore the differences between pure allocentric and egocentric perspectives. ...
Article
Full-text available
Sense of personal perspective is crucial for understanding the attentional mechanisms underlying the perception of one's own or another's body. In a hand laterality judgment (HLJ) task, perception of perspective can be assessed by arranging the angular orientations and depths of images. A total of 11 healthy, right-handed participants (8 females; mean age: 38.36 years; education: 14 years) were included in the study. The purpose of this study was to investigate behavioural and cortical responses in low-frequency cortical rhythms during an HLJ task. A total of 80 visual hand stimuli were presented during the experiment. Hand visuals were categorized by side (right vs. left) and perspective (1st vs. 3rd person). Both behavioural outcomes and brain oscillatory characteristics (i.e., frequency and amplitude) of the electroencephalography were analysed. Reaction times and incorrect answers for the 3rd person perspective were higher than those for the 1st person perspective. The location effect was statistically significant in event-related theta responses, confirming the dominant activity of theta frequency in spatial memory tasks over parietal and occipital areas. In addition, we found increases in delta power and phase for hand visuals with the 1st person perspective and increased theta phase for hand visuals with the 3rd person perspective (p < 0.05). Accordingly, a clear dissociation in the perception of perspectives in low-frequency bands was revealed. These different cortical strategies in the perception of hand visuals with and without perspective may be interpreted as delta activity being related to self-body perception, whereas theta activity may be related to allocentric perception.
... [13] Among children and young adults, objects were coded egocentrically. [20] ...
... [32] Egocentric representation is primarily transitory and updates the representations of objects in space. [20] Allocentric representation is more enduring, incorporating a "cognitive map" into the representations. [54] Allocentric representation, compared with egocentric representation, involves additional information-processing steps such as visual working memory [55] and demands greater cognitive resources (for critical review: [53,56]). ...
Article
Full-text available
Behavioral and neurophysiological experiments have demonstrated that distinct and common cognitive processes, and their associated neural substrates, maintain allocentric and egocentric spatial representations. This review aimed to collate evidence from previous behavioral and neurophysiological studies on these cognitive processes and neural substrates and to link them to the state of visuospatial representations in patients with mild cognitive impairment (MCI). Even though MCI patients show impaired visuospatial attentional processing and working memory, previous neuropsychological experiments in MCI have largely emphasized memory impairment and lacked evidence on whether memory impairment could be associated with how patients with MCI encode objects in space. The present review suggests that impaired memory capacity is linked to impaired allocentric representation in MCI patients. It also indicates that further research is needed to examine how the decline in visuospatial attentional resources during allocentric coding of space could be linked to working memory impairment. Abbreviations: AD = Alzheimer's disease; FEF = frontal eye field; MCI = mild cognitive impairment; PPC = posterior parietal cortex; VFC = ventral frontal cortex.
... Although when navigating in familiar environments or over longer durations humans predominantly use an allocentric reference frame [42], we have several reasons to argue that our task was well-designed to assess egocentric and not allocentric spatial updating. Firstly, movement in unfamiliar environments, especially in smaller areas, requires a constant updating of relationships between the observer and each object in the visual field [42,43]. Given that the characteristics of our task (body motion in a real environment) correspond well with those conditions associated with egocentric spatial updating, it is reasonable to assume that self-motion cues were used to update the stored egocentric object representations. ...
... Nevertheless, it must be considered that a spatial navigation task cannot be pure egocentric or allocentric, rather a combination of both types of information in spatial navigation and learning is likely [44]. Additionally, in egocentric spatial updating, the updating efficiency is highly dependent on the number of objects one has to update-this is because of the fact that, as the observer moves, each of the given objects must be updated [43]. It was previously shown by Wolbers and colleagues [21] that humans can successfully update up to four spatial positions during self-motion, which was also the rationale behind choosing this number of objects in our study. ...
Article
Full-text available
As we move through an environment, we update positions of our body relative to other objects, even when some objects temporarily or permanently leave our field of view—this ability is termed egocentric spatial updating and plays an important role in everyday life. Still, our knowledge about its representation in the brain is still scarce, with previous studies using virtual movements in virtual environments or patients with brain lesions suggesting that the precuneus might play an important role. However, whether this assumption is also true when healthy humans move in real environments where full body-based cues are available in addition to the visual cues typically used in many VR studies is unclear. Therefore, in this study we investigated the role of the precuneus in egocentric spatial updating in a real environment setting in 20 healthy young participants who underwent two conditions in a cross-over design: (a) stimulation, achieved through applying continuous theta-burst stimulation (cTBS) to inhibit the precuneus and (b) sham condition (activated coil turned upside down). In both conditions, participants had to walk back with blindfolded eyes to objects they had previously memorized while walking with open eyes. Simplified trials (without spatial updating) were used as control condition, to make sure the participants were not affected by factors such as walking blindfolded, vestibular or working memory deficits. A significant interaction was found, with participants performing better in the sham condition compared to real stimulation, showing smaller errors both in distance and angle. The results of our study reveal evidence of an important role of the precuneus in a real-environment egocentric spatial updating; studies on larger samples are necessary to confirm and further investigate this finding.
... This relies on a constant updating of environmental spatial relationships in accordance with one's own movements and is associated with the posterior parietal cortex (6, 20, 21). Navigation as a behavior is a complex multisensory process that integrates memory, perception, vestibular signals, the motor system, and decision making, thus requiring a wide range of neural structures, of which the hippocampus is only one (5, 21-24). Navigation can also be influenced by a number of factors, including age, sex, brain injury, and personality (16, 23-27), and can be accomplished with allocentric strategies that use metric configurational knowledge of ...
... Navigation as a behavior is a complex multisensory process that integrates memory, perception, vestibular signals, the motor system, and decision making, thus requiring a wide range of neural structures, of which the hippocampus is only one (5, 21-24). Navigation can also be influenced by a number of factors, including age, sex, brain injury, and personality (16, 23-27), and can be accomplished with allocentric strategies that use metric configurational knowledge of ... Sleep facilitates hippocampal-dependent memories, and the hippocampus uniquely supports the spatial acquisition and representation of environments. However, sleep's contribution to how specific locations within environments are retained (spatial memory) and how movements to them are performed (navigation) has yielded mixed findings, possibly due to task designs, familiarity of environments, or measurements. ...
Article
Full-text available
Sleep facilitates hippocampal-dependent memories, supporting the acquisition and maintenance of internal representation of spatial relations within an environment. In humans, however, findings have been mixed regarding sleep's contribution to spatial memory and navigation, which may be due to task designs or outcome measurements. We developed the Minecraft Memory and Navigation (MMN) task for the purpose of disentangling how spatial memory accuracy and navigation change over time, and to study sleep's independent contributions to each. In the MMN task, participants learned the locations of objects through free exploration of an open field computerized environment. At test, they were teleported to random positions around the environment and required to navigate to the remembered location of each object. In study 1, we developed and validated four unique MMN environments with the goal of equating baseline learning and immediate test performance. A total of 86 participants were administered the training phases and immediate test. Participants' baseline performance was equivalent across all four environments, supporting the use of the MMN task. In study 2, 29 participants were trained, tested immediately, and again 12 h later after a period of sleep or wake. We found that the metric accuracy of object locations, i.e., spatial memory, was maintained over a night of sleep, while after wake, metric accuracy declined. In contrast, spatial navigation improved over both sleep and wake delays. Our findings support the role of sleep in retaining the precise spatial relationships within a cognitive map; however, they do not support a specific role of sleep in navigation.
... Thus, in our current study, by choosing a real-environment scenario, we were able to assess the effect on spatial updating ability while relying on the most relevant cues during self-motion, including tactile, vestibular, and proprioceptive cues. ...
... The spatial updating of self-to-object directions and distances (egocentric relations) that takes place concurrently with the change of spatial relations independent of the position of the perceiver (allocentric relations) depends upon the availability of multisensory information (Klatzky, 1998). Evidence suggests that humans update egocentric, internalized versions of the surroundings to orient themselves as they move (Wang and Spelke, 2000). However, from an ecological approach to perception and action (Gibson, 1966), perception may not be based on patterns of stimulation available to individual perceptual systems, but may take advantage of "higher order relations" between them (Stoffregen and Riccio, 1988). ...
... After whole body passive rotations around an earth-vertical axis, without visual cues, subjects can indicate their orientation in space with respect to their initial orientation, while they update their actual orientation with respect to the surroundings (Israël et al., 1996;Wang and Spelke, 2000;Jáuregui-Renaud et al., 2008). Using simultaneous measurement of oculo-motor and perceptual measures of the vestibular time constant has shown that the perception of angular velocity is based on signals subserved by the velocity storage mechanism (Okada et al., 1999). ...
Article
Full-text available
Few studies have evaluated the idiosyncrasies that may influence the judgment of space-time orientation after passive motion. We designed a study to assess the influence of anxiety/depression (which may distort time perception), motion sickness susceptibility (which has been related to vestibular function, disorientation, and the velocity storage mechanism), and personal habits on the ability to update orientation after passive rotations in the horizontal plane. Eighty-one healthy adults (22–64 years old) agreed to participate. After they completed an in-house general health/habits questionnaire, the short Motion Sickness Susceptibility Questionnaire, the Hospital Anxiety and Depression Scale (HADS), the Pittsburgh Sleep Quality Index, and the short International Physical Activity Questionnaire, they were exposed to 10 manually driven whole-body rotations (45°, 90°, or 135°) in a square room with distinctive features on the walls, while seated in the normal upright position, unrestrained, with noise-attenuating headphones and blindfolded. After each rotation, they were asked to report which wall or corner they were facing. To calculate the error of estimation of orientation, the perceived rotation was subtracted from the actual rotation. Multivariate analysis showed that the estimation error of the first rotation was strongly related to the results of the orientation test. The magnitude and the frequency of estimation errors of orientation were independently related to the HADS anxiety sub-score and to adult motion sickness susceptibility, with no influence of age, but a contribution from the interaction of the use of spectacles, the quality of sleep, and sex. The results suggest that idiosyncrasies may contribute to the space-time estimation of passive self-motion, with influence from emotional traits, adult motion sickness susceptibility, experience, and possibly sleep quality.
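The scoring described in this abstract (the perceived rotation subtracted from the actual rotation) amounts to a signed angular difference. A minimal sketch follows; the wrap to the interval [-180, 180) is our own assumption for handling responses that cross the 0/360 boundary, since responses were walls and corners of a square room.

```python
def estimation_error(actual_deg, perceived_deg):
    """Signed error of a reported rotation, in degrees.

    Follows the abstract's rule (perceived rotation subtracted from the
    actual rotation); wrapping the result to [-180, 180) is our own
    assumption, not stated in the paper.
    """
    return (actual_deg - perceived_deg + 180.0) % 360.0 - 180.0

# Rotated 135 degrees but reporting the 90-degree wall:
# a 45-degree underestimation of the rotation.
e = estimation_error(135.0, 90.0)
```

A positive value thus marks an underestimated rotation and a negative value an overestimated one, and equal-magnitude clockwise and counterclockwise confusions score symmetrically.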
... A common navigational task is to move through the environment in order to reach our home, to go to a known pub to meet friends, or to reach a place we have never visited. To do this, we need to keep track of the changing spatial relations between our position and the landmarks in the environment (i.e., self-to-object relations); that is, we must perform continuous spatial updating (Rieser 1989; Wang and Spelke 2000). ...
... In VR-WalCT, participants performed a task very similar to a daily situation in which they approached a familiar place from an unusual perspective. When we walk through a real or virtual environment, we continually change our perspective, which requires a continuous updating of our position in the new orientation (e.g., after a turn) (Wang and Spelke 2000), so it is essential to remember a remote path acquired from a specific point of view, compare the old layout with the new perspective, and update it. This updating is automatic (e.g., Sholl et al. 2006) and involves computing the amplitude of angular displacements. ...
Article
Full-text available
Field independence (FI) is the extent to which a person perceives part of a field as discrete from the surrounding field rather than embedded in the field. Several studies proposed that it represents a cognitive style that is a relatively stable individuals' predisposition towards information processing. This study investigated the effects of Field Independence/Field Dependence (FI/FD) cognitive style on topographic memory in a virtual environment. Seventy-nine college students completed the Embedded Figure Test as a measure of FI/FD cognitive style and learned two paths in the VR-Walking Corsi Test apparatus. After the learning phase, participants had to reproduce the paths from a familiar perspective or unfamiliar perspectives. Data showed that FI cognitive style predicted the ability to reproduce a path from unfamiliar perspectives, suggesting a different impact of the angle degree. Results are discussed considering the facilitation of body axes references and the increasing difficulty due to maintaining online perspectives with higher angle degrees that increase the visuo-spatial working memory cognitive load. These results support the idea that FI predicts human navigation.
... As the sensors are part of the body and move with the body, somatosensory information is initially coded within an egocentric reference frame. Spatial updating thus involves changes in egocentrically coded sensory information (Riecke et al. 2007) and is linked to self-motion (Simons and Wang 1998;Wang and Spelke 2000). In contrast, allocentric reference frames are independent of the observer's physical body (Klatzky 1998). ...
... It has been shown that while actively walking from one location to another, people develop "route knowledge" and sometimes also "survey knowledge" (Ishikawa and Montello 2006; Mallot and Basten 2009; Siegel and White 1975; Wiener et al. 2009). While the two locations are within a single vista space (Kelly and McNamara 2008; Waller and Hodgson 2006; Wang and Spelke 2000), that is, a space that can be observed from one viewpoint, they are represented in one local reference frame, often following the orientation of a street in the neighborhood (Meilinger 2008a; Meilinger et al. 2014, 2016). When two locations are at a greater distance and cannot be overlooked from a single viewpoint, called the environmental space (Montello 1993), multiple local reference frames that are learned by active navigation are combined. ...
Preprint
Full-text available
Theories of Enactivism propose an action-oriented approach to understand human cognition. So far, however, empirical evidence supporting these theories has been sparse. Here, we investigate whether spatial navigation based on allocentric reference frames that are independent of the observer’s physical body can be understood within an action-oriented approach. Therefore, we performed three experiments testing the knowledge of the absolute orientation of houses and streets towards north, the relative orientation of two houses and two streets, respectively, and the location of houses towards each other in a pointing task. Our results demonstrate that under time pressure, the relative orientation of two houses can be retrieved more accurately than the absolute orientation of single houses. With infinite time for cognitive reasoning, the performance of the task using house stimuli increased greatly for the absolute orientation and surpassed the slightly improved performance in the relative orientation task. In contrast, with streets as stimuli participants performed under time pressure better in the absolute orientation task. Overall, pointing from one house to another house yielded the best performance. This suggests, firstly, that orientation and location information about houses are primarily coded in house-to-house relations, whereas cardinal information is deduced via cognitive reasoning. Secondly, orientation information for streets is preferentially coded in absolute orientations. Thus, our results suggest that spatial information about house and street orientation is coded differently and that house orientation and location is primarily learned in an action-oriented way, which is in line with an enactive framework for human cognition.
... While previous research suggested that cognitive maps are allocentric and encode visual information about local locations and environmental boundaries in a global coordinate system [22,23], there is evidence for another type of cognitive map, which is egocentric [24][25][26]. These egocentric maps represent spatial information relative to the navigator's position and orientation, encoding orientation-specific views of landmarks from multiple frames of reference. ...
Article
Full-text available
The editorial "The Contribution of Internal and External Factors to Human Spatial Navigation" by Laura Piccardi, Raffaella Nori, Jose Manuel Cimadevilla, and Maria Kozhevnikov in Brain Sciences is now available online. Spatial navigation involves various cognitive processes such as memory, attention, spatial updating, mental planning, and problem-solving skills. Additionally, internal and external factors like age, gender, familiarity with the environment, landmark attributes, and surrounding complexity can influence spatial navigation. The main goal of this Special Issue, titled "The Contribution of Internal and External Factors to Human Spatial Navigation," was to study the roles of different internal and external variables in navigation. As a result, seven papers authored by distinguished scientists in the field were compiled to address this issue from diverse perspectives.
... Likewise, Yount et al. [6] revealed that using Augmented Reality (AR) for navigation assistance can enhance driving performance but severely impair route learning compared to maps. Many researchers have demonstrated that this lapse of attention to one's surroundings stems from the use of mobile navigation devices, whose continuous spatial updating adversely impacts spatial learning [7,8]. Another possible cause of spatial knowledge degradation is a discrepancy between the information provided via a digital map and individual differences in the inherent spatial schemata used for interpreting spatial relations. ...
Article
Full-text available
Under emergencies such as floods and fires, or during indoor navigation where cues from local landmarks and a Global Positioning System (GPS) are no longer available, acquiring a comprehensive environmental representation becomes particularly important. Several studies have demonstrated that individual differences in cognitive style might play an important role in creating a complete environmental representation and in spatial navigation. However, the relationship between cognitive style and spatial navigation is not well researched. This study hypothesized that a specific type of map orientation (north-up vs. forward-up) might be more efficient for individuals with different cognitive styles. Forty participants were recruited to perform spatial tasks in a virtual maze environment to understand how cognitive style may relate to spatial navigation abilities, particularly the acquisition of survey and route knowledge. To measure survey knowledge, pointing direction tests and sketch map tests were employed, whereas for route knowledge, the landmark sequencing test and route retracing test were employed. The results showed that both field-dependent and field-independent participants produced more accurate canonical organization in the sketch map task with a north-up map than with a forward-up map, with field-independent participants outperforming field-dependent participants in canonical organization scores. Map orientation did not influence the performance of field-independent participants on the pointing direction test, while field-dependent participants showed higher angular error with north-up maps. Regarding route knowledge, field-independent participants responded more accurately in the landmark sequencing tests with a north-up map than with a forward-up map. On the other hand, field-dependent participants had higher accuracy in landmark sequencing tests in the forward-up map condition than in the north-up map condition. In the route retracing test, however, map orientation had no statistically significant effect on the different cognitive style groups. The results indicate that cognitive style may affect the relationship between map orientation and spatial knowledge acquisition.
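Pointing direction tests of the kind used in this study score survey knowledge as the angular error between the pointed bearing and the true bearing of a target, wrapped so that errors never exceed 180°. A minimal sketch of that scoring (the function name and sample responses are hypothetical, not taken from the study):

```python
def angular_error(pointed_deg: float, true_deg: float) -> float:
    """Absolute angular error between a pointed and a true bearing,
    wrapped to the range [0, 180] degrees."""
    diff = (pointed_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

# Hypothetical (pointed, true) bearings from three pointing trials:
responses = [(350.0, 10.0), (95.0, 90.0), (182.0, 170.0)]
errors = [angular_error(p, t) for p, t in responses]
mean_error = sum(errors) / len(errors)
```

The modulo wrap matters: pointing at 350° toward a target at 10° is a 20° error, not 340°.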
... Additionally, there is some evidence that at least in some brain regions, reaching tasks are represented in body-coordinate frames [78]. However, the literature is quite mixed, with some researchers reporting results that seem to imply a neural preference for representing motion in hand or body coordinates [79-81], others reporting a preference for eye or world coordinates [82,83], and still others reporting that multiple coordinate systems may be used for different representations or in different brain regions [84,85]. Perhaps, as proposed by Andersen et al. [76], a shared coordinate system is used, in which differentiated gains modulate or transform sensory and motor information between coordinate frames in eye or body space. ...
Article
Full-text available
The inverse kinematics (IK) problem addresses how both humans and robotic systems coordinate movement to resolve redundancy, as in the case of arm reaching where more degrees of freedom are available at the joint versus hand level. This work focuses on which coordinate frames best represent human movements, enabling the motor system to solve the IK problem in the presence of kinematic redundancies. We used a multi-dimensional sparse source separation method to derive sets of basis (or source) functions for both the task and joint spaces, with joint space represented by either absolute or anatomical joint angles. We assessed the similarities between joint and task sources in each of these joint representations, finding that the time-dependent profiles of the absolute reference frame’s sources show greater similarity to corresponding sources in the task space. This result was found to be statistically significant. Our analysis suggests that the nervous system represents multi-joint arm movements using a limited number of basis functions, allowing for simple transformations between task and joint spaces. Additionally, joint space seems to be represented in an absolute reference frame to simplify the IK transformations, given redundancies. Further studies will assess this finding’s generalizability and implications for neural control of movement.
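The comparison in this abstract between time-dependent source profiles in task and joint spaces amounts to measuring how similar two waveforms are. The paper's exact similarity metric is not specified here; one common choice, shown as an illustrative sketch with invented profiles, is cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length time profiles:
    1.0 for identical shapes, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical time-dependent source profiles (task vs joint space):
task_source = [0.0, 0.5, 1.0, 0.5, 0.0]
joint_source = [0.0, 0.4, 0.9, 0.6, 0.1]
similarity = cosine_similarity(task_source, joint_source)
```

A higher similarity between task-space sources and absolute-angle joint sources, as the abstract reports, would correspond to values closer to 1.0 under a measure of this kind.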
... Previous research suggests that the egocentric representation is the most essential for successful spatial navigation (Easton & Sholl, 1995). Despite this, directional and configurational errors exist in egocentric representations, leading to models of continuous egocentric spatial updating in contrast to a static allocentric model (Wang, 2012; Wang & Spelke, 2000). It is likely that both egocentric and allocentric reference frames operate together in spatial navigation tasks (Ekstrom et al., 2014), with Ekstrom et al. (2017) arguing that spatial reference frames should be considered as existing on a continuum, allowing for change during different parts of a spatial navigation task. ...
Thesis
A fundamental problem in geospatial interface designs is how aspects of user cognition may be incorporated into their design structures for improved reasoning, decision making, and comprehension in geographic spaces. Narrative environments are one such example of geographic spaces, where stories are told and visually displayed. Recently, geospatial narrative environments have become a popular medium for visualising information about space and time in the Earth sciences. Consequently, effective ways of enhancing user cognition in these environments through visual narrative comprehension are becoming increasingly important, particularly for the development of interactive learning environments for geo-education. It was hoped that subtle visualisations of future tasks (environmental precues) could be incorporated into an ambient narrative interface to improve user cognition and decision making in an immersive 3D virtual narrative environment, which acted as an experimental analogue for how the interface could operate in real-world environments. To address this, a hybrid navigational interface called Future Vision was developed. In addition to controller-based locomotion, the interface provides subliminal environmental precues in the form of simulated future thoughts by teleporting the user to a future location, where the outcome of a two-alternative forced-choice (2AFC) decision making task can be briefly seen. The navigational effectiveness of the interface was analysed using the Steering law: a geographic analysis technique for trajectory-based human-computer interactions. The results showed that Future Vision enhanced participants' navigational abilities through improvements in average task completion times and movement speed. When comparing the experimental interface (Future Vision) with the control interface (an HTC Vive controller), the results showed that the experimental interface was 2.9 times as effective for navigation.
This was in comparison to an improvement of 3.3 times for real walking when compared to navigation using an Xbox 360 game controller in another study. The similarity in these values suggests that Future Vision allows for more realistic walking behaviours in virtual environments. Improvements were also seen in the 2AFC decision making task when compared to participants in the control group, who were unguided in their decision making. These improvements occurred even when participants reported being unaware of the precues. In addition, Future Vision produced a similar information transfer rate to brain-computer interfaces in virtual reality, where participants move virtual objects via motor imagery and the imagined performance of actions through thought. This suggests that visualisations of future thoughts operate in a motor imagery paradigm that is associated with the generation and execution of a user's goals and intentions. The results also suggest that Future Vision behaves as an optimally designed cognitive user interface for ambient narrative communication during navigation and decision making. Overall, these findings demonstrate how extended reality narrative style GIS digital representations may be incorporated into cognitively inspired geospatial interfaces. When employed in real or virtual geographic narrative environments, these interfaces may allow for new types of quantitative GIS analysis techniques to be carried out in the cognitive sciences, leading to insights that may result in improved geospatial interface designs in the future.
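The Steering law used in this thesis predicts movement time MT from a path's index of difficulty ID: MT = a + b·ID, where ID = ∫ ds / W(s) integrates along the path with local width W(s). A minimal sketch for a path of piecewise-constant width (the constants and segment values below are illustrative, not taken from the thesis):

```python
def steering_id(segments):
    """Steering-law index of difficulty for a path built from straight
    (length, width) segments: ID = sum of length/width per segment,
    a discrete form of the integral of ds / W(s)."""
    return sum(length / width for length, width in segments)

def predicted_movement_time(a, b, segments):
    """Steering law MT = a + b * ID; a and b are empirically fitted."""
    return a + b * steering_id(segments)

# Hypothetical corridor: 10 m at 2 m wide, then 4 m at 1 m wide.
path = [(10.0, 2.0), (4.0, 1.0)]
mt = predicted_movement_time(a=0.5, b=0.2, segments=path)
```

Narrower or longer corridors raise ID and hence the predicted movement time, which is what makes the law usable as a trajectory-based effectiveness metric.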
... The egocentric orientation was assessed by asking participants to point an arrow at the starting point toward each of the four main points of interest. This type of test comprises a scene and orientation-dependent pointing (SOP) task, which allows us to understand the ability of the participant to orientate through some locals in the space from their perspective (see Wang and Spelke, 2000). The allocentric orientation was performed by asking the participants to point to a certain point of interest from a given location without visual cues. ...
... Additionally, there is some evidence that at least in some brain regions, reaching tasks are represented in body-coordinate frames (Lacquaniti et al., 1995), which seems contrary to our results. However, the literature is quite mixed, with some researchers reporting results that seem to imply a neural preference for representing motion in hand or body coordinates (Soechting et al., 1990a; Wang and Spelke, 2000; McIntyre et al., 1998), others reporting a preference for eye or world coordinates (Bosco et al., 2000; Poljac and Van Den Berg, 2003), and still others reporting that multiple coordinate systems may be used for different representations or in different brain regions (Carrozzo et al., 2002; Graziano et al., 1994). Perhaps, as proposed in Andersen et al. (2007), there is a shared coordinate system, with differentiated gains used to modulate or transform between coordinates in eye or body space. ...
Preprint
Full-text available
The inverse kinematics problem deals with the question of how the nervous system coordinates movement to resolve redundancy, such as in the case of arm reaching movements where more degrees of freedom are available at the joint versus hand level. This work focuses on determining which coordinate frames can best represent human movements, allowing the motor system to solve the inverse kinematics problem in the presence of kinematic redundancies. We used a multi-dimensional sparse source separation method called FADA to derive sets of basis functions (here called sources) for both the task and joint spaces, with joint space represented in terms of either absolute or anatomical joint angles. We assessed the similarities between the joint and task sources in each of these joint representations. We found that the time-dependent profiles of the absolute reference frame's sources show greater similarity to those of the corresponding sources in the task space. This result was found to be statistically significant. Hence, our analysis suggests that the nervous system represents multi-joint arm movements using a limited number of basis functions, allowing for simple transformations between task and joint spaces. Importantly, joint space seems to be represented in terms of an absolute reference frame to achieve successful performance and simplify inverse kinematics transformations in the face of the existing kinematic redundancies. Further studies will be needed to determine the generalizability of this finding and its implications concerning neural control of movement.
... Research shows that egocentric referencing is vital for self-orientation in small and familiar environments (Wang & Spelke, 2000), such as navigating around one's home or to a local shop, whereas allocentric referencing is employed when navigating a new route or city for the first time. ...
Thesis
Full-text available
Detection of incipient cognitive impairment and dementia pathophysiology is critical to identifying preclinical populations and targeting potentially disease-modifying interventions towards them. There are currently concerted efforts for such detection in Alzheimer's disease (AD). By contrast, the examination of cognitive markers and their relationship to biomarkers for vascular cognitive impairment (VCI) is far less established, despite VCI being highly prevalent and often presenting concomitantly with AD. Critically, vascular risk factors are currently associated with the most viable treatment options via pharmacological and non-pharmacological intervention; hence, developing selective and sensitive methods for identifying vascular factors has important implications for modifying dementia disease trajectories. As outlined in Chapter one, this thesis focuses on uncovering spatial navigation deficits in established and preclinical VCI and investigates potential brain dysconnectivity in the frontoparietal regions and overlapping navigation systems. Chapter two reveals egocentric orientation deficits in established VCI that distinguish it from AD. In Chapter three, the VCI case study, RK, who previously displayed spatial navigation deficits, is followed up three years after initial diagnosis. Results suggest an ongoing egocentric orientation deficit despite improvements in cognitive scores assessed using conventional neuropsychological assessments. Diffusion tensor imaging (DTI) analysis suggests reduced superior longitudinal fasciculus (SLF) integrity in parietal segments. Chapter four shows that a novel test battery of navigation and ERP components captures deficits that precede the onset of general cognitive decline as assessed by typical neuropsychological assessment in preclinical VCI. Taken together, this research advances our conceptual understanding of the pathological changes to cognition that characterise VCI and at-risk individuals.
... People tend to prioritize egocentric spatial cues during navigation because external reference points or axes (e.g., the walls of a room) are not always available. The egocentric spatial representation is developed by continually perceiving, processing, and updating spatial cues in the environment (Mou et al., 2004; Wang & Spelke, 2000). Though there is a significant body of work on the mechanisms behind navigation and wayfinding, little is known about how a person's navigation ability translates to their ability to communicate spatial information verbally. ...
Article
Navigation is critical for everyday tasks but is especially important for urban search and rescue (USAR) contexts. Aside from successful navigation, individuals must also be able to effectively communicate spatial information. This study investigates how differences in spatial ability affected overall performance in a USAR task in a simulated Minecraft environment and the effectiveness of an individual’s ability to communicate their location verbally. Randomly selected participants were asked to rescue as many victims as possible in three 10-minute missions. Results showed that sense of direction may not predict the ability to communicate spatial information, and that the skill of processing spatial information may be distinct from the ability to communicate spatial information to others. We discuss the implications of these findings for teaming contexts that involve both processes.
... Thus, these two types of processing may not be the only ones possible, or they may be less independent of each other than conceptualized so far. For example, Wang and Spelke (2000) demonstrate that when egocentric processing of the individual's orientation in space is disrupted by a disorientation procedure, the relations between objects are also negatively affected. The coding of object-to-object relations therefore does not seem to be entirely independent of the observer's position in the environment. ...
Thesis
The processes that integrate visual and proprioceptive information play a fundamental role in our ability to coordinate our actions in space and to maintain postural stability. Recent studies suggest that these same processes may underlie an apparently radically different capacity: the recollection of past experiences, that is, episodic memory. The involvement of a single brain structure, the hippocampus, in both spatial processing and the ability to consciously retrieve a lived moment has led to the hypothesis of a neuro-functional link between these two capacities. This relationship is generally attributed to the existence of spatialized memory representations that code the spatial relations between objects (allocentric coding) needed to subsequently reconstruct a previously experienced episode (Squire & Alvarez, 1995; Nadel & Moscovitch, 1998). However, the relationship between these two capacities could depend not on the existence of a spatialized memory trace, but on shared processes (Maguire & Mullally, 2013). This hypothesis would notably account for the joint deficits in episodic projection into the future and in recalling memories observed in patients with anterograde amnesia. Research conducted at the LPNC has indicated that egocentric updating, which relies on the dynamic integration of proprioceptive and environmental information, could be a good candidate for this shared process. Indeed, maximizing egocentric updating during learning increases memory retrieval compared with maximizing allocentric processing (Gomez, Rousset & Baciu, 2009). In turn, egocentric updating performed as a concurrent task produces more interference with memory retrieval than allocentric processing does (Cerles, Guinet & Rousset, 2015), an effect that is absent in semantic memory. Finally, amnesic patients show a specific deficit in egocentric updating (Gomez, Rousset, & Charnallet, 2012; Gomez, Rousset, Bonniot, Charnallet & Moreaud, 2014). The aim of this project is to further examine the involvement of egocentric updating in episodic memory. First, even though amnesic patients do show a deficit in episodic projection into the future associated with specific spatial impairments, it could still be argued that this link reflects fine-grained segregation of distinct functions within the same structure, functions that would be jointly affected by the lesion. We will therefore seek to show that, in non-lesioned participants, episodic projection can be disrupted by interfering online with spatial integration processes. Finally, this thesis project is also part of ongoing studies aimed at evaluating how the examination of egocentric updating processes can provide relevant behavioral indices for the assessment of degenerative pathologies evolving toward Alzheimer's dementia.
... People tend to prioritize egocentric spatial cues during navigation because external reference points or axes (e.g., the walls of a room) are not always available. The egocentric spatial representation is developed by continually perceiving, processing, and updating spatial cues in the environment (Mou et al., 2004; Wang & Spelke, 2000). Though there is a significant body of work on the mechanisms behind navigation and wayfinding, little is known about how a person's navigation ability translates to their ability to communicate spatial information verbally. ...
Conference Paper
Full-text available
Navigation is critical for everyday tasks but is especially important for urban search and rescue (USAR) contexts. Aside from successful navigation, individuals must also be able to effectively communicate spatial information. This study investigates how differences in spatial ability affected overall performance in a USAR task in a simulated Minecraft environment and the effectiveness of an individual's ability to communicate their location verbally. Randomly selected participants were asked to rescue as many victims as possible in three 10-minute missions. Results showed that sense of direction may not predict the ability to communicate spatial information, and that the skill of processing spatial information may be distinct from the ability to communicate spatial information to others. We discuss the implications of these findings for teaming contexts that involve both processes.
... This allowed us to test the extent to which map learning contributes more to putative allocentric forms of knowledge, which should preferentially benefit the JRD task, and to determine the extent to which route learning contributes more to egocentric forms of knowledge, which should preferentially benefit the SOP task. In Experiments 2a and 2b, we compared route and tabletop data from Experiment 1 with JRD and SOP performance after what we hypothesized to be a more ecologically valid variant of the map task: having participants reach a performance criterion rather than studying for a seemingly arbitrary period. (Orientation within the environment can be established in various ways, such as blocking the view of the environment from an already oriented person (Wang & Spelke, 2000), or providing a disoriented person with a viewpoint from within the environment and allowing them to change their view and/or position until they are sufficiently oriented (Zhang et al., 2012, 2014).) In all experiments, we expected to observe a pattern of results across route and map learning on the JRD and SOP tasks consistent with the pattern described by Zhang et al. ...
Article
Full-text available
Previous work has shown how different interfaces (i.e., route navigation, maps, or a combination of the two) influence spatial knowledge and recollection. To test for the existence of intermediate representations along an egocentric-to-allocentric continuum, we developed a novel task, tabletop navigation, to provide a mixture of cues that inform the emergence of egocentric and allocentric representations or strategies. In this novel tabletop task, participants navigated a remote-controlled avatar through a tabletop scale model of the virtual city. Participants learned virtual cities from either navigating routes, studying maps, or our new tabletop navigation task. We interleaved these learning tasks with either an in situ pointing task (the scene- and orientation-dependent pointing [SOP] task) or imagined judgements of relative direction (JRD) pointing. In Experiment 1, performance on each memory task was similar across learning tasks and performance on the route and map learning tasks correlated with more precise spatial recall on both the JRD and SOP tasks. Tabletop learning performance correlated with SOP performance only, suggesting a reliance on egocentric strategies, although increased utilization of the affordances of the tabletop task were related to JRD performance. In Experiment 2, using a modified criterion map learning task, participants who learned using maps provided more precise responses on the JRD compared to route or tabletop learning. Together, these findings provide mixed evidence for both optimization and egocentric predominance after learning from the novel tabletop navigation task.
... then tested how well participants could point to these remembered items; the question of interest was whether, after the participant was blindfolded and disoriented, they could still point to the relative locations of objects and geometric features, in spite of not knowing their absolute heading direction (Wang & Spelke, 2000). The pattern of pointing errors participants exhibited suggested that geometric features were encoded as a configuration, consistent with the idea of a viewpoint-invariant cognitive map. ...
Preprint
Full-text available
Our sense of space helps to provide the scaffolding upon which autobiographical memories are built. This allows us to situate event memories in particular locations, and underlies many aspects of cognition. Spatial memory draws upon a hierarchy of representations, from simple sensory features, body motion cues, and heading direction, to complex features indicative of boundaries, landmarks, environmental geometry and scenes. We preferentially encode into spatial memory the features that are most relevant for orienting, navigating and future planning. To navigate in unfamiliar environments, we encode the most immediately relevant cues into spatial working memory and use path integration to update spatial representations. In familiar spatial environments, we rely on our habitual knowledge, drawing on well-learned associations between local landmarks and bodily responses such as "turn left at the corner store". Alternatively, we use higher-level spatial knowledge in a range of different reference frames, from viewpoint-specific snapshots to viewpoint-invariant cognitive maps, to navigate in more flexible ways.
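The pointing-error analysis described in the excerpt above separates a shared heading component (all responses rotated by the same amount, as after disorientation) from a configuration component (disagreement among responses about the layout itself). A simplified linear sketch of that decomposition (Wang & Spelke's original analyses use circular statistics; the function names and trial values here are invented):

```python
import math

def signed_error(pointed_deg, true_deg):
    """Signed pointing error wrapped to the range (-180, 180]."""
    d = (pointed_deg - true_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def heading_and_config_error(responses):
    """Split per-object pointing errors into a heading component
    (mean signed error, shared by all objects) and a configuration
    component (dispersion of the errors around that mean)."""
    errs = [signed_error(p, t) for p, t in responses]
    heading = sum(errs) / len(errs)
    config = math.sqrt(sum((e - heading) ** 2 for e in errs) / len(errs))
    return heading, config

# Hypothetical trial: every object pointed 30 deg off in the same
# direction -> large heading error, zero configuration error.
heading, config = heading_and_config_error(
    [(30.0, 0.0), (120.0, 90.0), (210.0, 180.0)]
)
```

A disoriented participant who preserves the layout but loses absolute heading shows exactly this pattern: nonzero heading error with configuration error near zero.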
... For instance, in the domain of perception, object perception, which is related to scene and landmark perception in navigation, yielded a robust correlation; in contrast, face perception was not robustly correlated with CRY2 expression. In addition, self-referential processing in the social cognition domain was robustly correlated with CRY2, probably due to the involvement of egocentric navigation (39-41); however, the same correlation did not exist for empathy, which is also theoretically less relevant to navigation. Taken together, the analysis of related cognitive domains suggested that CRY2's association with navigation is functionally specific. ...
Article
Full-text available
Navigation is a complex cognitive process. The CRY2 gene has been proposed to play an important role in navigation behaviors in various non-human animal species. Utilizing a recently developed neuroimaging-transcriptomics approach, the present study reports a tentative link between the CRY2 gene and human navigation. Specifically, we showed a significant pattern similarity between CRY2 gene expression in the human brain and navigation-related neural activation in functional magnetic resonance imaging. To further illuminate the functionality of CRY2 in human navigation, we examined the correlation between CRY2 expression and various cognitive processes underlying navigation, and found a high correlation of CRY2 expression with neural activity across multiple cognitive domains, particularly object and shape perception and spatial memory. Further analyses of the relation between the neural activity of human navigation and the expression maps of genes in two CRY2-related pathways, i.e., magnetoreceptive and circadian-related functions, found a trend of correlation for the CLOCK gene, a core circadian regulator gene, suggesting that CRY2 may modulate human navigation through its role in circadian rhythm. This observation was further confirmed by a behavioral study in which individuals with better circadian regularity in daily life showed a better sense of direction. Taken together, our study presents the first neural evidence linking CRY2 with human navigation, possibly through the modulation of circadian rhythm.
... In memory, these spatial representations are described through two relationships: features relative to our own position are represented in an egocentric (subject-to-object) frame of reference; and features relative to each other are represented in an allocentric (object-to-object) frame of reference. Exploratory navigation from a ground-level perspective, also called route navigation, is typically associated with an egocentric frame of reference [5][6][7]. In contrast, information in an allocentric frame of reference is categorized independently of one's own location and echoes concepts from Tolman's [8] cognitive map, which posits that the brain forms map-like representations of our environment. ...
Article
Full-text available
In memory, representations of spatial features are stored in different reference frames; features relative to our position are stored egocentrically and features relative to each other are stored allocentrically. Accessing these representations engages many cognitive and neural resources, and so is susceptible to age-related breakdown. Yet, recent findings on the heterogeneity of cognitive function and spatial ability in healthy older adults suggest that aging may not uniformly impact the flexible use of spatial representations. These factors have yet to be explored in a precisely controlled task that explicitly manipulates spatial frames of reference across learning and retrieval. We used a lab-based virtual reality task to investigate the relationship between object–location memory across frames of reference, cognitive status, and self-reported spatial ability. Memory error was measured using Euclidean distance from studied object locations to participants’ responses at testing. Older adults recalled object locations less accurately when they switched between frames of reference from learning to testing, compared with when they remained in the same frame of reference. They also showed an allocentric learning advantage, producing less error when switching from an allocentric to an egocentric frame of reference, compared with the reverse direction of switching. Higher MoCA scores and better self-assessed spatial ability predicted less memory error, especially when learning occurred egocentrically. We suggest that egocentric learning deficits are driven by difficulty in binding multiple viewpoints into a coherent representation. Finally, we highlight the heterogeneity of spatial memory performance in healthy older adults as a potential cognitive marker for neurodegeneration, beyond normal aging.
... Some research indicates that the two reference frames act simultaneously in spatial memory (e.g., McNamara, 2002; Mou et al., 2004; Waller and Hodgson, 2006; see also the review by Avraamides and Kelly, 2008). In other cases, knowledge of one reference frame develops in the relative absence of the other (Wang and Spelke, 2000; Ishikawa and Montello, 2006). Our study contributes to this discussion in two ways: first, it provides evidence that reference frames in spatial memory can be modulated top-down, implying that the adoption of a reference frame is influenced by the intent of the participants. ...
Article
We investigated if contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric or egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.
... Two tasks were used to assess spatial knowledge. The first was a pointing task, which is commonly used to assess spatial abilities in real-world studies [45][46][47] . In the pointing task (Fig. 3), a picture of a POI was presented on the phone and participants had to point, using the phone, from their current location to the POI. ...
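Pointing tasks of this kind are typically scored as the angular deviation between the direction the participant pointed and the true bearing from their location to the POI. Below is a minimal sketch of such scoring, with hypothetical function names and a simple planar-coordinate convention, not the cited study's actual pipeline:

```python
import math

def bearing_deg(frm, to):
    """Direction (degrees, counterclockwise from the +x axis) from one
    (x, y) position to another."""
    return math.degrees(math.atan2(to[1] - frm[1], to[0] - frm[0])) % 360

def pointing_error_deg(pointed, frm, to):
    """Absolute angular error between the pointed direction and the true
    direction to the target, wrapped into [0, 180] degrees."""
    diff = (pointed - bearing_deg(frm, to)) % 360
    return min(diff, 360 - diff)

# Hypothetical trial: a participant at the origin points at 60 degrees toward
# a target that actually lies at 45 degrees, giving roughly 15 degrees of error.
print(pointing_error_deg(60.0, (0.0, 0.0), (1.0, 1.0)))
```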
Article
GPS navigation is commonplace in everyday life. While it has the capacity to make our lives easier, it is often used to automate functions that were once exclusively performed by our brain. Staying mentally active is key to healthy brain aging. Therefore, is GPS navigation causing more harm than good? Here we demonstrate that traditional turn-by-turn navigation promotes passive spatial navigation and ultimately, poor spatial learning of the surrounding environment. We propose an alternative form of GPS navigation based on sensory augmentation, that has the potential to fundamentally alter the way we navigate with GPS. By implementing a 3D spatial audio system similar to an auditory compass, users are directed towards their destination without explicit directions. Rather than being led passively through verbal directions, users are encouraged to take an active role in their own spatial navigation, leading to more accurate cognitive maps of space. Technology will always play a significant role in everyday life; however, it is important that we actively engage with the world around us. By simply rethinking the way we interact with GPS navigation, we can engage users in their own spatial navigation, leading to a better spatial understanding of the explored environment.
... Duration was varied to see whether additional context would improve scene recognition. Adding context might help if participants used context to build up a mental map of a scene to develop a viewpoint-invariant representation [106][107][108][109] ; however, participants might alternatively use the presence or absence of specific diagnostic objects or landmarks in a scene to recognize it. In that case, additional context may not help and may instead hurt performance because it adds potentially distracting information in the form of objects and landmarks that might be relatively far away from the target scene. ...
Article
... It is already known that environmental orientation is a crucial component of successful spatial navigation. During navigation, a sense of direction can help us to establish an understanding of spatial relationships between different locations and can improve the representational stability of situated real-world objects [34]. For humans, orientation and directional information are guided predominantly by visual cues, and hence it can be argued that successful navigation in space requires one either to draw on an already accumulated store of visual information about previously visited locations or to create new mental images for current or future reference. ...
Article
Semantic drift is a well-known concept in distributional semantics, used to demonstrate gradual, long-term changes in the meanings and sentiments of words, and is largely detectable by studying the composition of large corpora. In our previous work, which used ontological relationships between words and phrases, we established that certain kinds of semantic micro-changes can be found in social media emerging around natural hazard events, such as floods. Our previous results confirmed that semantic drift in social media can be used for the early detection of floods and to increase the volume of ‘useful’ geo-referenced data for event monitoring. In this work we use deep learning to determine whether images associated with ‘semantically drifted’ social media tags reflect changes in crowd navigation strategies during floods. Our results show that alternative tags can be used to differentiate naïve and experienced crowds witnessing flooding of various degrees of severity.
... Fourth, Huang et al. (2012) and Von Stülpnagel & Steffens (2012) stressed the positive role of familiarity with the environment in spatial learning, i.e., participants familiar with the study area performed better overall than unfamiliar ones. Fifth, Wang & Spelke (2000) and Ruginski et al. (2019) point to the disorientation and mental rotation that occur when navigating with mobile devices while on the move as interfering with spatial learning. ...
Article
Turn-by-turn (TBT) route guidance technology installed on mobile phones is very popular among car drivers for wayfinding purposes. Previous studies examined their effect on spatial knowledge predominantly on pedestrians or in virtual environments. Drivers' spatial knowledge was experimentally compared in two random groups: audiovisual route guidance using the TBT navigation feature of the Google Maps app installed on a mobile phone, and a paper map. Participants drove their own vehicles to a predesignated destination in an unfamiliar residential neighborhood. Spatial knowledge tests (orientation, landmark recognition and route recognition) were subsequently administered. The scores of map-assisted drivers were uncorrelated and, on average, higher in orientation (deviation in direction), landmark recognition and route recognition (error percentage). The landmark recognition scores of drivers assisted by TBT route guidance were significantly lower with a very large effect size. The route recognition scores of drivers assisted by route guidance showed strong correlations with orientation and with landmark recognition scores. Results can be attributed to the differences in cognitive effort required to complete the wayfinding task: unlike memorizing a global map survey, passively following TBT audiovisual instructions does not require drivers to actively encode, transform, and continuously monitor their egocentric position in space. Drivers also showed somewhat poorer performance relative to studies with pedestrians which can be explained by the greater mental effort, compared to wandering on foot, involved in wayfinding while safely driving a rapidly moving vehicle. The future implications of the increasing dependence on mobile navigation technologies are further discussed.
... The choice of this number of participants is justified by the type of analysis proposed (qualitative analysis of verbal reports). Some studies have used small groups of participants, like the one proposed in this research (Jacobson, 1998; Gladstone, 1991; and Wang & Spelke, 2000). The blind participants had developed orientation and mobility skills with cane use, had acquired the ability to communicate with blind and sighted peers, had been trained in the use of tactile maps, and had a degree of independence in activities of daily living. ...
Article
The way blind people represent space creates a need for them to develop particular skills for obtaining information from the environment. One way to gain this experience is through spatial learning, which is characterized by the capacity to learn, represent, and update places and targets (Balakrishna, Bousquet, & Honavar, 1998). To analyze these processes in blind people, the doctoral thesis “Representación espacial del entorno en invidentes estimulados de manera háptica con un dispositivo mecatrónico, DMREI y con el bastón clásico”, summarized in this article, was developed. The doctoral work centers on the study of the spatial representation of the environment produced by technological devices that adapt to humans through their sensory mechanisms. Specifically, it addresses the identification of differences in spatial representation, in terms of strategies, when using a mechatronic system compared with the classic cane. The results show differences with the use of the device: it consolidates a better representation of space, as demonstrated in the handling of a variety of strategies and in the perceptual anticipation of spatial updating, in which the blind participants achieved better performance.
... The congruence of these various perceptual inputs provides humans with the ability to assess their own movement through space efficiently. However, to encode spatial knowledge during navigation, humans do not always depend on all of these sensory channels simultaneously (Wang & Spelke, 2000). For example, individuals without visual impairment can efficiently construct spatial knowledge from visual information alone (e.g., Péruch & Wilson, 2004). ...
Article
In a fast-paced digital society, individuals increasingly rely on computerized location-based services to efficiently find their way through unfamiliar environments. However, scientific evidence is increasingly showing that despite digital navigation assistance helping people to find their way, it can cause wayfinders to become “mindless” of the traversed environment, thus acquiring no or very little spatial knowledge in the long term. It is still not entirely clear what causes these impairments or how the design of navigation devices can be improved to counteract such undesirable effects. The objective of this thesis is to gain empirical insights into the role of stressful navigation conditions in potential spatial learning impairments, and to identify the features in the environment to which it is particularly important that wayfinders pay attention in order to increase their spatial knowledge even when experiencing stress. Building on existing work in spatial cognition, cognitive geography, and stress research, the studies of this thesis investigate whether and how highly visible landmarks can improve memory of large spaces like cities, and how that may be influenced by navigators’ stress states. It is widely accepted that landmarks serve a key role in the development of spatial knowledge, and there has been increasing interest in integrating landmarks into automated navigation instructions in recent decades. Specifically, recent studies have pointed to a potential advantage of so-called global landmarks that are visible from several locations in an environment for spatial orientation and route learning. However, there has been little research on the difference in mentally encoding and learning the locations of global landmarks as compared to landmarks that are only visible locally.
In this thesis, I conducted two virtual reality experiments that assessed human participants’ capability to acquire spatial knowledge from local or global landmark configurations in situations with and without stress. Insights from this work can help designers of future navigation systems, and industry decision makers, to reconsider which and when landmarks should be presented in navigation systems. For example, future navigation assistance may dynamically adapt the display of local and global landmarks according to the contextual demands of the wayfinder. In Study I, I investigated the role of time pressure in learning the spatial relations among local landmarks (e.g., a shop along the route) as compared to global landmarks (e.g., a tower in the distance) during navigation through virtual cities. During this navigation, participants used a navigation aid and had explicit learning instructions for the different local or global landmark configurations. Participants’ performance in a survey knowledge test after navigation suggests that global landmark configurations were not represented more accurately than local landmark configurations, and that survey knowledge acquisition was not impaired under time pressure. In contrast to prior findings, the results of Study I indicate no advantage of distant global landmarks for spatial knowledge acquisition. In Study II, I investigated the role of working memory in acquiring survey knowledge from sequentially (locally) or simultaneously (globally) visible landmark configurations during navigation through virtual cities. As in Study I, participants navigated routes through virtual cities, but both local and global landmarks were located along these routes. Moreover, one group of participants performed a concurrent spatial task that aimed to interfere with the active processing of information in working memory. 
I expected that an increase in spatial working memory demands would impair survey knowledge for sequentially visible local landmarks more than for simultaneously visible global landmarks. I also assessed individuals’ working memory capacity, because I expected greater capacity to be beneficial for the sequential integration of local landmarks over time. My findings show a negative effect of concurrent task demands for both local and global landmark learning. Furthermore, the data indicates that participants had improved spatial knowledge of globally visible landmarks as compared to locally visible landmarks along the route. Finally, Study II revealed that individual working memory capacity moderates the accuracy of acquiring spatial knowledge of global landmarks. Only participants with greater working memory capacity are able to benefit from globally visible landmarks. In summary, this work has identified a number of cognitive and contextual conditions that impair users’ ability to take advantage of globally visible landmarks for spatial learning. Based on these conditions, the present work provides design guidelines for future learning-aware navigation systems. For example, my analysis of participants’ learning performance indicates that users with greater working memory capacities have the necessary cognitive resources available to take advantage of global landmarks for spatial learning. While this might imply that the navigation systems of tomorrow need to be aware of users’ spatial abilities to optimize information display, future research should also identify means to support navigators with low working memory capacity.
Article
While the brain has evolved robust mechanisms to counter spatial disorientation, their neural underpinnings remain unknown. To explore these underpinnings, we monitored the activity of anterodorsal thalamic head direction (HD) cells in rats while they underwent unidirectional or bidirectional rotation at different speeds and under different conditions (light vs dark, freely-moving vs head-fixed). Under conditions that promoted disorientation, HD cells did not become quiescent but continued to fire, although their firing was no longer direction specific. Peak firing rates, burst frequency, and directionality all decreased linearly with rotation speed, consistent with previous experiments where rats were inverted or climbed walls/ceilings in zero gravity. However, access to visual landmarks spared the stability of preferred firing directions (PFDs), indicating that visual landmarks provide a stabilizing signal to the HD system while vestibular input likely maintains direction-specific firing. In addition, we found evidence that the HD system underestimated angular velocity at the beginning of head-fixed rotations, consistent with the finding that humans often underestimate rotations. When head-fixed rotations in the dark were terminated HD cells fired in bursts that matched the frequency of rotation. This postrotational bursting shared several striking similarities with postrotational “nystagmus” in the vestibulo-ocular system, consistent with the interpretation that the HD system receives input from a vestibular velocity storage mechanism that works to reduce spatial disorientation following rotation. Thus, the brain overcomes spatial disorientation through multisensory integration of different motor-sensory inputs.
Article
Research Findings: Although the importance of block play to children’s spatial ability has been recognized globally, little is known about children’s use of spatial frames of reference during spatial processing. This study investigated a guided block-play intervention to promote children’s use of their intrinsic frame of reference, an identified effective frame of reference for spatial information. Participants included 42 kindergarten children (Mage = 67.12 months, SD = 3.91, 48% girls) and 42 pre-kindergarten children (Mage = 55.80 months, SD = 3.63, 57% girls) from one public kindergarten in Shanghai, China. A quasi-experimental method was used with a four-month intervention program designed for the experimental groups. Statistically significant differences were identified in the performance of the spatial tasks between the experimental and control groups in both kindergarten and pre-kindergarten children after the intervention. The results revealed that block-play interventions can effectively increase children’s ability to use their intrinsic frame of reference and their preference for using this frame for spatial representations. Practice or Policy: These findings provide a new perspective on analyzing children’s spatial competence and support the benefits of block-play interventions with empirical evidence.
Chapter
Solving spatial tasks is crucial for adaptation and is made possible by the representation of space. The exact nature of this representation, which can rely on egocentric and allocentric frames of reference, is still debated. In this paper, a modelling approach is proposed to complement research on humans and animal models. Artificial agents, simulated mobile robots controlled by an artificial neural network, are evolved through evolutionary strategies to solve a spatial task that consists in locating the central area between two landmarks in a rectangular enclosure. This is a non-trivial task that requires the agent to identify the landmarks' locations, the spatial relation between the landmarks, and landmark position relative to the environment. Different populations of agents with different spatial frames of reference are compared. Results indicate that both egocentric and allocentric frames of reference are effective, but the allocentric frame gives advantages and leads to better performance. Keywords: spatial tasks, evolutionary strategies, spatial frames of reference, action, embodied agents.
Article
With the deepening of the notion of the spatial cognitive map, various parameters have been identified as playing a role in the processes of encoding the representation of space within egocentric and allocentric frames of reference. The subject's activity and the nature of the environmental configuration prove to be determining factors in this encoding. Although work in this domain is not always in agreement, it appears that the subject's actions contribute to the integration of self-centered landmarks, whereas the geometric characteristics of configurations tend to favor the internalization of external references. The coordination of these egocentric and allocentric encodings emerges as a key to success in spatial tasks. This theoretical note aims to clarify the role played by the subject's movements, disorientation, and initial learning viewpoint, as well as the configuration's intrinsic axis and the regularity of its shape, in the coordination of egocentric and allocentric representations.
Thesis
Environmental representations have typically been categorized into procedural descriptions and survey knowledge. Based on recent findings about allocentric and egocentric encoding of spatial relations, we hypothesized that survey knowledge could be further classified into survey-allocentric and survey-egocentric representations (depending on which encoding the person uses). Our study examined the distinction in survey representations using a map-drawing task. The second goal was to examine how this distinction in environmental representations relates to individual differences in allocentric and egocentric spatial abilities, using computerized spatial visualization and spatial orientation tasks. The third goal was to explore how these environmental representations differ in landmark knowledge, using landmark recognition and landmark direction tasks. The map drawings were reliably classified into procedural, survey-allocentric, and survey-egocentric representations, based on the encoding of spatial relations. Individuals who drew survey-egocentric maps tended to perform more accurately and faster on the egocentric spatial orientation task than those who drew procedural maps. Significant differences were found in accuracy and reaction times on the landmark tasks between different types of landmarks: no-choice versus active, non-cultural versus cultural, permanent versus temporary, and scenes versus individual landmarks. There were no significant differences in landmark recognition and direction tasks between individuals with different environmental representations [COPYRIGHT CC BY-NC-ND 4.0, K.G. GOH & J. Y. ZHONG 2011, NATIONAL UNIVERSITY OF SINGAPORE].
Article
The effect of the FIFA 11+ exercises on static postural balance in youth soccer players was determined. Twenty youth soccer players were assessed using the Romberg test with eyes open and closed on a force platform. The players were randomly divided into a control group (n = 10), who continued their soccer practice sessions, and an intervention group (n = 10), who continued their soccer practice sessions and additionally performed the supervised FIFA 11+ exercises over 22 sessions. The study found no statistically significant changes in the plantar center of pressure (COP); the mean p-value obtained on the two axes was 0.7869 (criterion p < 0.05), as evidenced by the Mann-Whitney, Wilcoxon, and Kolmogorov-Smirnov statistical tests. Applying the 11+ sports injury prevention program over 22 sessions did not produce meaningful improvements in static postural balance.
Article
Investigating spatial knowledge acquisition in virtual environments allows studying different sources of information under controlled conditions. Therefore, we built a virtual environment in the style of a European village and investigated spatial knowledge acquisition by experience in the immersive virtual environment and compared it to using an interactive map of the same environment. The environment was well explored, with both exploration sources covering the whole village area. We tested knowledge of cardinal directions, building-to-building orientation, and judgment of direction between buildings in a pointing task. The judgment of directions was more accurate after exploration of the virtual environment than after map exploration. The opposite results were observed for knowledge of cardinal directions and relative orientation between buildings. Time for cognitive reasoning improved task accuracies after both exploration sources. Further, an alignment effect toward the north was only visible after map exploration. Taken together, our results suggest that the source of spatial exploration differentially influenced spatial knowledge acquisition.
Chapter
This chapter provides a broad overview of the major findings in the neuroscience of navigation to date. It focuses on two issues: how specialized spatial cells located throughout the brain represent an organism's location in and movement through an environment, and the roles of different brain areas and neural networks involved in accomplishing navigational tasks. Much of early neuroscience research has been guided by the principle of localization of function, in which particular mental processes or representations are linked to specific brain areas. In the study of navigation, this led to a strong focus on the identification of the neural systems underlying egocentric representations and those responsible for allocentric representations. The chapter concludes with a discussion of the limitations of the work conducted to date and the challenges for the field in understanding how the brain accomplishes large-scale navigation in natural environments.
Article
Representations of space in mind are crucial for navigation, facilitating processes such as remembering landmark locations, understanding spatial relationships between objects, and integrating routes. A significant problem, however, is the lack of consensus on how these representations are encoded and stored in memory. Specifically, the nature of egocentric and allocentric frames of reference in human memory is widely debated. Yet, in recent investigations of the spatial domain across the lifespan, these distinctions in mnemonic spatial frames of reference have identified age-related impairments. In this review, we survey the ways in which different terms related to spatial representations in memory have been operationalized in past aging research and suggest a taxonomy to provide a common language for future investigations and theoretical discussion. Space can be represented from a variety of perspectives, called spatial frames of reference. Whether spatial information is stored in human memory within these frames of reference, called “egocentric” and “allocentric”, is highly debated. In this review, we present a global taxonomy for spatial memory representations to encourage a common language in future research, using cognitive aging as a case study to support this aim.
Chapter
Strategic planning has recently focused its attention on the elements that characterize the spaces through which the agents move, paying particular attention on the way in which they incorporate them. Spatial environments are currently studied from different perspectives, from the cognitivist point of view they represent knowledge-intensive, significant spatial entities to which human agents need to relate adaptively. The way in which humans use the surrounding space is influenced by a series of implicit factors, such as perceptions, emotions, sensations. These elements, being often tacit, are difficult to identify although they strongly characterize these spaces. For this reason, these characteristics become basic for effective strategic planning at urban and regional level and for environmental decision-making processes. This study presents a method for quantitatively measuring the reactions of visitors to scenes they encounter in spaces with an extremely small population. We conducted an experiment that required participants to take photographs of elements that caught their attention in poorly structured rural areas. In this way, the photographed features and the related comments have made it possible to better grasp perceptions, sensations, emotions that can represent crucial spatial variables for structuring and interpreting spaces.
Article
This study examined functions of self-motion and visual cues in updating people’s actual headings in multiscale spaces. In an immersive virtual environment, the participants learned objects’ locations inside two misaligned rectangular rooms by locomoting within and between the rooms. In each testing trial, the participants locomoted to adopt an actual perspective in one room, and then they judged relative direction to a target from an imagined perspective in the other room (remote perspective-taking). The imagined and actual perspectives had the same/opposite cardinal directions (globally aligned/misaligned) or had the same/opposite orientations defined by room structures (locally aligned/misaligned). Global/local sensorimotor alignment effects mean that performance is better when imagined and actual perspectives were globally/locally aligned than misaligned. We examined these effects to infer updating actual headings in global/local representations. The results showed local but no global sensorimotor alignment effect. By contrast, there were both global and local sensorimotor alignment effects when the participants judged across-room relative headings prior to remote perspective-taking. These results indicate that people update headings in local representations based on visual similarities between local spaces. People update headings in global representations based on self-motion cues available in across-boundary navigation, but updating headings globally requires tasks to activate global-relevant sensorimotor representations.
Article
Neurophysiological studies show that the firing of place and head-direction (HD) cells in rats can become anchored to features of the perceptible environment, suggesting that those features partially specify the rat's position and heading. In contrast, behavioral studies suggest that disoriented rats and human children rely exclusively on the shape of their surroundings, ignoring much of the information to which place and HD cells respond. This difference is explored in the current study by investigating young children's ability to locate objects in a square chamber after disorientation. Children 18–24 months old used a distinctive geometric cue but not a distinctively colored wall to locate the object, even after they were familiarized with the colored wall. Results suggest that the spatial representations underlying reorientation and object localization are common to humans and other mammals. Together with the neurophysiological findings, these experiments raise questions for the hypothesis that hippocampal place and HD cells serve as a general orientation device for target localization.
Article
Full-text available
A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as asymmetries in similarity judgments, without positing distorted representations of physical scales.
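The estimation process described above can be sketched as a weighted combination of the remembered fine-grain value with the category prototype, truncated at the category boundaries. This is a minimal illustration of the idea, not the authors' model code; the weight value is an assumption.

```python
def report_location(fine_grain, prototype, boundaries, weight=0.3):
    """Report an exact value from inexact memory plus category information.

    fine_grain : remembered fine-grain value (e.g., angle of a dot in a quadrant)
    prototype  : central (prototypic) value of the category
    boundaries : (low, high) category boundaries
    weight     : pull toward the prototype -- an assumed value, not a fit
    """
    # Weighting with the prototype biases reports toward the category centre...
    estimate = (1 - weight) * fine_grain + weight * prototype
    # ...and truncation keeps reports inside the category boundaries.
    low, high = boundaries
    return min(max(estimate, low), high)
```

In this sketch, even an unbiased memory is reported displaced toward the quadrant prototype, while the variability of reports shrinks, which is how the model can introduce bias yet improve overall accuracy.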
Article
Full-text available
Describes characteristics of place navigation by rats in the Morris water task. Ss learned to find a small invisible platform in a large, circular swimming pool to escape from cool milk. A variety of spatial localization strategies, including spatial mapping, response sequencing, and distal cue navigational strategies, were demonstrated. Using variants of this task, the following was demonstrated: (a) Rats very readily learned true mapping strategies, being able to swim directly to the invisible platform from anywhere in the pool after only a few trials; this ability was not dependent on navigating by specific distal cues nor on starting from a familiar place. (b) Ss acquired information that facilitated subsequent place navigation merely by viewing the room from the location of the invisible goal. (c) The location of the invisible platform was remembered extremely well for several weeks. (d) The availability of only distal auditory beacons permitted acquisition of an accurate spatial mapping strategy. (e) A single S could acquire both mapping and nonmapping strategies in the swimming pool and apply them when required by the situation. Results emphasize the utility of the Morris water task and demonstrate some of the basic features of mapping and nonmapping strategies in solving a variety of spatial problems.
Article
Full-text available
Successful object recognition is essential for finding food, identifying kin, and avoiding danger, as well as many other adaptive behaviors. To accomplish this feat, the visual system must reconstruct 3-D interpretations from 2-D “snapshots” falling on the retina. Theories of recognition address this process by focusing on the question of how object representations are encoded with respect to viewpoint. Although empirical evidence has been equivocal on this question, a growing body of surprising results, including those obtained in the experiments presented in this case study, indicates that recognition is often viewpoint dependent. Such findings reveal a prominent role for viewpoint-dependent mechanisms and provide support for the multiple-views approach, in which objects are encoded as a set of view-specific representations that are matched to percepts using normalization procedures.
Article
Full-text available
Every eye movement produces a shift in the visual image on the retina. The receptive field, or retinal response area, of an individual visual neuron moves with the eyes so that after an eye movement it covers a new portion of visual space. For some parietal neurons, the location of the receptive field is shown to shift transiently before an eye movement. In addition, nearly all parietal neurons respond when an eye movement brings the site of a previously flashed stimulus into the receptive field. Parietal cortex both anticipates the retinal consequences of eye movements and updates the retinal coordinates of remembered stimuli to generate a continuously accurate representation of visual space.
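At its core, the updating described for parietal neurons is a coordinate shift: after the eyes move, the remembered retinal location of a flashed stimulus must be displaced by the opposite of the eye movement. A toy sketch of the vector arithmetic only, not a neural model:

```python
def remap_retinal_location(stim_retinal, saccade):
    """Shift a remembered stimulus into post-saccadic retinal coordinates.

    stim_retinal : (x, y) stimulus location relative to fixation, in degrees
    saccade      : (dx, dy) eye displacement, in degrees

    A stimulus remembered at `stim_retinal` lies, after the eye movement,
    at the old location minus the eye displacement.
    """
    return (stim_retinal[0] - saccade[0], stim_retinal[1] - saccade[1])
```

A saccade that lands fixation exactly on the remembered stimulus brings its updated retinal coordinate to the origin, which is why a cell whose receptive field now covers that location can respond even though nothing is currently visible there.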
Article
Full-text available
Hippocampal "place cells" fire when a freely moving rat is in a given location. The firing of these cells is controlled by visual and nonvisual environmental cues. The effects of darkness on the firing of place cells were studied using the task of Muller et al. (1987), in which rats were trained to chase randomly scattered food pellets in a cylindrical drum with a white cue-card attached to the wall. The position of the rats was tracked via an infrared LED on the headstage with a video system linked to a computer. Two experimental protocols were used: in the first, lights were turned off after the rat had already been placed in the chamber; in the second, the rat was placed in the darkened chamber. The dark segments produced by these 2 methods were identical with respect to light and other cues but differed with respect to the rat's experience. The firing patterns of 24 of 28 cells were unaffected by darkness when it was preceded by a light period. In contrast, the firing patterns of 14 of 22 cells changed dramatically when the rats were put into the darkened chamber. Furthermore, the majority of cells that changed their firing pattern in initial darkness maintained that change when the lights were turned on. These results show that place cells can fire differently in identical cue situations and that the best predictor of firing pattern is a combination of current cues and the rat's recent experience. The results are discussed in terms of mnemonic properties of hippocampal cells and "remapping" of place cell representations.
Article
Full-text available
A "point-to-unseen-targets" task was used to test two theories about the nature of cognitive mapping. The hypothesis that a cognitive map is like a "picture in the head" predicts that (a) the cognitive map should have a preferred orientation and (b) all coded locations should be equally available. These predictions were confirmed in Experiments 1 and 3 when targets were cities in the northeastern United States and learning was from a map. The theory that a cognitive map is an orienting schema predicts that the cognitive map should have no preferred orientation and that targets in front of the body should be localized faster than targets behind the body. These predictions were confirmed in Experiments 1 and 2 when targets were local landmarks that had been learned via direct experience. In Experiment 3, when cities in the Northeast were targets and geographical knowledge had been acquired, in part, by traveling in the Northeast, the observed latency profiles were not as predicted by either theory of cognitive mapping. The results suggest that orienting schemata direct orientation with respect to local environments, but that orientation with respect to large geographical regions is supported by a different type of cognitive structure.
Article
Full-text available
Using the techniques set out in the preceding paper (Muller et al., 1987), we investigated the response of place cells to changes in the animal's environment. The standard apparatus used was a cylinder, 76 cm in diameter, with walls 51 cm high. The interior was uniformly gray except for a white cue card that ran the full height of the wall and occupied 100 degrees of arc. The floor of the apparatus presented no obstacles to the animal's motions. Each of these major features of the apparatus was varied while the others were held constant. One set of manipulations involved the cue card. Rotating the cue card produced equal rotations of the firing fields of single cells. Changing the width of the card did not affect the size, shape, or radial position of firing fields, although sometimes the field rotated to a modest extent. Removing the cue card altogether also left the size, shape, and radial positions of firing fields unchanged, but caused fields to rotate to unpredictable angular positions. The second set of manipulations dealt with the size and shape of the apparatus wall. When the standard (small) cylinder was scaled up in diameter and height by a factor of 2, the firing fields of 36% of the cells observed in both cylinders also scaled, in the sense that the field stayed at the same angular position and at the same relative radial position. Of the cells recorded in both cylinders, 52% showed very different firing patterns in one cylinder than in the other. The remaining 12% of the cells were virtually silent in both cylinders. Similar results were obtained when individual cells were recorded in both a small and a large rectangular enclosure. By contrast, when the apparatus floor plan was changed from circular to rectangular, the firing pattern of a cell in an apparatus of one shape could not be predicted from a knowledge of the firing pattern in the other shape. 
The final manipulations involved placing vertical barriers into the otherwise unobstructed floor of the small cylinder. When an opaque barrier was set up to bisect a previously recorded firing field, in almost all cases the firing field was nearly abolished. This was true even though the barrier occupied only a small fraction of the firing field area. A transparent barrier was as effective as the opaque barrier in attenuating firing fields. The lead base used to anchor the vertical barriers did not affect place cell firing.
Article
Full-text available
The ability to evaluate traveled distance is common to most animal species. Head trajectory in space is measured on the basis of the converging signals of the visual, vestibular, and somatosensory systems, together with efferent copies of motor commands. Recent evidence from human studies has shown that head trajectory in space can be stored in spatial memory. A fundamental question, however, remains unanswered: How is movement stored? In this study, humans who were asked to reproduce passive linear whole-body displacement distances while blindfolded were also able to reproduce velocity profiles. This finding suggests that a spatiotemporal dynamic pattern of motion is stored and can be retrieved with the use of vestibular and somesthetic cues.
Article
Full-text available
Previous studies have shown that hippocampal place fields are controlled by the salient sensory cues in the environment, in that rotation of the cues causes an equal rotation of the place fields. We trained rats to forage for food pellets in a gray cylinder with a single salient directional cue, a white card covering 90 degrees of the cylinder wall. Half of the rats were disoriented before being placed in the cylinder, in order to disrupt their internal sense of direction. The other half were not disoriented before being placed in the cylinder; for these rats, there was presumably a consistent relationship between the cue card and their internal direction sense. We subsequently recorded hippocampal place cells and thalamic head direction cells from both groups of rats as they moved in the cylinder; between some sessions the cylinder and cue card were rotated to a new direction. All rats were disoriented before recording. Under these conditions, the cue card had much weaker control over the place fields and head direction cells in the rats that had been disoriented during training than in the rats that had not been disoriented. For the former group, the place fields often rotated relative to the cue card or completely changed their firing properties between sessions. In all recording sessions, the head direction cells and place cells were strongly coupled. It appears that the strength of cue control over place cells and head direction cells depends on the rat's learned perception of the stability of the cues.
Article
Full-text available
Disoriented rats and non-human primates reorient themselves using geometrical features of the environment. In rats tested in environments with distinctive geometry, this ability is impervious to non-geometric information (such as colours and odours) marking important locations and used in other spatial tasks. Here we show that adults use both geometric and non-geometric information to reorient themselves, whereas young children, like mature rats, use only geometric information. These findings provide evidence that: (1) humans reorient in accord with the shape of the environment; (2) the young child's reorientation system is impervious to all but geometric information, even when non-geometric information is available and is represented by the child (such information should improve performance and is used in similar tasks by the oriented child); and (3) the limits of this process are overcome during human development.
Article
Full-text available
Blindfolded sighted, adventitiously blind, and congenitally blind subjects performed a set of navigation tasks. The more complex tasks involved spatial inference and included retracing a multisegment route in reverse, returning directly to an origin after being led over linear segments, and pointing to targets after locomotion. As a group, subjects responded systematically to route manipulations in the complex tasks, but performance was poor. Patterns of error and response latency are informative about the internal representation used; in particular, they do not support the hypothesis that only a representation of the origin of locomotion is maintained. The slight performance differences between groups varying in visual experience were neither large nor consistent across tasks. Results provide little indication that spatial competence strongly depends on prior visual experience.
Article
Full-text available
The human hippocampus has been implicated in memory, in particular episodic or declarative memory. In rats, hippocampal lesions cause selective spatial deficits, and hippocampal complex spike cells (place cells) exhibit spatially localized firing, suggesting a role in spatial memory, although broader functions have also been suggested. Here we report the identification of the environmental features controlling the location and shape of the receptive fields (place fields) of the place cells. This was done by recording from the same cell in four rectangular boxes that differed solely in the length of one or both sides. Most of our results are explained by a model in which the place field is formed by the summation of gaussian tuning curves, each oriented perpendicular to a box wall and peaked at a fixed distance from it.
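The model summarized in this abstract can be sketched as a sum of Gaussian tuning curves, one per wall, each a function of distance to that wall. The tuning width and peak distances below are illustrative assumptions, not fitted parameters from the study.

```python
import math

def place_field_rate(x, y, box, peak_dists, sigma=8.0):
    """Firing rate modeled as a sum of Gaussians of distance-to-wall.

    box        : (width, height) of the rectangular enclosure, in cm
    peak_dists : preferred distance from each wall (west, east, south, north)
    sigma      : tuning width in cm -- an assumed value, not a fitted one

    Each Gaussian is oriented perpendicular to one box wall and peaks at a
    fixed distance from it, so the field is anchored to the walls rather
    than to absolute coordinates.
    """
    width, height = box
    wall_dists = (x, width - x, y, height - y)  # distance to each wall
    return sum(math.exp(-((d - p) ** 2) / (2.0 * sigma ** 2))
               for d, p in zip(wall_dists, peak_dists))
```

In this sketch, lengthening one side of the box moves the walls apart and stretches or splits the predicted field, which is the signature behavior the four-box recordings were designed to reveal.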
Article
Full-text available
Populations of hippocampal neurons were recorded simultaneously in rats shuttling on a track between a fixed reward site at one end and a movable reward site, mounted in a sliding box, at the opposite end. While the rat ran toward the fixed site, the box was moved. The rat returned to the box in its new position. On the initial part of all journeys, cells fired at fixed distances from the origin, whereas on the final part, cells fired at fixed distances from the destination. Thus, on outward journeys from the box, with the box behind the rat, the position representation must have been updated by path integration. Farther along the journey, the place field map became aligned on the basis of external stimuli. The spatial representation was quantified in terms of population vectors. During shortened journeys, the vector shifted from an alignment with the origin to an alignment with the destination. The dynamics depended on the degree of mismatch with respect to the full-length journey. For small mismatches, the vector moved smoothly through intervening coordinates until the mismatch was corrected. For large mismatches, it jumped abruptly to the new coordinate. Thus, when mismatches occur, path integration and external cues interact competitively to control place-cell firing. When the same box was used in a different environment, it controlled the alignment of a different set of place cells. These data suggest that although map alignment can be controlled by landmarks, hippocampal neurons do not explicitly represent objects or events.
Article
Full-text available
In a series of experiments, young children who were disoriented in a novel environment reoriented themselves in accord with the large-scale shape of the environment but not in accord with nongeometric properties of the environment such as the color of a wall, the patterning on a box, or the categorical identity of an object. Because children's failure to reorient by nongeometric information cannot be attributed to limits on their ability to detect, remember, or use that information for other purposes, this failure suggests that children's reorientation, at least in relatively novel environments, depends on a mechanism that is informationally encapsulated and task-specific: two hallmarks of modular cognitive processes. Parallel studies with rats suggest that children share this mechanism with at least some adult nonhuman mammals. In contrast, our own studies of human adults, who readily solved our tasks by conjoining nongeometric and geometric information, indicated that the most striking limitations of this mechanism are overcome during human development. These findings support broader proposals concerning the domain specificity of humans' core cognitive abilities, the conservation of cognitive abilities across related species and over the course of human development, and the developmental processes by which core abilities are extended to permit more flexible, uniquely human kinds of problem solving.
Article
Full-text available
Two triangulation methods for measuring perceived egocentric distance were examined. In the triangulation-by-pointing procedure, the observer views a target at some distance and, with eyes closed, attempts to point continuously at the target while traversing a path that passes by it. In the triangulation-by-walking procedure, the observer views a target and, with eyes closed, traverses a path that is oblique to the target; on command from the experimenter, the observer turns and walks toward the target. Two experiments using pointing and 3 using walking showed that perceived distance, averaged over observers, was accurate out to 15 m under full-cue conditions. For target distances between 15 and 25 m, the evidence indicates slight perceptual underestimation. Results also show that observers, on average, were accurate in imaginally updating the locations of previously viewed targets.
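The geometry behind triangulation-by-walking is simple trigonometry. As a sketch of the logic (not the authors' analysis code), assume for simplicity that the blindfolded observer walks perpendicular to the initial target direction, although the actual procedure used oblique paths; the turn angle the observer produces then reveals the distance they perceived.

```python
import math

def implied_distance(path_length, turn_angle_deg):
    """Recover the distance an observer perceived to a target.

    The observer views a target, closes their eyes, walks `path_length`
    metres along a line perpendicular to the initial target direction,
    then turns by `turn_angle_deg` from the walking direction to face
    where they believe the target is.

    With the walking direction as the x-axis and the target straight
    ahead on +y at perceived distance D, the bearing to the target from
    the turn point satisfies tan(180 deg - turn) = D / path_length.
    """
    return path_length * math.tan(math.radians(180.0 - turn_angle_deg))
```

For example, a turn of roughly 116.6 degrees after a 5 m path implies a perceived distance near 10 m; systematically smaller turns at far targets would indicate the slight underestimation reported beyond 15 m.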
Article
Full-text available
Recent research has uncovered a number of different ways in which bees use cues derived from optic flow for navigational purposes. The distance flown to a food source is gauged by integrating the apparent motion of the visual world that is experienced en route. In other words, bees possess a visually driven 'odometer' that is robust to variations in wind load and energy expenditure. Bees flying through a tunnel maintain equidistance to the flanking walls by balancing the apparent speeds of the images of the walls. This strategy enables them to negotiate narrow passages or to fly between obstacles. The speed of flight in a tunnel is controlled by holding constant the average image velocity as seen by the two eyes. This avoids potential collisions by ensuring that the bee slows down when flying through narrow passages. Bees landing on a horizontal surface hold constant the image velocity of the surface as they approach it. This automatically ensures that flight speed decreases with altitude and is close to zero at touchdown. The movement-sensitive mechanisms underlying these various behaviours seem to be different, qualitatively as well as quantitatively, from those mediating the well-investigated optomotor response.
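Two of the control laws described above, centring by balancing lateral image speeds and speed control by holding the mean image velocity constant, can be written as simple feedback rules. The gains and set point below are assumptions for illustration, not measured values.

```python
def centring_steer(left_image_speed, right_image_speed, gain=0.5):
    """Steering command that balances the image speeds seen by the two eyes.

    Faster image motion on the left means the left wall is closer, so the
    bee steers right (positive command), and vice versa.
    """
    return gain * (left_image_speed - right_image_speed)

def regulate_speed(current_speed, mean_image_speed, set_point=320.0, gain=0.001):
    """Hold the average image velocity constant.

    When the tunnel narrows, image speed rises above the set point and the
    commanded speed drops, slowing the bee in narrow passages; the same rule
    applied to a landing surface brings speed toward zero at touchdown.
    """
    return current_speed - gain * (mean_image_speed - set_point)
```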
Article
Full-text available
There are at least four distinct ways in which familiar landmarks aid an insect on its trips between nest and foraging site. Recognising scenes: when bees are displaced unexpectedly from their hive to one of several familiar locations, they are able to head in the direction of home as though they had previously linked an appropriate directional vector to a view of the scene at the release site. Biased detours: ants recognise familiar landmarks en route and will correct their path by steering consistently to the left or to the right around them. Aiming at beacons: bees and ants also guide their path by approaching familiar landmarks lying on or close to the direct line between start and finish. Simulations suggest that such mechanisms acting together may suffice to account for the routes taken by desert ants through a landmark-strewn environment: the stereotyped trajectories of individual ants can be modelled by a weighted combination of dead reckoning, biased detours and beacon-aiming. These mechanisms guide an insect sufficiently close to an inconspicuous goal for image matching to be successfully employed to locate it. Insects then move until their current retinal image matches a stored view of the surrounding panorama seen from a vantage point close to the goal. Bees and wasps perform learning flights on their first departure from a site to which they will return. These flights seem to be designed to pick up the information needed for several navigational strategies. Thus, a large portion of the learning flight of a bee leaving a feeder tends to be spent close to the feeder so aiding the acquisition of a view from that vantage point, as is needed for image matching. Bees and social wasps also tend to inspect their surroundings while facing along preferred directions and to adopt similar bearings before landing, thereby making it easy to employ retinotopically stored patterns in image matching. 
Aiming at beacons, in contrast, requires a landmark to be familiar to the frontal retina. Objects tend to be viewed frontally while the insect circles through arcs centred on the goal. This procedure may help insects to pick out those objects close to the goal that are best suited for guiding later returns.
Article
Full-text available
Previous research on spatial memory indicated that memories of small layouts were orientation dependent (orientation specific) but that memories of large layouts were orientation independent (orientation free). Two experiments investigated the relation between layout size and orientation dependency. Participants learned a small or a large 4-point path (Experiment 1) or a large display of objects (Experiment 2) and then made judgments of relative direction from imagined headings that were either the same as or different from the single studied orientation. Judgments were faster and more accurate when the imagined heading was the same as the studied orientation (i.e., aligned) than when the imagined heading differed from the studied orientation (i.e., misaligned). This alignment effect was present for both small and large layouts. These results indicate that location is encoded in an orientation-dependent manner regardless of layout size.
Article
Full-text available
Under many circumstances, children and adult rats reorient themselves through a process which operates only on information about the shape of the environment (e.g., Cheng, 1986; Hermer & Spelke, 1996). In contrast, human adults relocate themselves more flexibly, by conjoining geometric and nongeometric information to specify their position (Hermer & Spelke, 1994). The present experiments used a dual-task method to investigate the processes that underlie the flexible conjunction of information. In Experiment 1, subjects reoriented themselves flexibly when they performed no secondary task, but they reoriented themselves like children and adult rats when they engaged in verbal shadowing of continuous speech. In Experiment 2, subjects who engaged in nonverbal shadowing of a continuous rhythm reoriented like nonshadowing subjects, suggesting that the interference effect in Experiment 1 did not stem from general limits on working memory or attention but from processes more specific to language. In further experiments, verbally shadowing subjects detected and remembered both nongeometric information (Experiment 3) and geometric information (Experiments 1, 2, and 4), but they failed to conjoin the two types of information to specify the positions of objects (Experiment 4). Together, the experiments suggest that humans' flexible spatial memory depends on the ability to combine diverse information sources rapidly into unitary representations and that this ability, in turn, depends on natural language.
Article
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.
Article
Ensemble recordings of 73 to 148 rat hippocampal neurons were used to predict accurately the animals' movement through their environment, which confirms that the hippocampus transmits an ensemble code for location. In a novel space, the ensemble code was initially less robust but improved rapidly with exploration. During this period, the activity of many inhibitory cells was suppressed, which suggests that new spatial information creates conditions in the hippocampal circuitry that are conducive to the synaptic modification presumed to be involved in learning. Development of a new population code for a novel environment did not substantially alter the code for a familiar one, which suggests that the interference between the two spatial representations was very small. The parallel recording methods outlined here make possible the study of the dynamics of neuronal interactions during unique behavioral events.
Article
1. If a homing ant (Cataglyphis bicolor, C. albicans) gets lost, it does not perform a random walk but adopts a stereotyped search strategy. During its search the ant performs a number of loops of ever-increasing size, starting and ending at the origin and pointing in different azimuthal directions. This strategy ensures that the centre area, where the nest is most likely to be, is investigated most extensively.

2. After one hour of continuous search the ant's search paths cover an area of about 10⁴ m². Nevertheless, the system of loops performed during this time is precisely centred around the origin. The ant's searching density does not depend on the azimuthal direction around the origin but only on the distance from the origin. It rapidly decreases with increasing distance.

3. The ant's searching pattern can be characterized by two functions: the d/t-function correlating distance (d) with time (t), and the a/t-function correlating azimuthal direction (a) with time. If fixes of the ant's position are taken every 10 s, the vectors pointing from the origin to successive fixes change their lengths d systematically (d/t-function) and their directions a randomly (a/t-function). What is especially characteristic of the ant's searching pattern is the oscillating d/t-function, which clearly demonstrates that the searching ant repeatedly returns to the origin, even after it has walked, within one hour, along a search trajectory of more than 1 km (the latter number refers to C. albicans-A). The ant's walking speed does not change within a search time of 1 h.

4. The distribution of changes in direction between successive segments of a search path, ß, is usually unimodal with a mean of 0°, if complete search paths are considered. Nevertheless, within smaller periods of time, especially during the initial portions of the search, the integrated angle ß may continuously change in the same direction. Such portions of the search crudely resemble a spiral which alternately expands and contracts.

5. Although all 3 species of Cataglyphis studied in this paper adopt the same general search strategy, there are some differences in the fine structure of the search: C. albicans-A departs further from the origin than any other species, and performs the most rapid turns. The tendency towards spiralling is most pronounced in C. albicans-B.

6. An efficient searching strategy is formulated, based on purely theoretical grounds. It is assumed that when the search begins the probability density function (PDF) for the location of the nest is Gaussian in two dimensions (a priori PDF). It is further assumed that the ant can never be completely certain that a given area has been fully explored, so that it is only the probability of encountering the nest within a certain region that decreases as the time spent in searching this region increases. Thus, the most promising region to search is specified by an a posteriori PDF which takes the ant's past performance into account.

7. A computer model is presented that searches in optimum fashion, as proposed above. In the model, motion of the ant is characterized in terms of radial and tangential components, with the tangential component varying randomly and the radial component varying according to the a posteriori PDF. The model successfully describes what the ants are actually doing (e.g., compare Figs. 17 and 18 with Fig. 3, Figs. 19 and 20 with Figs. 8–10, and Fig. 21a and b with Figs. 4 and 5), indicating that the searching behaviour of Cataglyphis is geared to find the nest as quickly as possible.
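The oscillating d/t-function (loops that leave the origin, reach an ever-growing maximum distance, and return) can be caricatured in a few lines. This is a toy of the observed pattern only, not the optimal-search model based on the a posteriori PDF; the loop count, growth factor, and sampling are arbitrary assumptions.

```python
import math
import random

def search_path(n_loops=5, growth=1.5, points_per_loop=10):
    """Toy Cataglyphis-style search path.

    Each loop leaves the origin, reaches a maximum distance that grows from
    loop to loop, and returns to the origin; each loop's azimuth is chosen
    at random. Distance from the origin therefore oscillates in time, as in
    the d/t-function described above, while azimuth varies randomly.
    """
    path = [(0.0, 0.0)]
    radius = 1.0
    for _ in range(n_loops):
        azimuth = random.uniform(0.0, 2.0 * math.pi)
        for i in range(1, points_per_loop + 1):
            t = i / points_per_loop             # progress through this loop
            d = radius * math.sin(math.pi * t)  # out to `radius`, then back
            path.append((d * math.cos(azimuth), d * math.sin(azimuth)))
        radius *= growth                        # loops grow in size
    return path
```

Because every loop ends at the origin, the searching density of this caricature is highest at the centre and falls off with distance, matching the qualitative pattern in point 2.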
Article
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.
Article
Two experiments investigated the viewpoint dependence of spatial memories. In Experiment 1, participants learned the locations of objects on a desktop from a single perspective and then took part in a recognition test; test scenes included familiar and novel views of the layout. Recognition latency was a linear function of the angular distance between a test view and the study view. In Experiment 2, participants studied a layout from a single view and then learned to recognize the layout from three additional training views. A final recognition test showed that the study view and the training views were represented in memory, and that latency was a linear function of the angular distance to the nearest study or training view. These results indicate that interobject spatial relations are encoded in a viewpoint-dependent manner, and that recognition of novel views requires normalization to the most similar representation in memory. These findings parallel recent results in visual object recognition.
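The latency pattern reported here, a linear function of angular distance to the nearest stored view, can be expressed compactly. The intercept and slope below are illustrative placeholders, not the authors' fitted values:

```python
def predicted_latency(test_heading_deg, stored_headings_deg,
                      intercept_ms=800.0, slope_ms_per_deg=3.0):
    """Sketch of a linear normalization model: recognition latency grows
    linearly with the angular distance from the test view to the nearest
    stored (study or training) view. Parameter values are illustrative."""
    def angdist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # shortest rotation between headings
    nearest = min(angdist(test_heading_deg, s) for s in stored_headings_deg)
    return intercept_ms + slope_ms_per_deg * nearest
```

For example, with stored views at 0 and 120 degrees, a test view at 30 degrees is normalized to the 0-degree view, so `predicted_latency(30, [0, 120])` returns 800 + 3 × 30 = 890 ms under these placeholder parameters.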
Article
Recent neurophysiological and behavioral studies suggest that the influence of the vestibular system on spatial orientation runs far deeper than its well-known role in the perception of attitude and motion and in the stabilization of visual images. The vestibular system also appears to play a central role in establishing the fundamental directional reference framework that is used to construct cognitive representations of the environment and to compute optimal trajectories to remembered locations on the basis of visual landmarks. This so-called sense of direction appears to involve the integration of angular velocity signals that arise primarily in the vestibular system. Evidence from behavioral studies suggests that ... allocentric directional information, in conjunction with distance information obtained from other sources, leads to a vector-based internal representation of spatial relationships. In this view, optimal trajectories are computed by subtracting vectors to landmarks, remembered at the goal location, from the perceived vectors at the current location. Consideration of the kind of internal representation necessary for such mapping leads to some new perspectives on the nature of spatial representations in both parietal cortex and hippocampus, as well as to some experimentally testable predictions.
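The vector-subtraction scheme described here has a simple geometric core: the perceived vector to a landmark from the current location, minus the vector to that same landmark as remembered from the goal, equals the vector from the current location to the goal. A minimal sketch (the dictionary interface and averaging over landmarks are illustrative choices, not the authors' formulation):

```python
def goal_vector(perceived, remembered):
    """Vector-based trajectory computation: subtract the landmark vector
    remembered at the goal from the landmark vector perceived at the
    current location, averaged over landmarks for robustness.

    `perceived` and `remembered` map landmark names to (x, y) vectors.
    If the landmark is at L, the animal at C, and the goal at G, then
    (L - C) - (L - G) = G - C, the egocentric vector to the goal.
    """
    dx = dy = 0.0
    for name, (px, py) in perceived.items():
        rx, ry = remembered[name]
        dx += px - rx
        dy += py - ry
    n = len(perceived)
    return (dx / n, dy / n)
```

For instance, if a tree is perceived at (3, 4) from the current location but was remembered at (1, 1) from the goal, the goal lies at (2, 3) relative to the animal.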
Article
The paper presents experimental evidence for homing by path integration in a bird. Using a mammalian model case, the essentials of a cybernetical theory of this type of navigation are developed. As a consequence of its application to the extant data on geese, the necessary information about the translatory component of the animal’s movement along its path appears to be provided by the visual system, viz. the translatory component of the visual flow, whereas the rotatory information must (also) have non-visual sources, e.g. the semicircular canals or the magnetic field.
Article
Recent evidence indicates that mental representations of large (i.e., navigable) spaces are viewpoint dependent when observers are restricted to a single view. The purpose of the present study was to determine whether two views of a space would produce a single viewpoint-independent representation or two viewpoint-dependent representations. Participants learned the locations of objects in a room from two viewpoints and then made judgments of relative direction from imagined headings either aligned or misaligned with the studied views. The results indicated that mental representations of large spaces were viewpoint dependent, and that two views of a spatial layout appeared to produce two viewpoint-dependent representations in memory. Imagined headings aligned with the study views were more accessible than were novel headings in terms of both speed and accuracy of pointing judgments.
Article
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315–320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.
Article
How do we recognize objects despite differences in their retinal projections when they are seen at different orientations? Marr and Nishihara (1978) proposed that shapes are represented in memory as structural descriptions in object-centered coordinate systems, so that an object is represented identically regardless of its orientation. An alternative hypothesis is that an object is represented in memory in a single representation corresponding to a canonical orientation, and a mental rotation operation transforms an input shape into that orientation before input and memory are compared. A third possibility is that shapes are stored in a set of representations, each corresponding to a different orientation. In four experiments, subjects studied several objects each at a single orientation, and were given extensive practice at naming them quickly, or at classifying them as normal or mirror-reversed, at several orientations. At first, response times increased with departure from the study orientation, with a slope similar to those obtained in classic mental rotation experiments. This suggests that subjects made both judgments by mentally transforming the orientation of the input shape to the one they had initially studied. With practice, subjects recognized the objects almost equally quickly at all the familiar orientations. At that point they were probed with the same objects appearing at novel orientations. Response times for these probes increased with increasing disparity from the previously trained orientations. This indicates that subjects had stored representations of the shapes at each of the practice orientations and recognized shapes at the new orientations by rotating them to one of the stored orientations. 
The results are consistent with a hybrid of the second (mental transformation) and third (multiple view) hypotheses of shape recognition: input shapes are transformed to a stored view, either the one at the nearest orientation or one at a canonical orientation. Interestingly, when mirror-images of trained shapes were presented for naming, subjects took the same time at all orientations. This suggests that mental transformations of orientation can take the shortest path of rotation that will align an input shape and its memorized counterpart, in this case a rotation in depth about an axis in the picture plane.
Article
In three experiments, we explore distortions in subjects' judgments of relative geographical relations. People make large systematic errors in judging the geographical relations between two locations that are in different geographical or political units. There is a strong tendency to distort the judged relation to conform with the relation of the superordinate political unit. To account for this result, we present a model in which spatial information is stored hierarchically. Spatial relations between any two locations are stored explicitly only if those locations are within the same superordinate unit. Spatial relations not stored are inferred by combining the relations from between and within superordinate units.
Article
A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.
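The estimation process this model proposes, blending an inexact fine-grain memory with a category prototype and truncating at category boundaries, can be sketched directly. The weight value and the simple clamp below are illustrative assumptions, not the paper's fitted model:

```python
def category_adjusted_report(memory_value, prototype, lower, upper,
                             memory_weight=0.7):
    """Sketch of category-adjusted estimation: the report is a weighted
    combination of the remembered fine-grain value and the category
    prototype, truncated at the category boundaries. The weight and
    clamp are illustrative simplifications."""
    estimate = (memory_weight * memory_value
                + (1 - memory_weight) * prototype)
    return max(lower, min(upper, estimate))  # truncation at boundaries
```

For a dot remembered at 10 degrees within a quadrant spanning 0 to 90 degrees with prototype 45, the report is pulled toward the prototype: 0.7 × 10 + 0.3 × 45 = 20.5. This reproduces the model's key prediction that reports are biased toward a central category value even when memory itself is unbiased.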
Article
Single unit activity was recorded from complex spike cells in the hippocampus of the rat while the animal was performing a spatial memory task. The task required the animal to choose the correct arm of a 4 arm plus-shaped maze in order to obtain reward. The location of the goal arm was varied from trial to trial and was identified by 6 controlled spatial cues which were distributed around the enclosure and which were rotated in step with the goal. On some trials these spatial cues were present throughout the trial (spatial reference memory trials) while on other trials they were present during the first part of the trial but were removed before the rat was allowed to choose the goal (spatial working memory trials). On these latter trials the animal had to remember the location of the cues and/or goal during the delay in order to choose correctly. 55 units were recorded during sufficient reference memory trials for the relationship between their firing pattern and different spatial aspects of the environment to be determined. 33 units had fields with significant relations to the controlled cues while 16 had significant relations to the static background cues, those cues in the environment which did not change position from trial to trial. Of 43 units which could be tested for their relation to the shape of the maze arms themselves, 15 showed such a relationship. Therefore the place units can be influenced by different aspects of the spatial environment but those related to the task requirement appear to be more potent. Interaction effects between the different spatial factors also influenced the firing pattern of some units. Of particular interest was the interaction between the controlled cues and the static background cues found in some cells since this might shed some light on how the hippocampus enables the rat to solve the memory task. 
30 units with place fields related to the controlled cues were recorded during successful performance on spatial working memory trials as well as during spatial reference memory trials. The place fields of 90% of these units were maintained during the retention phase of the memory trials. During the recording of some units, other types of trial were given as well. On control trials, the cues were removed before the rat was placed on the maze. These trials provided controls for the potential influence of information left behind by the controlled cues and for the influence of the animal's behaviour on the unit activity. (ABSTRACT TRUNCATED AT 400 WORDS)
Article
Three classes of theories of the mental representation of spatial relations were tested. Nonhierarchical theories propose that spatial relations among objects in an environment are mentally represented in networks or in imagelike, analog formats. The distinctive claim of these theories is that there is no hierarchical structure to the mental representation. Hierarchical theories, on the other hand, propose that different “regions” of an environment are stored in different branches of a graph-theoretic tree. These theories can be divided into two classes of subtheories depending on the kinds of relations encoded in memory: Strongly hierarchical theories maximize storage efficiency by encoding only those spatial relations needed to represent a layout accurately; partially hierarchical theories predict redundancy in the representation, such that many spatial relations that can be computed also will be stored explicitly. These three classes of theories were tested by having subjects learn the locations of actual objects in spatial layouts or the locations of objects on maps of those layouts. Layouts and maps were divided into regions with transparent boundaries (for the layouts, string on the floor; for the maps, lines). After learning the layouts or maps, subjects participated in three tasks: item recognition, in which the variable of interest was spatial priming; direction judgments; and euclidean distance estimation. Results from all three tasks were sensitive (a) to whether objects were in the same region or in different regions and (b) to the euclidean distances between pairs of objects. These findings were interpreted as supporting partially hierarchical theories of spatial representations. Computer simulations supported this conclusion.
Article
Ensemble recordings of 73 to 148 rat hippocampal neurons were used to predict accurately the animals' movement through their environment, which confirms that the hippocampus transmits an ensemble code for location. In a novel space, the ensemble code was initially less robust but improved rapidly with exploration. During this period, the activity of many inhibitory cells was suppressed, which suggests that new spatial information creates conditions in the hippocampal circuitry that are conducive to the synaptic modification presumed to be involved in learning. Development of a new population code for a novel environment did not substantially alter the code for a familiar one, which suggests that the interference between the two spatial representations was very small. The parallel recording methods outlined here make possible the study of the dynamics of neuronal interactions during unique behavioral events.
Article
Drawing on studies of humans, rodents, birds and arthropods, I show that 'cognitive maps' have been used to describe a wide variety of spatial concepts. There are, however, two main definitions. One, sensu Tolman, O'Keefe and Nadel, is that a cognitive map is a powerful memory of landmarks which allows novel short-cutting to occur. The other, sensu Gallistel, is that a cognitive map is any representation of space held by an animal. Other definitions with quite different meanings are also summarised. I argue that no animal has been conclusively shown to have a cognitive map, sensu Tolman, O'Keefe and Nadel, because simpler explanations of the crucial novel short-cutting results are invariably possible. Owing to the repeated inability of experimenters to eliminate these simpler explanations over at least 15 years, and the confusion caused by the numerous contradictory definitions of a cognitive map, I argue that the cognitive map is no longer a useful hypothesis for elucidating the spatial behaviour of animals and that use of the term should be avoided.
Article
During locomotion, mammals update their position with respect to a fixed point of reference, such as their point of departure, by processing inertial cues, proprioceptive feedback and stored motor commands generated during locomotion. This so-called path integration system (dead reckoning) allows the animal to return to its home, or to a familiar feeding place, even when external cues are absent or novel. However, without the use of external cues, the path integration process leads to rapid accumulation of errors involving both the direction and distance of the goal. Therefore, even nocturnal species such as hamsters and mice rely more on previously learned visual references than on the path integration system when the two types of information are in conflict. Recent studies investigate the extent to which path integration and familiar visual cues cooperate to optimize the navigational performance.
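The rapid error accumulation in path integration described here can be illustrated with a short simulation. The noise magnitudes and path statistics below are arbitrary assumptions chosen only to show the qualitative effect:

```python
import math
import random

def path_integrate(steps, heading_noise_sd=0.05, step_noise_sd=0.02, seed=1):
    """Toy path-integration (dead-reckoning) sketch: the animal sums noisy
    estimates of its own rotations and translations. Returns the homing
    error, i.e. how far from true home the animal lands if it follows its
    internally estimated homing vector. Noise values are illustrative."""
    rng = random.Random(seed)
    true_x = true_y = est_x = est_y = 0.0
    true_h = est_h = 0.0
    for _ in range(steps):
        turn = rng.uniform(-0.3, 0.3)
        true_h += turn
        est_h += turn + rng.gauss(0.0, heading_noise_sd)  # noisy rotation cue
        step = 1.0
        noisy_step = step + rng.gauss(0.0, step_noise_sd)  # noisy odometry
        true_x += step * math.cos(true_h)
        true_y += step * math.sin(true_h)
        est_x += noisy_step * math.cos(est_h)
        est_y += noisy_step * math.sin(est_h)
    # Following the estimated homing vector from the true position lands
    # the animal at (true - est); its distance from the origin is the error.
    return math.hypot(true_x - est_x, true_y - est_y)
```

Averaged over runs, the homing error grows with path length, which is why, as the abstract notes, animals weight learned visual references over path integration when the two conflict.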
Article
Honeybees and other nesting animals face the problem of finding their way between their nest and distant feeding sites. Many studies have shown that insects can learn foraging routes in reference to both landmarks and celestial cues, but it is a major puzzle how spatial information obtained from these environmental features is encoded in memory. This paper reviews recent progress by my colleagues and me towards understanding three specific aspects of this problem in honeybees: (1) how bees learn the spatial relationships among widely separated locations in a familiar terrain; (2) how bees learn the pattern of movement of the sun over the day; and (3) whether, and if so how, bees learn the relationships between celestial cues and landmarks.
Article
Medial temporal brain regions such as the hippocampal formation and parahippocampal cortex have been generally implicated in navigation and visual memory. However, the specific function of each of these regions is not yet clear. Here we present evidence that a particular area within human parahippocampal cortex is involved in a critical component of navigation: perceiving the local visual environment. This region, which we name the 'parahippocampal place area' (PPA), responds selectively and automatically in functional magnetic resonance imaging (fMRI) to passively viewed scenes, but only weakly to single objects and not at all to faces. The critical factor for this activation appears to be the presence in the stimulus of information about the layout of local space. The response in the PPA to scenes with spatial layout but no discrete objects (empty rooms) is as strong as the response to complex meaningful scenes containing multiple objects (the same rooms furnished) and over twice as strong as the response to arrays of multiple objects without three-dimensional spatial context (the furniture from these rooms on a blank background). This response is reduced if the surfaces in the scene are rearranged so that they no longer define a coherent space. We propose that the PPA represents places by encoding the geometry of the local environment.