Article

Isolating Observer-Based Reference Directions in Human Spatial Memory: Head, Body, and the Self-to-Array Axis

Authors: Waller, Lippa, and Richardson

Abstract

Several lines of research have suggested the importance of egocentric reference systems for determining how the spatial properties of one's environment are mentally organized. Yet relatively little is known about the bases for egocentric reference systems in human spatial memory. In three experiments, we examine the relative importance of observer-based reference directions in human memory by controlling the orientation of the head and body during acquisition. Experiment 1 suggests that spatial memory is organized by a head-aligned reference direction; however, Experiment 2 shows that a body-aligned reference direction can be more influential than a head-aligned direction when the axis defined by the relative positions of the observer and the learned environment (the "self-to-array" axis) is properly controlled. A third experiment shows that the self-to-array axis is distinct from, and can dominate, retina-, head-, and body-based egocentric reference systems.


... The second, independent motivation for the study concerned the reference frame within which knowledge was retrieved, namely body-based or location perspective. The two reference frames are not identical, as indicated in a study by Waller, Lippa, and Richardson (2008). In their study, participants memorized an object layout placed to the left of where their body, head, and eyes were facing. ...
... In spatial-cognition research these reference frames are typically not differentiated, and only body-based frames are considered. However, in accordance with Waller et al. (2008), our results showed that both reference frames are indeed relevant, and by analyzing conflicting situations (i.e., contra-alignment), we found the location perspective to be dominant. Although Waller et al. (2008) showed this for location memory in a laboratory setup, the present work extends these findings to spatial recall in navigable environments, suggesting that this differentiation is indeed a relevant aspect of everyday navigation. ...
Article
Reference frames in spatial memory encoding have been examined intensively in recent years. However, their importance for recall has received considerably less attention. In the present study, passersby used tags to arrange a configuration map of prominent city center landmarks. It has been shown that such configurational knowledge is memorized within a north-up reference frame. However, participants adjusted their maps according to their body orientations. For example, when participants faced south, the maps were likely to face south-up. Participants also constructed maps along their location perspective, that is, the self-target direction. If, for instance, they were east of the represented area, their maps were oriented west-up. If the location perspective and body orientation were in opposite directions (i.e., if participants faced away from the city center), participants relied on location perspective. The results indicate that reference frames in spatial recall depend on the current situation rather than on the organization in long-term memory. These results cannot be explained by activation spread within a view graph, which had been used to explain similar results in the recall of city plazas. However, the results are consistent with forming and transforming a spatial image of nonvisible city locations from the current location. Furthermore, prior research has almost exclusively focused on body- and environment-based reference frames. The strong influence of location perspective in an everyday navigational context indicates that such a reference frame should be considered more often when examining human spatial cognition.
... Memory of objects' locations is critical to our daily lives. Studies of spatial memory have extensively examined selection of spatial reference systems in organizing or reorganizing spatial memories of objects' locations (Shelton & McNamara, 1997, 2001; Greenauer & Waller, 2008; Mou, Liu, & McNamara, 2009; Mou, Xiao, & McNamara, 2009; Waller, Lippa, & Richardson, 2008; Yamamoto & Shelton, 2008). However, the temporal and dynamical characteristics of selecting reference directions are still poorly understood (Kelly & McNamara, 2010). ...
... The results showed that judgments of relative direction (JRDs) were more accurate at imagined headings parallel to the learning viewpoints than at novel imagined headings. Recently, Waller et al. (2008) showed that spatial reference directions may be determined by the axis between the body and the array of objects rather than by the orientation of the body. Yamamoto and Shelton (2008) showed that participants selected an egocentrically aligned reference direction even when they viewed objects sequentially in a random order. ...
Article
Full-text available
Three experiments examined the temporal characteristics in selection of a spatial reference direction. Participants learned a layout of objects presented sequentially in a random order. An array of disks with a symmetric axis different from participants' learning viewpoint was presented before, during, or after learning objects' locations. The results showed that the symmetric axis determined selection of a spatial reference direction when participants perceived the disk array before or during, but not after, learning the objects' locations. These results indicated that participants selected a reference direction prior to seeing objects.
... Other models of spatial memory stipulate that fixed spatial reference directions are used in spatial memory and claim that these reference directions are primarily egocentric (e.g., Greenauer & Waller, 2008; Waller, Lippa, & Richardson, 2008). Waller and his colleagues proposed that spatial reference directions are primarily determined by the body, especially the viewing direction (Greenauer & Waller, 2008) or body-to-array axis (Waller, Lippa, & Richardson, 2008). Because this model makes use of fixed reference axes, it makes identical predictions to our allocentric updating model in the present experiments. ...
Article
Full-text available
Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved at a novel view that was caused by table rotation or by their own locomotion. The stick was removed at test. The results showed that detection of position change was better when an object not on the stick was moved than when an object on the stick was moved. Furthermore, change detection was better in the observer locomotion condition than in the table rotation condition only when an object on the stick was moved, but not when an object not on the stick was moved. These results indicated that when the reference direction was not accurately indicated in the test scene, detection of position change was impaired, but this impairment was smaller in the observer locomotion condition. These results suggest that people not only represent objects' locations with respect to a fixed reference direction but also represent and update their orientation according to the same reference direction, which can be used to recover the accurate reference direction and facilitate detection of position change when no accurate reference direction is presented in the test scene.
... One study that contrasted head and travel direction found that head direction is coded more strongly than travel direction in a population of rodent entorhinal neurons (Raudies et al., 2015). A previous behavioral study in humans also showed different roles of body orientation and head orientation in forming spatial reference systems in memory (Waller et al., 2008). Further experimental findings in rodents and Drosophila also suggest the existence of neural signals for travel direction (Lu et al., 2022; Lyu et al., 2021). ...
Article
Full-text available
We often assume that travel direction is redundant with head direction, but from first principles, these two factors provide differing spatial information. Although head direction has been found to be a fundamental component of human navigation, it is unclear how self-motion signals for travel direction contribute to forming a travel trajectory. Employing a novel motion adaptation paradigm from visual neuroscience designed to preclude a contribution of head direction, we found high-level aftereffects of perceived travel direction, indicating that travel direction is a fundamental component of human navigation. Interestingly, we discovered a higher frequency of reporting perceived travel toward the adapted direction compared to a no-adapt control—an aftereffect that runs contrary to low-level motion aftereffects. This travel aftereffect was maintained after controlling for possible response biases and approaching effects, and it scaled with adaptation duration. These findings demonstrate the first evidence of how a pure travel direction signal might be represented in humans, independent of head direction.
... Evidence shows the brain may rely on an abstract construct, external to any body part, to encode the object's egocentric location, i.e., the object's location relative to the observer [1-3]. For example, Waller et al. [3] found that the smallest pointing errors in a spatial memory task lie along the line between object and observer regardless of the head or trunk position, suggesting that this line (or a similar construct) acted as an egocentric reference frame. The external construct most likely to serve as an egocentric reference frame for azimuthal localization (i.e., localization in the horizontal plane) is the subjective straight-ahead (SSA) [4,5], i.e., the internal representation of the whole body's antero-posterior sagittal half-plane. ...
Article
Full-text available
Young children and adults process spatial information differently: the former use their bodies as the primary reference, while adults seem capable of using abstract frames. The transition is estimated to occur between the 6th and the 12th year of age. The mechanisms underlying spatial encoding in children and adults are unclear, as are those underlying the transition. Here, we investigated the role of the subjective straight-ahead (SSA), the mental model of the body's antero-posterior half-plane, in spatial encoding before and after the expected transition. We tested 6–7-year-old and 10–11-year-old children, and adults, on a spatial alignment task in virtual reality, searching for differences in performance when targets were placed frontally or sideways. The performance differences were assessed both in a naturalistic baseline condition and in a test condition that discouraged using body-centered coordinates through a head-related visuo-motor conflict. We found no differences in the baseline condition, while all groups showed differences between central and lateral targets (SSA effect) in the visuo-motor conflict condition, with 6–7-year-old children showing the largest effect. These results confirm the expected transition timing; moreover, they suggest that children can abstract from the body using their SSA and that the transition reflects the maturation of a world-centered reference frame.
... After completing the three test sessions, participants were debriefed, asked to draw the target object layout on a piece of letter-sized paper in their preferred orientation, paid, and thanked for their participation. The map drawing was used to infer the preferred orientation of participants' mental representation of the target layout (Shelton & McNamara, 1997; Waller, Lippa, & Richardson, 2008). ...
Article
Imagined perspective switches are notoriously difficult, a fact often ascribed to sensorimotor interference between one's to-be-imagined and actual orientation. Here, we demonstrate similar interference effects even when participants know they are in a remote environment with an unknown spatial relation to the learning environment. Participants learned 15 target objects irregularly arranged in an office from one orientation (0°, 120°, or 240°). Participants were blindfolded and disoriented before being wheeled to a test room of similar geometry (Exp. 1) or different geometry (Exp. 2). Participants were seated facing 0°, 120°, or 240°, and asked to perform judgments of relative direction (JRD; e.g., imagine facing "pen", point to "phone"). JRD performance was improved when participants' to-be-imagined orientation in the learning room was aligned with their physical orientation in the current (test) room. Conversely, misalignment led to sensorimotor interference. These concurrent reference frame facilitation/interference effects were further enhanced when the current and to-be-imagined environments were more similar. Whereas sensorimotor alignment improved absolute and relative pointing accuracy, sensorimotor misalignment predominantly increased response times, presumably due to increased cognitive demands. These sensorimotor facilitation/interference effects were sustained and could not be sufficiently explained by initial retrieval and transformation costs. We propose that the facilitation/interference effects occurred between concurrent egocentric representations of the learning and test environments in working memory. Results suggest that merely being in a rectangular room might be sufficient to automatically re-anchor one's representation and thus produce orientation-specific interference. This should be considered when designing perspective-taking experiments to avoid unintended biases and concurrent reference frame alignment effects.
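For readers unfamiliar with the JRD procedure, the correct response on a trial is simply the egocentric bearing of the target computed from the imagined station point and facing direction. The following Python sketch illustrates that geometry; the object coordinates and the three-object phrasing are hypothetical and not taken from the study above.

```python
import math

# Hypothetical 2-D object coordinates (meters); not the layout from the study.
layout = {"pen": (0.0, 0.0), "phone": (2.0, 1.0), "stapler": (-1.5, 2.0)}

def jrd_answer(stand_at, face, point_to, objects):
    """Correct pointing response for a judgment of relative direction (JRD):
    'Imagine you are at A, facing B; point to C.' Returns the bearing of C
    in degrees, clockwise from the imagined facing direction."""
    ax, ay = objects[stand_at]
    bx, by = objects[face]
    cx, cy = objects[point_to]
    heading = math.atan2(bx - ax, by - ay)   # imagined facing direction
    target = math.atan2(cx - ax, cy - ay)    # direction from A to the target
    return math.degrees(target - heading) % 360.0

# e.g., imagine standing at the stapler, facing the pen; point to the phone.
print(f"{jrd_answer('stapler', 'pen', 'phone', layout):.1f} deg")
```

Pointing error on a trial is then the circular difference between this value and the participant's response.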
... Although function and geometry may be the most salient factors determining the activation of image schemas, other influencing factors have been found. Sometimes it is important how objects are labelled (Feist, 2000), whether absolute or relative frames of reference are used for spatial relations (Kelleher & Costello, 2005; Waller, Lippa, & Richardson, 2008), and even whether the related items are animate or inanimate (Feist, 2000). ...
Thesis
Full-text available
In this thesis, 'intuitive use' is defined as the use of a product that is characterized, to varying degrees, by the unconscious application of prior knowledge and that leads to effective and satisfying interaction with minimal consumption of cognitive resources. Image schemas are proposed as a new design tool with high potential, and the usefulness of image schema theory for designing intuitive use is examined. As a sensorimotor form of unconscious knowledge representation, image schemas meet the preconditions for intuitive use, and their application in user interface design is promising. The potential of image schema theory is discussed in light of existing research in linguistics and psychology, and empirical research questions are derived. The first research question concerns the application of image schemas to the presentation of abstract information in user interfaces. Four experiments show that users interact more effectively, with less mental effort, and with greater satisfaction when using theory-conforming user interfaces than when using user interfaces not designed in conformance with the theory. The size of the effect depends on the specific task, the task difficulty, and the presence of additional image schema instances in the user interface. The second research question concerns the practical applicability of image schemas as a design language in the development of intuitively usable user interfaces. Two studies examine inter-rater reliability in applying an image schema vocabulary; designers showed high to moderate agreement in the image-schematic description of tasks, interactions, user interfaces, and user statements. In a further study, designers applied the image schema vocabulary in a user-centered design process and developed two new prototypes of an existing merchandise management system. Image schemas proved particularly useful for translating requirements into design solutions, and users rated the image-schematically designed prototypes better than the existing system. To support development processes, an online database was created that gives product developers access to definitions of image schemas and examples of their application in user interfaces. The studies show that image schema theory provides valid heuristics for the design of intuitive use. An image schema design language is reliable and practically applicable and can be usefully employed in early phases of product development. Existing approaches to designing intuitively usable products, such as user interface metaphors, population stereotypes, or affordances, can not only be complemented by image schemas but are in part surpassed by them in breadth of application. Open questions for future research are also discussed.
... Furthermore, [182] discovered that under certain circumstances, body-aligned reference directions can have a greater influence on human spatial memory than a head-aligned reference direction. More research is needed to elucidate the value of anchoring mediated reality tools for navigation to a head versus an upper-body reference. ...
... The current behavioral results argue for the importance of manipulating these features when studying the neural circuitry of spatial navigation in different species and comparing results across species and virtual reality paradigms (Shelton and McNamara, 2004; Zaehle et al., 2007; Jacobs et al., 2013). During natural navigation, kinesthetic and visual input provides important references for computing heading and position (Ekstrom et al., 2003; Waller et al., 2008) as we continuously update our knowledge of the environment. This position updating involves the interaction of several brain areas. ...
Article
Full-text available
Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects' performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (1) the ground perspective was associated with an egocentric frame of reference, (2) the aerial perspective was associated with an allocentric frame of reference, (3) there was no appreciable performance difference between first- and third-person egocentric viewing positions, and (4) while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications for the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory.
... Tlauka et al. (2008) did not distinguish between different types of cues such as egocentric and allocentric cues which are typically examined in the literature (Burgess, 2006; see also Kelly, Sjolund, & Sturz, 2013). An egocentric representation (Waller, Lippa, & Richardson, 2008) refers to a memory structure in which locations are remembered with reference to one's own position in space using observer-based cues (e.g., a person's trunk or head). Egocentric representations are also referred to as self-to-objects relations (Sholl & Nolin, 1997). ...
Article
Full-text available
The study examined people's spatial memory of a small-scale array of objects. Earlier work has primarily relied on short retention intervals, and to date it is not known whether performance is affected by longer intervals between learning and recall. In the present investigation, university students studied seven target objects. Recall was tested immediately after learning and after an interval of seven days. Performance was found to be similar in the immediate and delayed conditions, and the results suggested that recall was facilitated by egocentric and intrinsic cues. The findings are discussed with reference to recent investigations that have shown that task parameters can influence spatial recall.
... Previous research with sighted participants has examined near-space coding through various sensory modalities (Newell, Woods, Mernagh, & Bülthoff, 2005), different frames of reference (Kappers, 2007; Waller, Lippa, & Richardson, 2008), and viewpoints (Mou, Fan, McNamara, & Owen, 2008; Mou, McNamara, Valiquette, & Rump, 2004), under blindfolded conditions or conditions of non-informative vision (Newport, Rabb, & Jackson, 2002; Newell et al., 2005). On the other hand, the same procedures cannot be used, nor can the same conclusions be validated, for individuals with blindness. ...
Article
The aim of this study is to examine the performance in coding and representing of near-space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). Forty-eight children and teenagers participated. Sixteen of the participants were totally blind or had only light perception, 16 were blindfolded sighted individuals, and 16 were non-blindfolded sighted individuals. Participants were given eight different object patterns in different arrays and were asked to code and represent each of them. The results suggest that vision influences performance in spatial coding and spatial representation of near space. However, there was no statistically significant difference between participants with blindness who used the most effective haptic strategy and blindfolded sighted participants. Thus, the significance of haptic strategies is highlighted.
... Near space has been defined as the space stretching out to an arm's reach (Warren, 1994, p. 102). A variety of studies have evaluated spatial coding and spatial representation of near space, both in sighted individuals (Kappers, 2007; Mou et al., 2004; Mou, Fan, McNamara, & Owen, 2008; Mou, Xiao, & McNamara, 2008; Nardini, Burgess, Breckenridge, & Atkinson, 2006; Newell, Woods, Mernagh, & Bülthoff, 2005; Newport, Rabb, & Jackson, 2002; Pasqualotto & Newell, 2007; Platsidou, 1993; Waller, Lippa, & Richardson, 2008) and in individuals with visual impairments (Hollins & Kelley, 1988; Millar, 1979; Monegato et al., 2007; Pasqualotto & Newell, 2007; Postma et al., 2007; Ungar et al., 1995). All these studies used similar tests, even though the research aims differed and there were certain differences in the method and design of the experiments. ...
Article
Full-text available
Loss of vision is believed to have a great impact on the acquisition of spatial knowledge. The aims of the present study are to examine the performance of individuals with visual impairments on spatial tasks and the impact of residual vision on processing these tasks. In all, 28 individuals with visual impairments-blindness or low vision-participated in this study. The results reveal that participants with visual impairments were competent to perform spatial tasks, and their performance is related to the existence of residual vision.
... Contrast weights (-0.75, 0.25, 1.25, 0.25, 1.25, 0.25, -0.75, -1.75) were selected using the procedure described by Levin and Neumann (1999), and were the same as those employed by Greenauer and Waller (2008). These weights represent a quadratic pattern of performance demonstrating superior performance at one of the eight headings, a linear decrease in performance as imagined headings deviate from this preferred orientation, and facilitation at the counter-aligned heading (facilitation at a counter-aligned heading is frequently observed in studies on spatial reference frames: e.g., Greenauer & Waller, 2008; Roskos-Ewoldsen et al., 1998; Hintzman et al., 1981; Rieser, 1989; Shelton & McNamara, 2004; Waller et al., 2008). For each judgment type, the contrast was fit to the data twice: once with the minimum of the contrast corresponding to the initial learning view (0°) and once with the minimum corresponding to the second learning view (315°). ...
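Mechanically, fitting such a planned contrast reduces to a dot product between the eight condition means and the weight vector, rotated so that its minimum sits on the hypothesized preferred heading. A minimal Python sketch, with invented latency values for illustration:

```python
import numpy as np

# Contrast weights from the excerpt, rearranged so the minimum (-1.75) is at
# index 0 (the preferred heading); the weights sum to zero, as a contrast must.
base_weights = np.array([-1.75, -0.75, 0.25, 1.25, 0.25, 1.25, 0.25, -0.75])

def contrast_score(means, preferred_deg):
    """Dot the 8 condition means (headings 0, 45, ..., 315 deg) with the
    weights rotated so the minimum weight lies on `preferred_deg`.
    With latency data, a larger positive score indicates a better match to
    the predicted pattern (fastest at the preferred heading, a dip at the
    counter-aligned heading 180 deg away)."""
    w = np.roll(base_weights, int(preferred_deg // 45))
    return float(np.dot(w, means))

# Invented mean latencies (s) at headings 0..315 -- illustration only.
means = np.array([2.0, 2.6, 3.0, 2.8, 3.1, 2.9, 3.2, 2.2])
print(contrast_score(means, 0))    # fit with the minimum at the 0 deg view
print(contrast_score(means, 315))  # fit with the minimum at the 315 deg view
```

The significance test would compare this score against its standard error; the sketch shows only how the score itself is computed.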
Article
Full-text available
The current study examined the potential influence of existing spatial knowledge on the coding of new spatial information. In the Main experiment, participants learned the locations of five objects before completing a perspective-taking task. Subsequently, they studied the same five objects and five additional objects from a new location before completing a second perspective-taking task. Task performance following the first learning phase was best from perspectives aligned with the learning view. However, following the second learning phase, performance was best from the perspective aligned with the second view. A supplementary manipulation increased the salience of the initial view through environmental structure as well as the number of objects present. Results indicated that the initial learning view was preferred throughout the experiment. The role of assimilation and accommodation mechanisms in spatial memory, and the conditions under which they occur, are discussed.
... Near space has been defined as the space stretching out to arm's reach (Warren, 1994). A variety of studies have evaluated spatial coding and spatial representation of near space, both in sighted individuals (Mou, Xiao, & McNamara, 2008; Nardini et al., 2006; Newell, Woods, Mernagh, & Bülthoff, 2005; Pasqualotto & Newell, 2007; Platsidou, 1993; Waller, Lippa, & Richardson, 2008) and in individuals with visual impairments (Hollins & Kelley, 1988; Millar, 1979; Monegato, Cattaneo, Pece, & Vecchi, 2007; Papadopoulos et al., 2010; Pasqualotto & Newell, 2007; Postma, Zuidhoek, Noordzij, & Kappers, 2007; Ungar et al., 1995). All these studies used similar tests, even though the research aims differed and there were certain differences in the methodology and design of the experiments. ...
... Spatial locations are necessarily relative (e.g., the sink is left of the refrigerator, Iowa is west of Illinois, etc.), and so memories for spatial layouts must be stored in the context of a spatial reference system. Research indicates that spatial memories are commonly organized around allocentric reference frames centered on the environment (Avraamides & Kelly, 2005; Hintzman, O'Dell, & Arndt, 1981; Kelly, Avraamides, & Loomis, 2007; Kelly & McNamara, 2008; McNamara, 2003; McNamara, Rump, & Werner, 2003; Montello, 1991; Mou & McNamara, 2002; Shelton & McNamara, 2001; Valiquette, McNamara, & Smith, 2003; Werner & Schmidt, 1999; but see Wang & Spelke, 2000; Waller, Lippa, & Richardson, 2008; Waller, Montello, Richardson, & Hegarty, 2002, for evidence of egocentric reference frames centered on the body, head, eyes, etc.). Shelton and McNamara (2001) conducted a series of studies investigating spatial memory organization. ...
Conference Paper
Full-text available
Research on spatial memory indicates that locations are remembered relative to reference frames, which define a spatial reference system. Reference frames are thought to be selected on the basis of environment-based and experience-based cues present during learning. Results from new experiments indicate that reference frames provide scaffolding during the development of spatial memories: the reference frame used to organize locations studied from one perspective was also used to organize new locations studied from another perspective. Further results indicate that the role of reference frames during spatial memory development can cross sensory modalities. Reference frames that organized memories of a visually-experienced environment also organized memories of haptically-experienced locations studied within the same environment. These findings indicate a role for reference frames during spatial memory development, and demonstrate that reference frames influence cross-modal spatial learning. Keywords: Reference frames, Spatial memory development, Perspective taking, Multi-modal learning
... Neurophysiological studies show that egocentric coordinate systems exist in the posterior parietal cortex ([1], [12]). Behavioral experiments indicate that head- and trunk-based reference frames can be stored in memory ([50]). From the theoretical side, Grush ([13]) showed how a coordinate structure can be derived from lower-level coupled sensory and action channels by a process he called s-coordination. ...
Conference Paper
In the last decade many studies examined egocentric and allocentric spatial relations. For various tasks, navigators profit from both kinds of relations. However, their interrelation seems to be underspecified. We present four elementary representations of allocentric and egocentric relations (sensorimotor contingencies, egocentric coordinate systems, allocentric coordinate systems, and perspective-free representations) and discuss them with respect to their encoding and retrieval. Elementary representations are problematic for capturing large spaces and situations which encompass both allocentric and egocentric relations at the same time. Complex spatial representations provide a solution to this problem. They combine elementary coordinate representations either by pair-wise connections or by hierarchical embedding. We discuss complex spatial representations with respect to computational requirements and their plausibility regarding behavioral and neural findings. This work is meant to clarify concepts of egocentric and allocentric, to show their limitations, benefits and empirical plausibility and to point out new directions for future research.
... This functional dissociation observed in amnesia raises the hypothesis that episodic memory relies on spatial processes involved in shifted-view conditions, rather than in iconic–egocentric ones. However, spatial memory cannot be limited to either allocentric or iconic–egocentric processes (Vann et al., 2009; Avraamides and Kelly, 2008; Waller et al., 2008). In the previous studies, the shifted-view condition can be concurrently solved using two types of process: an allocentric process or an egocentric-updating one. ...
Article
Mediotemporal lobe structures are involved in both spatial processing and long-term memory. Patient M.R. suffers from amnesia, due to bilateral hippocampal lesion and temporoparietal atrophy following carbon monoxide poisoning. We compared his performance in immediate spatial memory tasks with the performance of ten healthy matched participants. Using an immediate reproduction of path, we observed a dissociation between his performance in three allocentric tasks and in five egocentric-updating tasks. His performance was always impaired on tasks requiring the use of an egocentric-updating representation but remained preserved on allocentric tasks. These results fit with the hypothesis that the hippocampus plays a role in spatial memory, but they also suggest that allocentric deficits previously observed in amnesia might actually reflect deficits in egocentric-updating processes. Furthermore, the co-occurrence of deficits in episodic long-term memory and short-term egocentric-updating representation without any short-term allocentric deficit suggests a new link between the mnemonic and navigational roles of the hippocampus. The Cognitive Map theory, the Multiple Trace theory, as well as further models linking spatial and nonspatial functions of the hippocampus are discussed.
... Conversely, the results observed in the rotate-with-map condition of Experiment 1 support this same view, for updating was not observed when participants made aligned and misaligned judgments after turning in place while holding the map, that is, maintained a 0° heading difference between the physical (and perceived) body orientation and the map. Our results are consistent with evidence demonstrating the importance of the reference axis defined between an observer and a visual object array on the formation of spatial memory (Waller, Lippa, & Richardson, 2008), as well as findings showing that observer movement does not necessarily modify the privileged status of the learning orientation in memory (Kelly, Avraamides, & Loomis, 2007; Mou et al., 2004; Shelton & McNamara, 2001; Waller et al., 2002). Our results also agree with other studies showing functionally equivalent updating performance between encoding modalities such as vision, spatial hearing, and spatial language (Avraamides et al., 2004; Klatzky et al., 2003; Loomis et al., 2002). ...
Article
Full-text available
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
... This finding suggests that an intermediate environmental cue, between what has traditionally been conceived of as local (e.g., axis of bilateral symmetry) and global structure (e.g., room geometry), may influence how spatial relations in the environment are organized in memory. In particular, the geometry created by sets of objects in an environment may also serve as a strong cue for selecting a reference frame (a similar proposal has been suggested by Waller, Lippa, & Richardson, 2008). ...
Article
Full-text available
A substantial amount of research has demonstrated the importance of reference frames in organizing memory of object locations in both small and large environments. However, to date, little research has examined how the object locations represented in one reference frame are specified relative to object locations represented in another. In a series of 4 experiments, we demonstrate that multiple microreference frames can be established in memory for sets of objects that are spatially and semantically distinct, and that the spatial relations between these microreference frames are specified in memory by means of a more global, macroreference frame. Additionally, these experiments demonstrate that an established macroreference frame can influence which of several microreference frames will be coded in memory, but that a previously established microreference frame had no appreciable influence on the subsequent formation of a macroreference frame. These results are interpreted as indicating that the same cognitive mechanisms underlie interobject coding across multiple environmental scales. The implications for reference frame theories and theories positing hierarchical memory organization are discussed.
... As acknowledged by its authors, the shifted-view condition might have been concurrently solved using allocentric processing or egocentric-updating processing. In fact, spatial memory cannot be reduced to only allocentric and iconic-egocentric representations (Avraamides & Kelly, 2008; van Asselen et al., 2006; Waller, Lippa, & Richardson, 2008). Behavioural, electrophysiological and fMRI data suggest that it could be useful to consider another type of representation involved in navigation (Burgess & O'Keefe, 1996; Farrell & Robertson, 1998; Maguire et al., 2003; Mellet et al., 2000; Nardini, Burgess, Breckenridge, & Atkinson, 2006; Wang & Spelke, 2000; Whishaw, McKenna, & Maaswinkel, 1997). ...
Article
Influential models suggest that spatial processing is essential for episodic memory [O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. London: Oxford University Press]. However, although several types of spatial relations exist, such as allocentric (i.e. object-to-object relations), egocentric (i.e. static object-to-self relations) or egocentric updated on navigation information (i.e. self-to-environment relations in a dynamic way), usually only allocentric representations are described as potentially subserving episodic memory [Nadel, L., & Moscovitch, M. (1998). Hippocampal contributions to cortical plasticity. Neuropharmacology, 37(4-5), 431-439]. This study proposes to confront the allocentric representation hypothesis with an egocentric updated with self-motion representation hypothesis. In the present study, we explored retrieval performance in relation to these two types of spatial processing levels during learning. Episodic remembering has been assessed through Remember responses in a recall and in a recognition task, combined with a "Remember-Know-Guess" paradigm [Gardiner, J. M. (2001). Episodic memory and autonoetic consciousness: A first-person approach. Philosophical Transactions of the Royal Society B: Biological Sciences, 356(1413), 1351-1361] to assess the autonoetic level of responses. Our results show that retrieval performance was significantly higher when encoding was performed in the egocentric-updated condition. Although egocentric updated with self-motion and allocentric representations are not mutually exclusive, these results suggest that egocentric updating processing facilitates remember responses more than allocentric processing. The results are discussed according to Burgess and colleagues' model of episodic memory [Burgess, N., Becker, S., King, J. A., & O'Keefe, J. (2001). Memory for events and their spatial context: models and experiments. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 356(1413), 1493-1503].
... This raises the question of which cues were used in Experiment 1 to select an allocentric reference direction given that all environmental cues were eliminated by presenting objects in a dark room. We speculate that participants used their viewing perspective or the self-to-array axis (e.g., Waller, Lippa, & Richardson, 2008) to establish a spatial reference direction to specify interobject spatial relations. Note that such a spatial reference direction should not be interpreted as egocentric, rather it is still allocentric because the spatial reference direction is fixed in the mental representation and not fixed to the egocentric front of the observer when he or she moves and changes perspective. ...
Article
Full-text available
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary than when non-target objects were moved. This context effect was observed when participants were tested both at the original learning perspective and at a novel perspective. In Experiment 2, the arrays of five objects were presented on a rectangular table and two of the non-target objects were aligned with the longer axis of the table. Change detection was more accurate when the target object was presented with the two objects that were aligned with the longer axis of the table during learning than when the target object was presented with the two objects that were not aligned with the longer axis of the table during learning. These results indicated that the spatial memory of a briefly viewed layout has interobject spatial relations represented and utilizes an allocentric reference direction.
Article
This study employed an information accumulation model of choice reaction times to investigate alignment effects in mental representations of maps. University students studied a map from a single orientation (with North at the top). In a subsequent two-choice reaction time task, the students' spatial knowledge of the map was assessed with spatial left/right judgments, which were made from imagined perspectives that were either North-aligned or South-aligned. The data showed a standard alignment effect, favouring North- over South-aligned trials. To examine the locus of this effect, the data were fit using the Linear Ballistic Accumulator (LBA) model of speeded decisions (Brown & Heathcote, 2008). Of interest were three model parameters: drift rate, the speed at which evidence accumulates toward a response; response threshold, the amount of evidence demanded from the decision maker before selecting a response; and non-decision time, the time consumed by pre- and post-decisional processes. The best-fitting model suggested that non-decision time accounted for the alignment effect. The difference in non-decision time between North- and South-aligned judgments suggests a mental alignment stage on South-aligned trials, accounting for the longer reaction times for judgments misaligned with the presented North orientation of the map.
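To make the modeling logic concrete, here is a minimal two-accumulator LBA simulation in Python (all parameter values are invented for illustration, not fitted values from the study). It shows the signature reported above: adding time to the non-decision component shifts the whole RT distribution without changing choice probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drift_means, b=1.0, A=0.5, s=0.25, t0=0.3):
    """One two-choice trial of the Linear Ballistic Accumulator.
    drift_means : mean accumulation rate of each response accumulator
    b : response threshold; A : maximum start point; s : drift SD;
    t0 : non-decision time (encoding + motor, plus any alignment stage).
    Returns (winning accumulator index, reaction time in seconds)."""
    drifts = rng.normal(drift_means, s)
    while not np.any(drifts > 0):         # require at least one rising accumulator
        drifts = rng.normal(drift_means, s)
    starts = rng.uniform(0.0, A, size=len(drift_means))
    times = np.full(len(drift_means), np.inf)
    pos = drifts > 0
    times[pos] = (b - starts[pos]) / drifts[pos]   # time to reach threshold
    choice = int(np.argmin(times))
    return choice, t0 + times[choice]

# Invented parameters: identical drifts and thresholds, but 150 ms more
# non-decision time on "South-aligned" trials (a mental alignment stage).
for label, t0 in (("North-aligned", 0.30), ("South-aligned", 0.45)):
    trials = [lba_trial([1.2, 0.8], t0=t0) for _ in range(5000)]
    rts = np.array([rt for _, rt in trials])
    acc = np.mean([c == 0 for c, _ in trials])
    print(f"{label}: mean RT = {rts.mean():.3f} s, P(correct) = {acc:.3f}")
```

Because t0 is added after the race is decided, only the RTs shift; accuracy stays constant, which is exactly why a non-decision-time locus is distinguishable from a drift-rate locus.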
Article
Full-text available
The aim of this study is to examine the ability of children and adolescents with visual impairments to code and represent near space. Moreover, it examines the impact of the strategies they use and individual differences in their performance. A total of 30 individuals with visual impairments up to the age of 18 were given eight different object patterns in different arrays and were asked to code and represent each of them. The results revealed better performances by those who use an allocentric approach during spatial coding and those with residual vision. In fact, allocentric strategies were more prevalent in coding near space than egocentric ones. Moreover, the ability of participants to move independently was positively correlated with their ability to use the most effective haptic strategies. These findings suggest that children and adolescents with visual impairments are capable of using allocentric reference and providing a different perspective to the currently dominant one.
Article
Embodied cognitive science holds that cognitive processes are deeply and inescapably rooted in our bodily interactions with the world. Our finite, contingent, and mortal embodiment may be not only supportive, but in some cases even constitutive of emotions, thoughts, and experiences. My discussion here will work outward from the neuroanatomy and neurophysiology of the brain to a nervous system which extends to the boundaries of the body. It will extend to nonneural aspects of embodiment and even beyond the boundaries of the body to prosthetics of various kinds, including symbioses with a broad array of cultural artifacts, our symbolic niche, and our relationships with other embodied human beings. While cognition may not always be situated, its origins are embedded in temporally and spatially limited activities. Cognitive work also can be off-loaded to the body and to the environment in service of action, tool use, group cognition, and social coordination. This can blur the boundaries between brain areas, brain and body, and body and environment, transforming our understanding of mind and personhood to provide a different grounding for faith traditions in general, and of the historically dualist Christian tradition in particular.
Article
A target object's location within a configuration of objects can be described by spatially relating it to a reference object that is selected from among its neighbors, with a preference for reference objects that are spatially close to and aligned with the target. In the spatial memory literature, these properties of alignment and proximity are defined with respect to a set of intrinsic axes that organizes the configuration of objects. The current study assesses whether the intrinsic axes used to encode a display influence reference object selection in a spatial description task. In Experiments 1–4, participants selected reference objects from displays that were perceptually available or retrieved from memory. There was a significant bias to select reference objects consistent with the intrinsic axes used to organize the displays. In Experiment 5, participants learned the display from one viewpoint but described it from another viewpoint. Both viewpoints influenced reference object selection. Across experiments, these results suggest that the spatial features underlying reference object selection are the intrinsic axes used to encode the displays. Highlights: We assess the link between intrinsic axes and the selection of reference objects. Whether reference objects were perceptually available or retrieved from memory, they were more often selected along the intrinsic axes. Both the viewpoint from which participants learned the display and the viewpoint from which they described it influenced reference object selection. The spatial features underlying reference object selection are the intrinsic axes used to encode the displays.
Article
Full-text available
A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as asymmetries in similarity judgments, without positing distorted representations of physical scales.
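The estimation process this model proposes can be written down compactly: a report is a weighted combination of a noisy (but unbiased) fine-grain memory, truncated at the category boundaries, and the central prototype of the category. A short Python sketch under assumed parameter values (the weight, the noise level, and the use of the true quadrant as the category are simplifications for illustration, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

def reported_angle(true_deg, noise_sd=10.0, fine_grain_weight=0.7):
    """Category-adjustment estimate of a dot's angular position in a circle.
    The circle is divided into quadrants by spontaneously imposed
    horizontal/vertical boundaries; each quadrant's prototype is its center."""
    memory = true_deg + rng.normal(0.0, noise_sd)   # unbiased fine-grain memory
    quadrant = int(true_deg // 90)                  # simplification: true quadrant
    lo, hi = 90.0 * quadrant, 90.0 * (quadrant + 1)
    memory = min(max(memory, lo), hi)               # truncate at category boundaries
    prototype = lo + 45.0                           # central (prototypic) value
    return fine_grain_weight * memory + (1.0 - fine_grain_weight) * prototype

# Mean reports are pulled toward the quadrant center (45 deg), most strongly
# for dots near a boundary -- systematic bias despite unbiased memory.
for true in (5.0, 45.0, 85.0):
    reports = [reported_angle(true) for _ in range(10000)]
    print(f"true {true:4.1f} deg -> mean report {np.mean(reports):5.1f} deg")
```

The sketch reproduces the model's core prediction: truncation and prototype weighting introduce bias toward the category center even though they reduce the overall variability of reports.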
Article
Full-text available
A single view of a room-sized path produces an orientation-specific memory representation, yet when memory is tested at a location on the path, orientation-free performance is observed. Either a virtual-views or an updating hypothesis can account for orientation-free performance by attributing it, respectively, to an orientation-free long-term-memory representation or to a working-memory representation of the body's updated location relative to the path. Experiments 1 and 2 test these hypotheses by manipulating the test-site location and the complexity of the trajectory from the study site to the test site. Experiment 3 tests orientation to the test space as a function of trajectory complexity. Results support a virtual-views explanation for the orientation-free performance of males and an updating explanation for females.
Article
Full-text available
This article examines the degree to which knowledge about the body's orientation affects transformations in spatial memory and whether memories are accessed with a preferred orientation. Participants learned large paths from a single viewpoint and were later asked to make judgments of relative directions from imagined positions on the path. Experiments 1 and 2 contribute to the emerging consensus that memories for large layouts are orientation specific, suggesting that prior findings to the contrary may not have fully accounted for latencies. Experiments 2 and 3 show that knowledge of one's orientation can create a preferred direction in spatial memory that is different from the learned orientation. Results further suggest that spatial updating may not be as automatic as previously thought.
Article
Full-text available
Although some studies have shown that a single view produces an orientation-free representation of place (C. C. Presson, N. DeLange, & M. D. Hazelrigg, 1989; C. C. Presson & M. D. Hazelrigg, 1984), others suggest that an orientation-specific representation is formed (J. J. Rieser, 1989). Five experiments are reported that together with existing studies, suggest that orientation-free performance requires a conjunction of study-test conditions, including a "horizontal" viewing angle during encoding, a room-sized test space, and "on-path" testing. If any one of these conditions was not satisfied, orientation-specific performance was observed at test. The findings support a multiple-view model of orientation invariance and suggest that there is something special about on-path testing that permits orientation-free performance under some conditions. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Subjects read narratives describing directions of objects around a standing or reclining observer, who was periodically reoriented. RTs were measured to identify which object was currently located beyond the observer’s head, feet, front, back, right, and left. When the observer was standing, head/feet RTs were fastest, followed by front/back and then right/left. For the reclining observer, front/back RTs were fastest, followed by head/feet and then right/left. The data support the spatial framework model, according to which space is conceptualized in terms of three axes whose accessibility depends on body asymmetries and the relation of the body to the world. The data allow rejection of the equiavailability model, according to which RTs to all directions are equal, and the mental transformation model, according to which RTs increase with angular disparity from front.
Article
Full-text available
Human navigation in well-known environments is guided by stored memory representations of spatial information. In three experiments (N = 43) we investigated the role of different spatial reference systems when accessing information about familiar objects at different locations in the city in which the participants lived. Our results indicate that two independent reference systems underlie the retrieval of spatial knowledge. Environmental characteristics, e.g., the streets at an intersection, determine which headings are easier to imagine at a given location and lead to differences in accessibility of spatial information (orientation-specific behavior). In addition, access to spatial information depends on the relative direction of a location with respect to the imagined heading, such that information about locations imagined in front of oneself is easier to access than information about locations towards the back. This influence of an egocentric reference system was found for environmental knowledge as well as map-based knowledge. In light of these reference system effects, position-dependent models of spatial memory for large-scale environments are discussed. To account for the simultaneous effect of an environmental and an egocentric reference system, we present a two-level model of spatial memory access.
Article
Full-text available
Three studies investigated the factors that lead spatial information to be stored in an orientation-specific versus orientation-free manner. In Experiment 1, we replicated the findings of Presson and Hazelrigg (1984) that learning paths from a small map versus learning the paths directly from viewing a world leads to different functional characteristics of spatial memory. Whether the route display was presented as the path itself or as a large map of the path did not affect how the information was stored. In Experiment 2, we examined the effects of size of stimulus display, size of world, and scale transformations on how spatial information in maps is stored and available for use in later judgments. In Experiment 3, we examined the effect of size on the orientation specificity of the spatial coding of paths that are viewed directly. The major determinant of whether spatial information was stored and used in an orientation-specific or an orientation-free manner was the size of the display. Small displays were coded in an orientation-specific way, whereas very large displays were coded in a more orientation-free manner. These data support the view that there are distinct spatial representations, one more perceptual and episodic and one more integrated and model-like, that have developed to meet different demands faced by mobile organisms.
Article
Full-text available
A "point-to-unseen-targets" task was used to test two theories about the nature of cognitive mapping. The hypothesis that a cognitive map is like a "picture in the head" predicts that (a) the cognitive map should have a preferred orientation and (b) all coded locations should be equally available. These predictions were confirmed in Experiments 1 and 3 when targets were cities in the northeastern United States and learning was from a map. The theory that a cognitive map is an orienting schema predicts that the cognitive map should have no preferred orientation and that targets in front of the body should be localized faster than targets behind the body. These predictions were confirmed in Experiments 1 and 2 when targets were local landmarks that had been learned via direct experience. In Experiment 3, when cities in the Northeast were targets and geographical knowledge had been acquired, in part, by traveling in the Northeast, the observed latency profiles were not as predicted by either theory of cognitive mapping. The results suggest that orienting schemata direct orientation with respect to local environments, but that orientation with respect to large geographical regions is supported by a different type of cognitive structure.
Article
Full-text available
In the current study we tested whether multiple orientations in kinesthetic learning affected how flexibly spatial information is stored and later used in making location judgments. Three groups learned simple routes by walking them while blindfolded, with (1) multiple orientations achieved through normal walking, (2) multiple orientations achieved through backward walking, or (3) a single orientation achieved through walking without turning (which required forward, backward, and sideways walking). When subjects had experienced multiple orientations while learning the routes, later directional judgments were equally accurate (and equally rapid) regardless of whether the judgments were aligned or were contra-aligned with the orientation of the routes as originally learned. In contrast, when routes were learned in a single orientation (without turning), subsequent judgments on contra-aligned trials were both less accurate and slower than judgments on aligned trials. Thus, multiple orientations are important to establish orientation-free, flexible use of spatial information in a kinesthetic learning environment. This contrasts with the pattern of results typically found in visual spatial learning and suggests that the factors that affect orientation specificity of spatial use may differ across spatial modality.
Article
Full-text available
Surrounding space is not inherently organized, but we tend to treat it as though it consisted of regions (e.g., front, back, right, and left). The current studies show that these conceptual regions have characteristics that reflect our typical interactions with space. Three experiments examined the relative sizes and resolutions of front, back, left, and right around oneself. Front, argued to be the most important horizontal region, was found to be (a) largest, (b) recalled with the greatest precision, and (c) described with the greatest degree of detail. Our findings suggest that some of the characteristics of the category model proposed by Huttenlocher, Hedges, and Duncan (1991) regarding memory for pictured circular displays may be generalized to space around oneself. More broadly, our results support and extend the spatial framework analysis of the representation of surrounding space (Franklin & Tversky, 1990).
Article
Full-text available
As people move through an environment, they typically change both their heading and their location relative to the surrounds. During such changes, people update their changing orientations with respect to surrounding objects. People can also update after only imagining such typical movements, but not as quickly or accurately as after actual movement. In the present study, blindfolded subjects pointed to objects after real and imagined walks. The roles of the rotational and translational components of movement were contrasted. The difficulty of imagined updating was found to be due to imagined rotation and not to imagined translation; updating after the latter was just as quick and accurate as updating after actual rotations and translations. Implications for understanding primary spatial orientation, the organization of spatial knowledge, and spatial-imagination processes are discussed.
Article
Full-text available
Previous research on spatial memory indicated that memories of small layouts were orientation dependent (orientation specific) but that memories of large layouts were orientation independent (orientation free). Two experiments investigated the relation between layout size and orientation dependency. Participants learned a small or a large 4-point path (Experiment 1) or a large display of objects (Experiment 2) and then made judgments of relative direction from imagined headings that were either the same as or different from the single studied orientation. Judgments were faster and more accurate when the imagined heading was the same as the studied orientation (i.e., aligned) than when the imagined heading differed from the studied orientation (i.e., misaligned). This alignment effect was present for both small and large layouts. These results indicate that location is encoded in an orientation-dependent manner regardless of layout size.
Article
Full-text available
In this study, the nature of the spatial representations of an environment acquired from maps, navigation, and virtual environments (VEs) was assessed. Participants first learned the layout of a simple desktop VE and then were tested in that environment. Then, participants learned two floors of a complex building in one of three learning conditions: from a map, from direct experience, or by traversing through a virtual rendition of the building. VE learners showed the poorest learning of the complex environment overall, and the results suggest that VE learners are particularly susceptible to disorientation after rotation. However, all the conditions showed similar levels of performance in learning the layout of landmarks on a single floor. Consistent with previous research, an alignment effect was present for map learners, suggesting that they had formed an orientation-specific representation of the environment. VE learners also showed a preferred orientation, as defined by their initial orientation when learning the environment. Learning the initial simple VE was highly predictive of learning a real environment, suggesting that similar cognitive mechanisms are involved in the two learning situations.
Article
Full-text available
An object's location is best retrieved from the orientation in which it was learned; otherwise, retrieval requires a mental effort to restore the original perspective, at a cost to the speed and accuracy of location responses known as the alignment effect. We hypothesised that this alignment effect can be attenuated by systematically referring objects to an exocentric frame of reference during learning. Sixteen male students were asked to learn the locations of five objects arranged in a completely new environment, by locating the objects in either an egocentric or an exocentric spatial frame of reference. After the learning phase, the participants were asked to imagine orienting themselves to an object in the scene and to point to another object. Analysis of pointing accuracy, orientation, and pointing times showed that the performance of participants in the exocentric condition remained insensitive to increases in the angle between their actual position on the path and the imagined orientation. By contrast, participants in the egocentric learning condition were disoriented when the difference between their actual orientation and the imagined orientation was large. We conclude that when an object's location is intentionally referred to an exocentric reference frame, the alignment effect can be significantly reduced.
Article
Full-text available
Three experiments investigated the frames of reference used in memory to represent the spatial structure of the environment. Participants learned the locations of objects in a room according to an intrinsic axis of the configuration; the axis was different from or the same as their viewing perspective. Judgments of relative direction using memory were most accurate for imagined headings parallel to the intrinsic axis, even when it differed from the viewing perspective, and there was no cost to learning the layout according to a nonegocentric axis. When the shape of the layout was bilaterally symmetric relative to the intrinsic axis of learning, novel headings orthogonal to that axis were retrieved more accurately than were other novel headings. These results indicate that spatial memories are defined with respect to intrinsic frames of reference, which are selected on the basis of egocentric experience and environmental cues.
Article
Full-text available
Eight participants were presented with auditory or visual targets and, eight seconds after actively moving their eyes, head, or body to pull apart head, retinal, body, and external-space reference frames, indicated the targets' remembered positions relative to their head. Remembered target position was indicated by repositioning sounds or lights. Localization errors were related to head-on-body position, but not to eye-in-head or body-in-space position, for both auditory targets (0.023 dB/deg in the direction of head displacement) and visual targets (0.068 deg/deg in the direction opposite to head displacement). The results indicate that both auditory and visual localization use head-on-body information, suggesting a common coding into body coordinates, the only conversion that requires this information.
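The common coding into body coordinates described above is, in the simplest one-dimensional case, just a sum of angles: a target direction sensed on the retina is re-expressed relative to the trunk by adding the eye-in-head and head-on-body angles. The following is a minimal sketch of that conversion; the function and variable names are illustrative and not taken from the study.

    # Minimal 1-D (azimuth-only) sketch of converting a target direction
    # between retinal, head, and body reference frames. All angles in
    # degrees; names are illustrative, not taken from the study.

    def retinal_to_head(target_re_retina, eye_in_head):
        # A target 10 deg right of the fovea, with the eye turned 5 deg
        # right in the head, lies 15 deg right of the head's midline.
        return target_re_retina + eye_in_head

    def head_to_body(target_re_head, head_on_body):
        # Re-express the head-centred direction relative to the trunk.
        return target_re_head + head_on_body

    def retinal_to_body(target_re_retina, eye_in_head, head_on_body):
        return head_to_body(retinal_to_head(target_re_retina, eye_in_head),
                            head_on_body)

    # Example: target 10 deg right on the retina, eyes 5 deg right in the
    # head, head 20 deg right on the body -> 35 deg right of the trunk.
    print(retinal_to_body(10.0, 5.0, 20.0))  # 35.0

Note that only the head-to-body step uses head-on-body information, which is why a dependence of localization errors on head-on-body position points to a conversion into body coordinates.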
Article
Full-text available
Navigation in humans and many other animals relies on spatial representations of their environments. Three experiments examined how humans maintain a sense of orientation across nested environments. Subjects can acquire new spatial representations easily without integrating them into their existing spatial knowledge system. While navigating between nested environments, subjects seemed to switch which environment they currently processed, reorienting to the approaching environment and losing track of the old one at particular spatial regions. These results suggest that spatial updating in naturalistic, nested environments does not occur for all environments at the same time. Implications for the hierarchical theory of spatial representations and the path integration theory of navigation are discussed.
Article
Full-text available
This experiment investigated the frames of reference used in memory to represent the spatial structure of a large-scale outdoor environment. Participants learned the locations of eight objects in an unfamiliar city park by walking through the park on one of two prescribed paths that encircled a large rectangular building. The aligned path was oriented with the building; the misaligned path was rotated by 45 degrees. Later, participants pointed to target objects from imagined vantage points using their memories. Pointing accuracy was higher in the aligned than in the misaligned path group, and the patterns of results differed: In the aligned condition, accuracy was higher for imagined headings parallel to legs of the path and for an imagined heading oriented toward a nearby lake, a salient landmark. In the misaligned condition, pointing accuracy was highest for the imagined heading oriented toward the lake, and decreased monotonically with angular distance. These results indicated that locations of objects were mentally represented in terms of frames of reference defined by the environment but selected on the basis of egocentric experience.
Article
Full-text available
In 4 experiments, the authors investigated spatial updating in a familiar environment. Participants learned locations of objects in a room, walked to the center, and turned to appropriate facing directions before making judgments of relative direction (e.g., "Imagine you are standing at X and facing Y. Point to Z.") or egocentric pointing judgments (e.g., "You are facing Y. Point to Z."). Experiments manipulated the angular difference between the learning heading and the imagined heading and the angular difference between the actual heading and the imagined heading. Pointing performance was best when the imagined heading was parallel to the learning view, even when participants were facing in other directions, and when actual and imagined headings were the same. Room geometry did not affect these results. These findings indicated that spatial reference directions in memory were not updated during locomotion.
Article
1.0 What this is all about
2.0 Cross-modal transfer of frame of reference: evidence from Tenejapan Tzeltal
2.1 Tzeltal absolute linguistic frame of reference
2.2 Use of absolute frame of reference in non-verbal tasks
Article
Two experiments investigated the viewpoint dependence of spatial memories. In Experiment 1, participants learned the locations of objects on a desktop from a single perspective and then took part in a recognition test; test scenes included familiar and novel views of the layout. Recognition latency was a linear function of the angular distance between a test view and the study view. In Experiment 2, participants studied a layout from a single view and then learned to recognize the layout from three additional training views. A final recognition test showed that the study view and the training views were represented in memory, and that latency was a linear function of the angular distance to the nearest study or training view. These results indicate that interobject spatial relations are encoded in a viewpoint-dependent manner, and that recognition of novel views requires normalization to the most similar representation in memory. These findings parallel recent results in visual object recognition.
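The latency pattern reported here can be summarized by a simple nearest-view model, in which recognition time grows linearly with the angular distance between the test view and the closest stored view. The following sketch is a minimal illustration of that model only; the intercept and slope are placeholder values, not estimates from these experiments.

    # Illustrative nearest-view model of recognition latency:
    # RT = intercept + slope * angular distance to the closest stored view.
    # Parameter values are placeholders, not estimates from the experiments.

    def angular_distance(a, b):
        # Smallest unsigned difference between two headings, in degrees.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def predicted_rt(test_view, stored_views, intercept=1.0, slope=0.005):
        # intercept in seconds; slope in seconds per degree of normalization.
        nearest = min(angular_distance(test_view, v) for v in stored_views)
        return intercept + slope * nearest

    # Study view at 0 deg plus training views at 90, 180, and 270 deg:
    views = [0.0, 90.0, 180.0, 270.0]
    print(predicted_rt(45.0, views))  # 45 deg from nearest view -> 1.225 s
    print(predicted_rt(90.0, views))  # a stored view -> baseline 1.0 s

Under this reading, adding training views flattens the latency function by reducing the distance from any test view to its nearest stored representation.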
Chapter
View-dependent representations have been shown in studies of navigation, object and scene recognition, and spatial reasoning. In this chapter, we discuss the relationship between different types of view-dependent representations. Based on previous studies and new data presented in this chapter, we propose a model that contains an egocentric spatial working memory and an LTM representation of a similar nature, and we discuss theoretical issues that can potentially distinguish between different models of spatial representations.
Article
To represent a stable environment despite the experience of changes during self-movement, one can either develop an invariant allocentric representation or update egocentric representations as one moves. Using a disorientation paradigm, three sets of studies investigated these mechanisms in human navigation and scene recognition. Accuracy in localizing the configuration of multiple objects is impaired by disorientation, an effect not due to artifacts such as memory deterioration over time, intervening physical activities, or uncertainty about self-position and orientation, suggesting that one locates objects primarily by updating their egocentric positions as one moves. Disorientation also impaired the judgment of changes to a scene after viewer movements, suggesting a similar egocentric updating process. In contrast, the representation of the shape of the surroundings is invariant and persists through disorientation. The coexistence of multiple mechanisms may increase the flexibility and robustness of the system.
Article
Two experiments are reported that use a "point-to-unseen-targets" task to study the role of egocentric reference frames in the retrieval of survey knowledge learned from either studying a map or navigating an environment. In Experiment 1, performance was generally consistent with the hypothesis that map knowledge is retrieved using a frame of reference centered on the eye, characterized by (a) a fixed orientation in a "frontal representational plane" and (b) equal access to spatial relations both in front of, or above, and behind, or below, a right-left retrieval axis. The results of Experiment 2 were consistent with the hypothesis that environment knowledge is retrieved within a frame of reference centered on the body, characterized by (a) flexible orientation within a "transverse representational plane" and (b) privileged access to spatial relations located in front of the right-left retrieval axis in representational space. Both types of knowledge function as if they preserve information about the Euclidean angles connecting elements in physical space.
Article
Recent evidence indicates that mental representations of large (i.e., navigable) spaces are viewpoint dependent when observers are restricted to a single view. The purpose of the present study was to determine whether two views of a space would produce a single viewpoint-independent representation or two viewpoint-dependent representations. Participants learned the locations of objects in a room from two viewpoints and then made judgments of relative direction from imagined headings either aligned or misaligned with the studied views. The results indicated that mental representations of large spaces were viewpoint dependent, and that two views of a spatial layout appeared to produce two viewpoint-dependent representations in memory. Imagined headings aligned with the study views were more accessible than were novel headings in terms of both speed and accuracy of pointing judgments.
Conference Paper
This chapter summarizes a new theory of spatial memory. According to the theory, when people learn the locations of objects in a new environment, they interpret the spatial structure of that environment in terms of a spatial reference system. Our current conjecture is that a reference system intrinsic to the collection of objects is used. Intrinsic axes or directions are selected using egocentric (e.g., viewing perspective) and environmental (e.g., walls of the surrounding room) cues. The dominant cue is egocentric experience. The reference system selected at the first view is typically not updated with additional views or observer movement. However, if the first view is misaligned but a subsequent view is aligned with natural and salient axes in the environment, a new reference system is selected and the layout is reinterpreted in terms of this new reference system. The chapter also reviews evidence on the orientation dependence of spatial memories and recent results indicating that two representations may be formed when people learn a new environment; one preserves interobject spatial relations and the other comprises visual memories of experienced views.
Article
A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.
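The estimation process described above can be rendered as a weighted combination of the remembered fine-grain value and the category prototype, truncated at the category boundaries. The sketch below is a minimal one-dimensional illustration of that idea for the dot-in-circle task; the weight and the quadrant prototypes are illustrative choices, not fitted values from the experiments.

    # Minimal 1-D sketch of a category-adjustment estimate: the report is a
    # weighted average of the remembered fine-grain value and the category
    # prototype, truncated at the category boundaries. The weight lam and
    # the quadrant prototypes are illustrative, not fitted values.

    CATEGORIES = [(0.0, 90.0), (90.0, 180.0), (180.0, 270.0), (270.0, 360.0)]

    def category_of(angle):
        for lo, hi in CATEGORIES:
            if lo <= angle < hi:
                return lo, hi
        return CATEGORIES[-1]

    def reported_angle(memory_value, lam=0.7):
        lo, hi = category_of(memory_value)
        prototype = (lo + hi) / 2.0          # central (prototypic) value
        estimate = lam * memory_value + (1.0 - lam) * prototype
        return max(lo, min(hi, estimate))    # truncate at the boundaries

    # A dot remembered at 10 deg (near a quadrant boundary) is reported
    # displaced toward the 45-deg prototype:
    print(reported_angle(10.0))  # 20.5

This makes the model's key property concrete: even with unbiased memory, reports are biased toward the prototype, yet pooling toward the prototype can reduce the variability of reports and so improve overall accuracy.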
Article
Adults were asked to judge the self-to-object directions in a room from novel points of observation that differed from their actual point at times only by a rotation and at other times only by a translation. The results show for the rotation trials that the errors and latencies when a novel point was imagined were worse than the baseline responses from their actual points of observation, and the latencies varied as a function of the magnitude of the to-be-imagined rotation. For the translation trials, on the other hand, the errors and latencies when a novel point was imagined were comparable to the baseline responses from their actual point and did not vary significantly across the different imagined station points. The evidence indicates that subjects know the object-to-object relations directly, without going through the origin of a coordinate system. In addition, similarities in processing during imagination on the one hand, and perception and action on the other are discussed.
Article
Transfer studies show that people normally associate responses with physical rather than retinal stimulus orientations. Four groups of 16 subjects were instructed to adopt a head-anchored reference system ("think of the top of your head as 'up'") with heads tilted, during either initial learning or transfer. These instructions strongly facilitated transfer based on retinal invariance with head position changed. Faster responses to retinal verticals and horizontals than to retinal diagonals with the head tilted, prior to transfer, predicted superior performance on the transfer task, which required the same response to the same retinal stimulus with the head upright. Conclusions: (1) invariance of perceived or phenomenal slant (rather than either physical or retinal slant) is the critical determinant of transfer; (2) lines perceived as vertical and horizontal tend to evoke faster responses than those perceived as obliques; (3) phenomenal slant depends on the orientation of a frame of reference, which is subject to voluntary as well as proprioceptive control.
Article
In 14 experiments, subjects had to “point to” surrounding environmental locations (targets) while imagining themselves in a particular spot facing in various directions (orientations). The spatial information was either committed to memory (cognitive maps) or directly presented on each trial in the visual or tactile modality. Reaction times (RT) indicated that orientation shifts were achieved through mental rotation in the visual task, but not in the cognitive map or tactile tasks. Further, in the latter two tasks targets were located most quickly when they were adjacent to or opposite the imagined orientation. Several explanations of this finding were tested. Various aspects of the data suggest that cognitive maps are not strictly holistic, but consist of orientation-specific representations, and—at least in part—of relational propositions specific to object pairs.
Article
Experiments are reported that assessed the ability of people, without vision, to locate the positions of objects from imagined points of observation that are related to their actual position by rotational or translational components. The theoretical issues addressed were whether spatial relations stored in an object-to-object system are directly retrieved or whether retrieval is mediated by a body-centered coordinate system, and whether body-centered access involves a process of imagined updating of self-position. The results, with those of Rieser (1989), indicate that in the case of regularly structured object arrays, interobject relations are directly retrieved for the translation task, but for the rotation task, retrieval occurs by means of a body-centered coordinate system, requiring imagined body rotation. For irregularly structured arrays, access to interobject spatial structure occurs by means of a body-centered coordinate system for both translation and rotation tasks, requiring imagined body translation or rotation. Array regularity affected retrieval of spatial structure in terms of the global shape of interobject relations and local object position within the global shape.
Article
In certain simple rotations of objects, the orientation of the axis and planes of rotation can determine whether people are able to visualize the motion or perceive it as simple and coherent. This finding affords the opportunity to investigate the spatial reference systems used to define the orientation of the axis and planes of rotation. The results of two experiments suggest that the permanent environment is the primary reference system, apart from the rotating object, used for this purpose. Subjects also were able to use a local spatial environment to determine the orientation of the motion; some subjects were particularly adept at this. The viewer perspective, in contrast, was irrelevant as a reference system in these experiments. These results argue strongly for the primacy of environmental reference systems in the perception and imagination of orientation and extend the set of findings common between the comprehension of rotational motion and orientation-sensitive form perception.
Article
Spatial terms such as "above" must be used and interpreted with respect to some frame of reference. Perceptual cues for verticality were varied in four experiments to investigate whether the comprehension and production of "above" is based on a viewer-centered (deictic) frame, an environment-centered (extrinsic) frame, or an object-centered (intrinsic) frame of reference. "Above" was usually interpreted with respect to an environment-centered reference frame, but there was a significant contribution from object-centered reference frames as well; the viewer-centered reference frame made no independent contribution to "above". The meaning of "above" appears not to specify a particular reference frame; rather, selection of a reference frame during spatial assignment determines how spatial terms such as "above" and "below" will be used and interpreted.
Article
Seven experiments tested whether human navigation depends on enduring representations, or on momentary egocentric representations that are updated as one moves. Human subjects pointed to unseen targets, either while remaining oriented or after they had been disoriented by self-rotation. Disorientation reduced not only the absolute accuracy of pointing to all objects ('heading error') but also the relative accuracy of pointing to different objects ('configuration error'). A single light providing a directional cue reduced both heading and configuration errors if it was present throughout the experiment. If the light was present during learning and test but absent during the disorientation procedure, however, subjects showed low heading errors (indicating that they reoriented by the light) but high configuration errors (indicating that they failed to retrieve an accurate cognitive map of their surroundings). These findings provide evidence that object locations are represented egocentrically. Nevertheless, disorientation had little effect on the coherence of pointing to different room corners, suggesting both (a) that the disorientation effect on representations of object locations is not due to the experimental paradigm and (b) that room geometry is captured by an enduring representation. These findings cast doubt on the view that accurate navigation depends primarily on an enduring, observer-free cognitive map, for humans construct such a representation of extended surfaces but not of objects. Like insects, humans represent the egocentric distances and directions of objects and continuously update these representations as they move. The principal evolutionary advance in animal navigation may concern the number of unseen targets whose egocentric directions and distances can be represented and updated simultaneously, rather than a qualitative shift in navigation toward reliance on an allocentric map.
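One plausible way to formalize the two error measures is to treat heading error as the mean signed pointing error shared across all targets and configuration error as the spread of the residuals once that shared rotation is removed. The sketch below implements that reading; it is an assumption about the measures, not the authors' scoring procedure, and the plain arithmetic mean it uses is adequate only when errors are small.

    # One plausible formalization of the two pointing-error measures:
    # heading error = the mean signed error common to all targets (a rigid
    # rotation of the whole response set); configuration error = the spread
    # of the residuals once that rotation is removed. A plain arithmetic
    # mean is used for simplicity; it is adequate only for small errors.

    import statistics

    def signed_error(pointed, true):
        # Signed angular difference in degrees, wrapped to (-180, 180].
        d = (pointed - true) % 360.0
        return d - 360.0 if d > 180.0 else d

    def heading_and_configuration_error(pointed, true):
        errors = [signed_error(p, t) for p, t in zip(pointed, true)]
        heading = statistics.mean(errors)
        residuals = [e - heading for e in errors]
        configuration = statistics.stdev(residuals)
        return heading, configuration

    # A disoriented subject mispoints by roughly +30 deg overall, with
    # additional per-target scatter:
    true_dirs = [0.0, 90.0, 180.0, 270.0]
    pointed   = [35.0, 115.0, 205.0, 305.0]
    print(heading_and_configuration_error(pointed, true_dirs))  # (30.0, ~5.77)

On this formalization, reorienting by a single light can eliminate the shared rotation (low heading error) while leaving the residual scatter (configuration error) high, which is the dissociation the experiments report.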
Article
Many common activities rely on spatial knowledge acquired from nonvisual modalities. We investigated the nature of this knowledge by having people look at a collection of objects on a desktop and manually reconstruct their arrangement, without vision, as though the display had been rotated by 0, 45, 90, 135, or 180 degrees relative to the view they could see. Performance on several measures of visual-spatial memory showed that participants had better visual memory for the view they had manually reconstructed than for the view they had studied visually for several minutes. These findings provide compelling new evidence that visual-spatial knowledge of very high fidelity can be acquired from nonvisual modalities, and reveal how visual and nonvisual spatial information may even be confused in the brain.
Article
Seven experiments examined the spatial reference systems used in memory to represent the locations of objects in the environment. Participants learned the locations of common objects in a room and then made judgments of relative direction using their memories of the layout (e.g., "Imagine you are standing at the shoe, facing the lamp; point to the clock"). The experiments manipulated the number of views that observers were allowed to experience, the presence or absence of local and global reference systems (e.g., a rectangular mat on which objects were placed and the walls of the room, respectively), and the congruence of local and global reference systems. Judgments of relative direction were more accurate for imagined headings parallel to study views than for imagined headings parallel to novel views, even with up to three study views. However, study views misaligned with salient reference systems in the environment were not strongly represented if they were experienced in the context of aligned views. Novel views aligned with a local reference system were, under certain conditions, easier to imagine than were novel views misaligned with the local reference system. We propose that learning and remembering the spatial structure of the surrounding environment involves interpreting the layout in terms of a spatial reference system. This reference system is imposed on the environment but defined by egocentric experience.
Article
A single view of a room-sized path produces an orientation-specific memory representation, yet when memory is tested at a location on the path, orientation-free performance is observed. Either a virtual-views or an updating hypothesis can account for orientation-free performance by attributing it, respectively, to an orientation-free long-term-memory representation or to a working-memory representation of the body's updated location relative to the path. Experiments 1 and 2 test these hypotheses by manipulating the test-site location and the complexity of the trajectory from the study site to the test site. Experiment 3 tests orientation to the test space as a function of trajectory complexity. Results support a virtual-views explanation for the orientation-free performance of males and an updating explanation for females.
Article
Human navigation is special: we use geographic maps to capture a world far beyond our unaided locomotion. In consequence, human navigation is widely thought to depend on internalized versions of these maps - enduring, geocentric 'cognitive maps' capturing diverse information about the environment. Contrary to this view, we argue that human navigation is best studied in relation to research on navigating animals as humble as ants. This research provides evidence that animals, including humans, navigate primarily by representations that are momentary rather than enduring, egocentric rather than geocentric, and limited in the environmental information that they capture. Uniquely human forms of navigation build on these representations.
Article
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.
Article
In three experiments, we examined the effects of locomotion and incidental learning on the formation of spatial memories. Participants learned the locations of objects in a room and then made judgments of relative direction, using their memories (e.g., "Imagine you are standing at the clock, facing the jar. Point to the book"). The experiments manipulated the number of headings experienced, the amount of interaction with the objects, and whether the participants were informed that their memories of the layout would be tested. When participants were required to maintain a constant body orientation during learning (Experiment 1), they represented the layout in terms of a single reference direction parallel to that orientation. When they were allowed to move freely in the room (Experiment 2), they seemed to use two orthogonal reference axes aligned with the walls of the enclosing room. Extensive movement under incidental learning conditions (Experiment 3) yielded a mixture of these two encoding strategies across participants. There was no evidence that locomotion, interaction with objects, or incidental learning led to the formation of spatial memories that differed from those formed from static viewing.