Article
PDF available

Mental rotation and automatic updating of body-centered spatial relationships

Authors: M. J. Farrell and I. H. Robertson

Abstract

Blindfolded adult participants (7 male and 9 female) were asked to point to previously seen targets after a body rotation. In 1 condition, participants had to update their positions relative to the targets during rotation; in another condition, they had to ignore the rotation and to imagine that they were still in their initial orientation. In the updating condition, replicating research of J. J. Rieser (1989), response latencies were only slightly affected by the magnitude of the body rotation. In the ignoring condition, however, response latencies increased with the angular difference between the participants' new position and their original orientation, suggesting that the participants updated their positions and then retrospectively "undid" this updating to mentally reestablish their original orientation. The results are supportive of the idea that heading is updated automatically as a person moves so that she or he is always primarily oriented with respect to her or his actual position. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
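The contrast between the two conditions can be summarized with a simple latency sketch in the style of classic mental-rotation analyses. This is an illustrative formalization introduced here, not a model fitted in the article; the symbols RT_0 (baseline pointing latency), k (time cost per degree), and θ (angular difference between the actual and the to-be-imagined orientation) are assumptions.

```latex
% Illustrative latency sketch (assumed symbols, not equations from the article)
\begin{align*}
  RT_{\text{update}}(\theta) &\approx RT_0            && \text{updating: pointing from the actual, updated position} \\
  RT_{\text{ignore}}(\theta) &\approx RT_0 + k\,\theta && \text{ignoring: the updated heading must be mentally ``undone''}
\end{align*}
```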
... This concept of updating is of interest for episodic memory because, within an egocentric processing framework, the multitude of viewpoints experienced during learning produces a variability in encodings that is compatible neither with the notion of storage nor with the uniqueness of the first-person perspective during recollection. Farrell and Robertson (1998) demonstrated that updating one's own orientation in space (i.e., one's viewpoint) following a movement is highly efficient and automatized. Their study is central to the reasoning that leads to conceptualizing a spatial process in episodic memory, and we will therefore refer to this experiment several times in the remainder of this thesis. ...
... Indeed, if updating is required for episodic recollection, it should not be available during movement to update the individual's orientation in space. Thus, pointing responses that do not require updating the viewpoint (i.e., pointing as if no displacement had occurred, the ignore condition of Farrell and Robertson, 1998) should be less affected by the presence of the episodic task than pointing responses that do require this egocentric updating (i.e., pointing to the objects while taking the displacement into account, the updating condition of Farrell and Robertson, 1998). It is worth emphasizing that the interference (i.e., a drop in performance) is expected on the easier of the two pointing tasks. ...
... A replication of the "ignore" condition (i.e., the participant moves and their viewpoint changes, but they must point to the objects as if they had not moved; Farrell and Robertson, 1998) during the presentation of an auditory stimulus inducing an illusion of self-motion could shed light on the automaticity of viewpoint updating under vection. This question of automaticity would also help to better delineate the commonalities and differences between vection and real displacement in this spatial orientation task. ...
Thesis
The processes that integrate visual and proprioceptive information play a fundamental role in our ability to coordinate our actions in space and to maintain postural stability. Recent studies suggest that these same processes may underlie an apparently radically different capacity: the evocation of memories, that is, episodic memory. The involvement of the same brain structure, the hippocampus, in both spatial processing and the ability to consciously retrieve a lived moment has led to the hypothesis of a neuro-functional link between these two capacities. This relationship is generally attributed to the existence of spatialized memory representations that encode the spatial relations between objects (allocentric coding) needed to later reconstruct a previously experienced episode (Squire & Alvarez, 1995; Nadel & Moscovitch, 1998). However, the relationship between these two capacities may depend not on the existence of a spatialized memory trace but on shared processes (Maguire & Mullally, 2013). This hypothesis would in particular account for the joint deficits observed in patients with anterograde amnesia: impaired episodic projection into the future together with an inability to evoke memories. Research conducted at the LPNC has indicated that egocentric updating, a process that relies on the dynamic integration of proprioceptive and environmental information, is a good candidate for this shared process. Indeed, maximizing egocentric updating at encoding increases memory retrieval compared with maximizing allocentric processing (Gomez, Rousset & Baciu, 2009). Conversely, egocentric updating performed as a concurrent task produces more interference with memory retrieval than allocentric processing does (Cerles, Guinet & Rousset, 2015), an effect that is absent for semantic memory. Finally, amnesic patients show a specific deficit in egocentric updating (Gomez, Rousset, & Charnallet, 2012; Gomez, Rousset, Bonniot, Charnallet & Moreaud, 2014). The aim of this project is to examine in greater depth the involvement of egocentric updating in episodic memory. First, even if patients with amnesia do show a deficit in episodic projection into the future associated with specific spatial impairments, one could still argue that this link reflects a fine-grained segregation of distinct functions within the same structure, functions that are jointly affected by the lesion. We will therefore seek to show that, in non-lesioned participants, episodic projection can be disrupted by interfering online with spatial integration processes. Finally, this thesis project is also part of ongoing studies designed to assess how examining egocentric updating can provide relevant behavioral markers for the evaluation of degenerative pathologies progressing toward Alzheimer's dementia.
... Yet, when their model fit the direction of the imagined rotation, they did not have trouble, although the action was identical. The second piece of information is that participants can have trouble disregarding their actions (see Farrell & Robertson, 1998). In the incongruent condition, participants stated that they could not ignore their motor activity. ...
Article
Full-text available
Three studies examined the claim that hand movements can facilitate imagery for object rotations but that this facilitation depends on people’s model of the situation. In Experiment 1, physically turning a block without vision reduced mental rotation times compared with imagining the same rotation without bodily movement. In Experiment 2, pulling a string from a spool facilitated participants’ mental rotation of an object sitting on the spool. In Experiment 3, depending on participants’ model of the spool, the exact same pulling movement facilitated or interfered with the exact same imagery transformation. Results of Experiments 2 and 3 indicate that the geometric characteristics of an action do not specify the trajectory of an imagery transformation. Instead, they point to people’s ability to model the tools that mediate between motor activity and its environmental consequences and to transfer tool knowledge to a new situation.
... Experiment 6 explored this hypothesis further by incorporating a physical rotation into the design. Several studies have demonstrated that spatial updating during imagined self-rotation is improved when accompanied by a corresponding physical rotation (Farrell & Robertson, 1998; Hardwick, McIntyre, & Pick, 1976; Presson & Montello, 1994; Rieser, 1989; Rieser, Guth, & Hill, 1986). For example, Rieser (1989; Rieser et al., 1986) found that participants took less time to point to objects after an imagined transformation when they were actually guided to the new viewpoint compared with when they simply imagined moving to it. ...
Article
Full-text available
Six experiments compared spatial updating of an array after imagined rotations of the array versus viewer. Participants responded faster and made fewer errors in viewer tasks than in array tasks while positioned outside (Experiment 1) or inside (Experiment 2) the array. An apparent array advantage for updating objects rather than locations was attributable to participants imagining translations of single objects rather than rotations of the array (Experiment 3). Superior viewer performance persisted when the array was reduced to 1 object (Experiment 4); however, an object with a familiar configuration improved object performance somewhat (Experiment 5). Object performance reached near-viewer levels when rotations included haptic information for the turning object. The researchers discuss these findings in terms of the relative differences in which the human cognitive system transforms the spatial reference frames corresponding to each imagined rotation.
... As for determining the minimum and maximum translational and rotational speed of the beam, prior user studies reported mixed results regarding whether providing limited self-motion cues can improve locomotion performance in spatial orientation tasks [49]. For example, while some studies did not show improved spatial orientation when physical rotation was provided without translational motion cues [50], [51], other studies showed that providing physical rotation could help the user stay spatially oriented [52], [53], [54], [55], [56]. Some previous studies even reported that providing physical rotation resulted in performance comparable to actual walking in a navigational search task when used with leaning-based interfaces [4] and handheld interfaces [57]. ...
Article
Physical walking is often considered the gold standard for VR travel whenever feasible. However, limited free-space walking areas in the real world do not allow exploring larger-scale virtual environments by actual walking. Therefore, users often require handheld controllers for navigation, which can reduce believability, interfere with simultaneous interaction tasks, and exacerbate adverse effects such as motion sickness and disorientation. To investigate alternative locomotion options, we compared handheld Controller (thumbstick-based) and physical walking versus a seated (HeadJoystick) and standing/stepping (NaviBoard) leaning-based locomotion interface, where seated/standing users travel by moving their head toward the target direction. Rotations were always physically performed. To compare these interfaces, we designed a novel simultaneous locomotion and object interaction task, where users needed to keep touching the center of upward-moving target balloons with their virtual lightsaber while simultaneously staying inside a horizontally moving enclosure. Walking resulted in the best locomotion, interaction, and combined performance, while the controller performed worst. Leaning-based interfaces improved user experience and performance compared to the Controller, especially when standing/stepping using NaviBoard, but did not reach walking performance. That is, the leaning-based interfaces HeadJoystick (sitting) and NaviBoard (standing), which provided additional physical self-motion cues compared to the controller, improved enjoyment, preference, spatial presence, vection intensity, and motion sickness, as well as performance for locomotion, object interaction, and combined locomotion and object interaction. Our results also showed that less embodied interfaces (and in particular the controller) caused a more pronounced performance deterioration when increasing locomotion speed. Moreover, the observed differences between our interfaces were not affected by repeated interface usage.
... Physical navigation is an interplay between the navigator's action and perception of the resultant information, received from several sensory channels and converging as the basis for a robust spatial representation (Gramann, 2013). For example, each rotation is associated with proprioceptive and vestibular feedback that is automatically used to update the navigator's orientation in space (Farrell & Robertson, 1998). ...
Chapter
Immersive virtual reality (VR) allows its users to experience physical space in a non-physical world. It has developed into a powerful research tool to investigate the neural basis of human spatial navigation as an embodied experience. The task of wayfinding can be carried out by using a wide range of strategies, leading to the recruitment of various sensory modalities and brain areas in real-life scenarios. While traditional desktop-based VR setups primarily focus on vision-based navigation, immersive VR setups, especially mobile variants, can efficiently account for motor processes that constitute locomotion in the physical world, such as head-turning and walking. When used in combination with mobile neuroimaging methods, immersive VR affords a natural mode of locomotion and high immersion in experimental settings, creating an embodied spatial experience. This in turn facilitates ecologically valid investigation of the neural underpinnings of spatial navigation.
Keywords: Embodiment, Mobile brain–body imaging, Multisensory integration, Reference frame, Spatial navigation, VR
Article
Full-text available
Participants attempted to return to the origin of travel after following an outbound path by locomotion on foot (Experiments 1–3) or in a virtual visual environment (Experiment 4). Critical conditions interrupted the outbound path with verbal distraction or irrelevant, to-be-ignored movements. Irrelevant movement, real or virtual, had greater effects than verbal or cognitive distraction, indicating inability to ignore displacement during path integration. Effects of the irrelevant movement's direction (backward vs. rightward) and location (1st vs. 2nd leg of path) indicated that participants encoded a configural representation of the pathway and then cognitively compensated for the movement, producing errors directly related to the demands of compensation. An encoding-error model fit to the data indicated that backward movement produced downward rescaling, whereas movement that led to implied rotation (rightward on 2nd leg) produced distortions of shape and scale.
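As background for the homing task described above, the sketch below shows the basic geometry of path integration: encoded outbound legs are summed and inverted to obtain a homing response. It is a generic illustration with assumed function and variable names (homing_response, legs_deg_m), not the encoding-error model fitted in the article.

```python
import math

def homing_response(legs_deg_m):
    """Generic path-integration sketch (not the article's encoding-error model).

    legs_deg_m: list of (turn_deg, length_m) pairs describing the outbound path;
    turn_deg is the turn made before walking that leg (0 for the first leg),
    with counter-clockwise turns counted as positive.
    Returns (turn_to_face_home_deg, distance_home_m) from the final pose.
    """
    heading = 0.0            # degrees, arbitrary initial heading
    x, y = 0.0, 0.0          # position in metres
    for turn_deg, length_m in legs_deg_m:
        heading += turn_deg
        x += length_m * math.cos(math.radians(heading))
        y += length_m * math.sin(math.radians(heading))
    # Vector from the final position back to the origin.
    bearing_home = math.degrees(math.atan2(-y, -x))
    turn_to_home = (bearing_home - heading + 180.0) % 360.0 - 180.0
    return turn_to_home, math.hypot(x, y)

# Example: walk 3 m, turn 90 degrees counter-clockwise, walk 2 m, then respond.
print(homing_response([(0.0, 3.0), (90.0, 2.0)]))  # ~ (123.7 deg, 3.61 m)
```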
Article
Full-text available
It is a prevailing theoretical claim that path integration is the primary means of developing global spatial representations. However, this claim is at odds with reported difficulty to develop global spatial representations of a multiscale environment using path integration. The current study tested a new hypothesis that locally similar but globally misaligned rooms interfere with path integration. In an immersive virtual environment, participants learned objects’ locations in one room and then physically walked, while being blindfolded, to a neighbouring room for testing. These rooms were rectangular but globally misaligned. Adopting different actual perspectives in the testing room, the participants judged relative directions (JRDs) from the imagined perspectives in the learning room. The imagined and actual perspectives were aligned or misaligned according to either local room structures or global cardinal directions. Prior to JRDs, participants did not conduct other tasks (Experiment 1) or judged relative global headings of the two rooms to activate global representations while seeing the testing room (Experiment 2) or in darkness (Experiment 3). Participants performed better at locally aligned than misaligned imagined perspectives in all experiments. Better performances for globally aligned imagined perspectives appeared only in Experiment 3. These results suggest that structurally similar but misaligned rooms interfered with updating global heading by path integration, and this interference occurred during but not after the activation of global representations. These findings help to settle the inconsistency between the theoretical claims and empirical evidence of the importance of path integration in developing global spatial memories.
Article
The ability to judge spatial relations from perspectives that differ from one's current body orientation and location is important for many everyday activities. Despite considerable research on imaginal perspective taking, however, detailed computational accounts of the processes involved in this ability are missing. In this contribution, I introduce Smart (Spatial Memory Access by Reference Frame SelecTion) as a computational cognitive model of imaginal perspective taking processes. In assuming that imaginal perspective taking is governed by reference frame selection for memory access and subsequent motor activation, Smart is able to explain and simulate key findings on human imaginal perspective taking. In addition to providing novel insight into the mechanisms underlying imaginal perspective taking, Smart also has several implications for our view on spatial memory, more generally. In particular, Smart supports the idea that enduring spatial representations are essentially orientation‐free and that spatial representations are best viewed as flexible combinations of representation structures and reference frames.
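As a reading aid for the selection-then-transformation idea summarized above, the sketch below picks the stored reference frame closest to an imagined perspective and maps the residual disparity onto a toy latency cost. The function names, the cost function, and its parameters are assumptions for illustration and do not reproduce Smart's actual equations.

```python
def select_reference_frame(stored_frames_deg, imagined_heading_deg):
    """Schematic reference-frame selection (illustrative, not Smart's implementation).

    stored_frames_deg: orientations (degrees) of reference frames available in memory,
    e.g. a learning view or a salient environmental axis.
    imagined_heading_deg: the to-be-imagined perspective.
    Returns the selected frame and the angular disparity that remains to be bridged.
    """
    def disparity(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    best = min(stored_frames_deg, key=lambda f: disparity(f, imagined_heading_deg))
    return best, disparity(best, imagined_heading_deg)

def predicted_latency_ms(disparity_deg, base_ms=800.0, ms_per_deg=4.0):
    """Toy linear cost: larger frame-to-perspective disparity -> longer latency.
    Intercept and slope are arbitrary illustrative values."""
    return base_ms + ms_per_deg * disparity_deg

frame, gap = select_reference_frame([0.0, 90.0, 180.0, 270.0], imagined_heading_deg=135.0)
print(frame, gap, predicted_latency_ms(gap))  # 90.0 45.0 980.0
```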
Article
Full-text available
Evaluated the hypotheses that (a) spatial learning produces a cognitive map and (b) this map is picturelike. It was also hypothesized that special properties of pictures would be demonstrated by the behavior of the Ss. The special properties were (a) simultaneous representation of sequentially placed points and (b) orientation. 106 blindfolded college students learned simple paths either by moving their fingers over the successive points of a map of the path, walking through the path laid out on the floor, or (with blindfold temporarily removed) viewing a map of the path. They were tested for knowledge of the path by having to locate a target; still blindfolded, they were placed at a point on that path and required to move to another point on the path. This required either moving toward the next point in the sequence or taking a shortcut. It is concluded that Ss had an internal pictorial map since they took shortcuts with the same ease as they took the originally learned path segments. The manifestation of orientation was particularly dramatic, with Ss moving in the wrong direction (angle error greater than 90°) on more than 25% of the specified trials. (39 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
During locomotion, people need to keep up-to-date on their changing spatial orientation so that they can coordinate the force and direction of their actions with their surroundings. In 4 experiments concerning spatial orientation while walking without vision, 4-yr-olds and adults viewed 1 or more targets, were blindfolded, were guided to a new point of observation, and were asked to aim a pointer at the target(s). Spatial orientation was assessed as a function of the number of target objects (1, 3, or 5), the complexity of the route walked, and the time delay between last viewing the targets and responding. The number of targets did not influence accuracy. The significant effects of age and route complexity on spatial orientation are discussed in terms of processes involved in visual perception of distance, in sensitivity to proprioceptive information while walking, and in calibration of the scale of vision and proprioception. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The results of two types of experiments are reported. In 1 type, Ss matched depth intervals on the ground plane that appeared equal to frontal intervals at the same distance. The depth intervals had to be made considerably larger than the frontal intervals to appear equal in length, with this physical inequality of equal-appearing intervals increasing with egocentric distance of the intervals (4 m-12 m). In the other type of experiment, Ss viewed targets lying on the ground plane and then, with eyes closed, attempted either to walk directly to their locations or to point continuously toward them while walking along paths that passed off to the side. Performance was quite accurate in both motoric tasks, indicating that the distortion in the mapping from physical to visual space evident in the visual matching task does not manifest itself in the visually open-loop motoric tasks.
Article
The present paper demonstrates that mental rotation as used in the processing of disoriented objects (Cooper and Shepard 1973) can also be used as an explanatory concept for the processing of perspective problems in which the task is to imagine how an environment will appear from another vantage point. In a cognitive map, subjects imagined an initial line of vision and subsequently processed a reorientation stimulus, requesting them to imagine a turn over 0, 45, 90, 135, or 180 degrees. Time for a reorientation increased linearly with the size of the imaginary turn up to 135 degrees and decreased for turns of 180 degrees; apparently, about-faces were relatively easy to imagine. The increment of reorientation time between 0 and 135 degrees was larger for maps presented in unfamiliar orientations such as South-West up. Both the increment and the interaction with familiarity are consistent with an explanation in terms of mental rotation.
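The pattern described above can be restated as a simple piecewise relation; this is an illustrative formalization introduced here, with RT_0 and k as assumed symbols, not an equation from the paper.

```latex
% Illustrative restatement (assumed symbols, not from the paper)
\[
  RT(\theta) \approx RT_0 + k\,\theta \quad (0^\circ \le \theta \le 135^\circ),
  \qquad RT(180^\circ) < RT(135^\circ) \ \text{(about-face advantage)}.
\]
```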
Article
Two experiments were performed to assess the accuracy and precision with which adults perceive absolute egocentric distances to visible targets and coordinate their actions with them when walking without vision. In experiment 1 subjects stood in a large open field and attempted to judge the midpoint of self-to-target distances of between 4 and 24 m. In experiment 2 both highly practiced and unpracticed subjects stood in the same open field, viewed the same targets, and attempted to walk to them without vision or other environmental feedback under three conditions designed to assess the effects on accuracy of time-based memory decay and of walking at an unusually rapid pace. In experiment 1 the visual judgments were quite accurate and showed no systematic constant error. The small variable errors were linearly related to target distance. In experiment 2 the briskly paced walks were accurate, showing no systematic constant error, and the small, variable errors were a linear function of target distance and averaged about 8% of the target distance. Unlike Thomson's (1983) findings, there was not an abrupt increase in variable error at around 9 m, and no significant time-based effects were observed. The results demonstrate the accuracy of people's visual perception of absolute egocentric distances out to 24 m under open field conditions. The accuracy of people's walking without vision to previously seen targets shows that efferent and proprioceptive information about locomotion is closely calibrated to visually perceived distance. Sensitivity to the correlation of optical flow with efferent/proprioceptive information while walking with vision may provide the basis for this calibration when walking without vision.
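To make the reported error magnitude concrete, the relation below restates the roughly 8% figure as a proportional rule; the symbol σ(d) is introduced here for illustration, and the article reports only the percentage.

```latex
% Illustrative restatement of the ~8% variable error (not an equation from the paper)
\[
  \sigma(d) \approx 0.08\,d, \qquad \text{e.g. } \sigma(24\ \mathrm{m}) \approx 1.9\ \mathrm{m}.
\]
```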
Article
Adults were asked to judge the self-to-object directions in a room from novel points of observation that differed from their actual point at times only by a rotation and at other times only by a translation. The results show for the rotation trials that the errors and latencies when a novel point was imagined were worse than the baseline responses from their actual points of observation, and the latencies varied as a function of the magnitude of the to-be-imagined rotation. For the translation trials, on the other hand, the errors and latencies when a novel point was imagined were comparable to the baseline responses from their actual point and did not vary significantly across the different imagined station points. The evidence indicates that subjects know the object-to-object relations directly, without going through the origin of a coordinate system. In addition, similarities in processing during imagination on the one hand, and perception and action on the other are discussed.
Article
Three experiments were conducted to investigate the ability of subjects to make judgments of direction when using misaligned maps. Two hypotheses were proposed: (i) errors would fall into two lawful categories, mirror-image errors and alignment errors; and (ii) the effect of map orientation would generalize to a different mode of responding than has been used in previous studies. Support for both hypotheses was obtained. The results are discussed in terms of the mental processes used to align maps to spaces and the task demands required by different response modes.
Article
Experiments are reported of the nonvisual sensitivity of observers to their paths of locomotion and to the resulting changes in the structure of their perspectives, i.e., changes in the network of directions and distances spatially relating them to objects fixed in the surrounding environment. In the first experiment it was found that adults can keep up to date on the changing structure of their perspectives even in the absence of sights and sounds that specify changes in self-to-object relations. They do this rapidly, accurately, and, according to the subjects' reports, automatically, as if perceiving the new perspective structures. The second experiment was designed to investigate the role of visual experience in the development of sensitivity to occluded changes in perspective structure by comparing the judgments of sighted adults with those of late-blinded adults (who had extensive life histories of vision) and those of early-blinded adults (who had little or no history of vision). The three groups performed similarly when asked to judge perspective while imagining a new point of observation. However, locomoting to the new point greatly facilitated the judgments of the sighted and late-blinded subjects, but not those of the early-blinded subjects. The findings indicate that visual experience plays an important role in the development of sensitivity to changes in perspective structure when walking without vision.