RESEARCH ARTICLE
Frames of reference and categorical and coordinate spatial relations: a hierarchical organisation
Francesco Ruotolo · Tina Iachini · Albert Postma · Ineke J. M. van der Ham
Received: 10 June 2011 / Accepted: 27 August 2011 / Published online: 13 September 2011
© Springer-Verlag 2011
Abstract This research is about the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is first encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if the categorical and coordinate dimensions are primary, then facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing in which reference frames play a primary role and selectively interact with subsequent processing of spatial relations.
Keywords Spatial processing · Egocentric/allocentric frames of reference · Categorical/coordinate spatial relations · Instruction timing
Introduction
In order to successfully deal with everyday activities, human beings need to continuously process spatial information. For example, if we want to sit down, we need to know where the chair is with respect to our body. To recognise an object, we may rely on the relationships between the parts of the object (e.g. usually, the handle is not on top of a cup, but on its side). These two kinds of spatial information are encoded by using an egocentric and an allocentric frame of reference, respectively (Kosslyn 1994; Milner and Goodale 1995; Paillard 1991). Egocentric frames of reference define spatial information with respect to the observer, whereas allocentric frames of reference are independent of the observer's position: spatial information is referred to external elements such as objects or parts of objects (Kosslyn 1994; O'Keefe and Nadel 1978; Paillard 1971). The distinction between egocentric and allocentric frames of reference is supported by many behavioural and neurofunctional studies (Committeri et al. 2004; Iachini et al. 2009a, b; Vallar et al. 1999; Zaehle et al. 2007).
F. Ruotolo (✉) · T. Iachini
Department of Psychology, Second University of Naples, Via Vivaldi 43, 81100 Caserta, Italy
e-mail: francesco.ruotolo@unina2.it

A. Postma · I. J. M. van der Ham
Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, The Netherlands

A. Postma
Department of Neurology, University Medical Centre Utrecht, Utrecht, The Netherlands
Exp Brain Res (2011) 214:587–595
DOI 10.1007/s00221-011-2857-y
Typically, frames of reference are necessary to organise spatial relations. While frames of reference specify the point to which a location is anchored, spatial relations indicate the kind of spatial information. Spatial relations can be coordinate (i.e. metric: the chair is one metre from you) or categorical (non-metric, such as left/right, above/below), as first proposed by Kosslyn (1987; see also 1994). Numerous studies have shown that separate neural circuits in the left hemisphere and in the right hemisphere subserve categorical and coordinate spatial processing, respectively (for a review: Jager and Postma 2003; Laeng 1994).
These two dimensions seem to represent different but complementary aspects: we cannot process a metric or abstract spatial relation without specifying a frame of reference (Ruotolo et al. 2011). Furthermore, they have been assigned similar functions in two influential theories. In particular, Kosslyn suggested that coordinate representations specify precise spatial relations in a way that is useful for guiding action (e.g. reaching, grasping, navigating), whereas categorical representations are more useful in perception/recognition tasks because they are involved in a critical aspect of the invariant representation of an object's shape (Kosslyn 1987, 1994; Kosslyn et al. 1992).
Importantly, similar functions are assigned by Milner and Goodale (1995, 2008) to the allocentric/egocentric distinction in their vision-for-action and vision-for-perception model. Their model, as recently highlighted by Schenk and McIntosh (2010), suggests that in order to visually guide an action towards an object, the object's position must be coded egocentrically and its spatial dimensions measured in absolute metrics. In contrast, if the purpose is to recognise an object, the visual system must rely on viewer-invariant relationships (not absolute metrics), thus spatial features should be coded in allocentric frames of reference. However, several studies have shown that egocentric representations can also be used in recognition (Shelton and McNamara 2004) and visual search tasks (Ball et al. 2009; van der Ham et al. 2011) and can interact with allocentric representations during visuo-spatial judgement tasks (Ruotolo et al. 2011). This suggests that there is not a strict relationship between allocentric frames of reference and object/scene recognition, because egocentric frames of reference may also play an important role. On the other hand, allocentric representations may also interact with egocentric processing during online grasping control (Heath et al. 2006). Overall, these findings indicate that the use of an egocentric and/or an allocentric frame is not necessarily dependent on task purposes. Furthermore, the distinction between egocentric and allocentric frames of reference seems crucial to interpret several behavioural and neurofunctional findings (Possin 2010; Schenk 2006; Schenk and McIntosh 2010). Contrary to Milner and Goodale's interpretation (1995), some studies of visual illusions suggest that the presence of illusion biases on visuo-perceptual but not visuo-motor performance may depend more on the type of spatial coding used to accomplish the task, that is, egocentric or allocentric, than on the response modality (action-dependent or perception-dependent) required by the task (Bruno et al. 2008; see also Becker et al. 2009; Franz 2001; Wraga et al. 2000).
The argument for this 'spatial encoding primacy' is also supported by a recent neuropsychological study with patient DF (Schenk 2006), who suffers from bilateral damage to the ventral stream. Schenk (2006) decoupled the perception/action functions from egocentric/allocentric encodings and, contrary to what was found by Goodale and Milner (1992), showed that DF had a 'normal' performance in both perceptual judgements and motor responses when egocentric encoding was requested, whereas she showed an impairment when allocentric encoding was required.
These studies underscore the important role of spatial encoding, in particular of egocentric and allocentric reference frames, in organising information for different behavioural purposes, both perceptual and motor. However, it is not clear how the egocentric/allocentric and categorical/coordinate dimensions are related: do they have the same or different roles in processing spatial information? In other words, is spatial information primarily encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations?

There are three possibilities: (a) spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding; (b) the categorical/coordinate distinction has a primary role with respect to egocentric/allocentric processing; (c) the egocentric/allocentric and categorical/coordinate dimensions have the same role in spatial information processing.
We addressed this issue by using an experimental paradigm similar to that used by Ruotolo and colleagues in a previous study (2011). Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). While keeping this same general task design, in the current study the timing of the instructions was manipulated: in one condition, participants were instructed to encode the stimulus according to one of the two frames of reference (egocentric or allocentric) before stimulus presentation, and after stimulus presentation they were asked to give a categorical or a coordinate judgement. In another condition, participants were instructed to encode the categorical or coordinate characteristics of the stimulus before stimulus presentation, and only after stimulus presentation were they asked to give an egocentric or an allocentric judgement.
If spatial processing requires egocentric/allocentric
encoding before coordinate/categorical encoding, then
spatial judgements should be facilitated when the frame of
reference is specified in advance. In contrast, if categorical
and coordinate dimensions are primary, then spatial
judgements should be facilitated when the kind of spatial
relation is specified in advance. Finally, if the two spatial
processes have the same role, then no significant effect of
timing of instructions should emerge.
Method
Participants
Forty-eight students (24 men and 24 women; mean age = 22.80, SD = 2.60; range: 18–28) from the Second University of Naples participated in the experiment in exchange for course credit. All participants were right-handed and had normal or corrected-to-normal vision.
Apparatus
The experiment took place in a darkened room in order to prevent any interference from allocentric cues. Participants were seated in front of a 17-inch computer screen (1,280 × 768 pixels), at a distance of 50 cm. All participants were asked whether they were able to see the monitor's edges, either before or during the experimental trials, and nobody reported that they could. A chin rest was used to keep the head still in front of the exact centre of the screen. The stimuli were displayed on a black background (24-bit RGB colour coding: 0, 0, 0). They were generated by a PC running Microsoft Windows XP, and the software SuperLab Pro 2.0 was used for stimulus presentation. A serial mouse was used to register participants' responses.
Stimuli
The stimuli were formed by the combination of two vertical white target bars (width: 0.3 mm; length: 2 mm; 24-bit RGB colour coding: 255, 255, 255), one above and the other below a white horizontal bar (width: 0.3 mm; length: 4.5 cm; 24-bit RGB colour coding: 255, 255, 255). Participants judged whether the two vertical bars appeared at the same distance or not (coordinate task), or whether they appeared on the same side or not (categorical task), with respect to egocentric and allocentric reference points. When the vertical bars were referred to the centre of the horizontal bar, they constituted the allocentric positions; when they were referred to the participant's body midline, they were considered the egocentric positions. The combination of 'distance' and 'side' generated three possible spatial configurations of the vertical bars with respect to the reference points: the two vertical bars could be placed at the Same Distance on Different Sides (SDDS), at Different Distances on the Same Side (DDSS) or at Different Distances on Different Sides (DDDS) with respect to the reference point (see Fig. 1). For the 'DDSS' and 'DDDS' configurations, the position of the two vertical bars was manipulated in order to obtain three levels of metric difficulty: 2, 4 and 8 mm. For example, a difficulty of 4 mm in the DDSS configuration was obtained by placing one of the two vertical bars at 4 mm and the other at 8 mm on the same side with respect to the reference point (the centre of the horizontal bar or the body midline), whereas in the DDDS configuration one of the two vertical bars was located at 4 mm on the right and the other at 8 mm on the left with respect to the reference point. Thus, in all these trials, judgements about the position of the two vertical bars were based on a metric difference of 4 mm. By following the same logic, metric difficulties of 2 and 8 mm were obtained. In this way, we could ensure that all judgements were based on the same levels of metric difficulty.

Finally, in the SDDS stimuli, the two vertical bars were both placed at 2, 4 or 8 mm on opposite sides with respect to the reference point and the metric difference was, of course, zero. These factors, i.e. the three distance/side combinations and the three metric levels, led to nine basic arrangements of stimuli.
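The nine basic arrangements (three distance/side configurations × three metric levels) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the original stimulus code: the signed-offset convention and the doubling of distances at the 2 and 8 mm levels are assumptions based on the "same logic" description above.

```python
# Sketch of the nine basic stimulus arrangements. For difficulty d,
# DDSS/DDDS place the bars at d and 2*d from the reference point
# (assumption generalising the 4 mm example in the text).
# Positive offsets = right of the reference point, negative = left.

def arrangements():
    stimuli = []
    for d in (2, 4, 8):  # metric difficulty in mm
        stimuli.append(("SDDS", d, (+d, -d)))    # same distance, different sides
        stimuli.append(("DDSS", d, (+d, +2 * d)))  # different distances, same side
        stimuli.append(("DDDS", d, (+d, -2 * d)))  # different distances, different sides
    return stimuli

for config, d, (a, b) in arrangements():
    metric_diff = abs(abs(a) - abs(b))  # distance difference to be judged (coordinate)
    same_side = (a > 0) == (b > 0)      # side relation to be judged (categorical)
    print(config, d, metric_diff, same_side)
```

Note that, as the text states, the distance difference is d for DDSS and DDDS and zero for SDDS, so within a difficulty level all coordinate judgements rest on the same metric difference.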
In order to distinguish the two frames of reference, either the horizontal bar or the entire configuration was displaced. In the egocentric condition, for each egocentric position of the two vertical bars, the centre of the horizontal bar could appear at 4 or 8 mm, to the right or to the left, with respect to the centre of the screen. In this way, the target positions with respect to the body midline remained the same, but irrelevant allocentric information, i.e. the centre of the horizontal bar, varied. In the allocentric condition, the entire stimulus configuration could appear at 4 or 8 mm, to the right or to the left, with respect to the centre of the screen. Therefore, the allocentric positions of the two vertical bars remained the same, but irrelevant egocentric information, i.e. the position of the target with respect to the extension of the body midline, varied. We chose the two levels of misalignment, 4 and 8 mm, in order to avoid excessive allocentric facilitation and to obtain a comparable level of discrimination for the egocentric and allocentric frames (see Neggers et al. 2005). In total, nine stimuli were aligned, nine were misaligned to the right and nine were misaligned to the left. In order to obtain the same number of confirmative (i.e. same distance or same side) and negative (i.e. different distance or different side) responses, nine stimuli from two spatial configurations were presented twice in both the coordinate and categorical tasks. Therefore, 36 trials were presented for each task (total number = 288 trials).
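The trial counts above can be verified with simple arithmetic; this is a bookkeeping sketch based on the numbers in the text, not part of the original materials.

```python
# Trial bookkeeping for the experiment, following the counts in the text.
basic_arrangements = 3 * 3  # 3 distance/side configurations x 3 metric levels
alignments = 3              # aligned, misaligned right, misaligned left
stimuli_per_task = basic_arrangements * alignments  # 27 distinct stimuli

repeats_for_response_balance = 9  # nine stimuli presented twice per task
trials_per_task = stimuli_per_task + repeats_for_response_balance  # 36

tasks = 2 * 2 * 2  # frame of reference x spatial relation x instruction timing
total_trials = trials_per_task * tasks

print(trials_per_task, total_trials)  # 36 trials per task, 288 in total
```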
Exp Brain Res (2011) 214:587–595 589
123
Procedure
Participants saw two white vertical bars (targets), one above and the other below a white horizontal bar. They had to judge whether the two vertical bars appeared at the same distance or not with respect to their body midline (egocentric-coordinate task) or with respect to the centre of the horizontal bar (allocentric-coordinate task). Moreover, they had to decide whether the two vertical bars were on the same side or not with respect to either their body midline (egocentric-categorical task) or the centre of the horizontal bar (allocentric-categorical task). However, information about which kind of spatial relation and which kind of frame of reference to use for the spatial judgements was given at two separate moments with respect to stimulus presentation. In the 'Frames of reference before' (FoRb) condition, participants were asked to encode the stimulus according to one of the two frames of reference (egocentric or allocentric) before stimulus presentation, and after stimulus presentation they were asked to give a categorical or a coordinate judgement. Instead, in the 'Spatial relations before' (SRb) condition, participants were asked to encode the categorical or coordinate characteristics of the stimulus before stimulus presentation, and only after stimulus presentation were they asked to give an egocentric or an allocentric judgement. The experiment consisted of eight different tasks. During the experimental session, there was a 10-s pause after every 36 trials.

A trial in the FoRb condition started with the auditory presentation of the word 'Corpo' ('body' in English) if participants had to encode the relationship between the two vertical bars with respect to their body midline, or of the word 'Barra' ('bar' in English) if they had to encode the relationship between the two vertical bars with respect to the centre of the horizontal bar. Afterwards, a grey fixation cross (width: 0.3 mm; length: 2 mm) in a grey dotted square (3.5 × 3.5 cm) was presented at the centre of the screen. Participants were instructed to fixate the fixation cross for 500 ms; next, the cross disappeared and they had to maintain ocular fixation within the dotted square for 1,000 ms. This was inserted in order to avoid the scope of attention being differently primed in each frame of reference condition (Okubo et al. 2010; Laeng et al. 2011). Participants knew that the area of presentation of the stimuli was limited to that indicated by the dotted square, that is, the centre of the screen. As soon as the dotted square disappeared, one of the stimuli was presented for 100 ms. Afterwards, the screen went blank and the word 'Distanza' ('distance' in English) was presented auditorily if a coordinate judgement had to be expressed; in the case of a categorical judgement, the word 'Lato' ('side' in English) was provided. Participants had 2 s to press the right (affirmative answer) or left (negative answer) button of the mouse to give the response. If they failed to respond within 2 s, a text was presented on the screen indicating that they did not respond in time. The duration of all auditory cues was 500 ms. All the cues were given in Italian. Figure 2 gives an example of the experimental flow for both the 'FoRb' and 'SRb' conditions.

Fig. 1 Examples of stimuli used in the experiment. The dotted grey lines indicate the body midline. The white dotted lines indicate the centre of the horizontal bar. In the first row, the three spatial configurations of the two vertical bars are shown: (1) Different Distances Different Sides; (2) Different Distances Same Side; (3) Same Distance Different Sides. In 1, 2 and 3, examples of alignment between egocentric and allocentric reference frames are shown. In 1b, 2c and 3a, examples of misalignment between egocentric and allocentric reference frames are shown: in 1b, misalignment is created by displacing the entire stimulus configuration; in 2c, misalignment is created by displacing only the horizontal bar; in 3a, misalignment is created by displacing only the horizontal bar.
Before the experimental session, there was a training session: the different tasks were explained to participants, who then performed 20 trials in which feedback was given. The presentation of all 288 trials was randomised for each participant.
The experimental design comprised three 2-level within-subjects variables: Frames of reference (Egocentric vs. Allocentric), Spatial relations (Coordinate vs. Categorical) and Instructions timing (FoRb vs. SRb). Accuracy (mean proportion of correct judgements) and response time (in milliseconds) were the dependent variables.
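The trial structure described in the Procedure can be summarised as an event timeline. Durations and cue words are taken from the text (all auditory cues last 500 ms); the function and data layout are illustrative assumptions, not the original experiment code.

```python
# Event timeline of a single trial (durations in ms, from the Procedure).
# In FoRb the frame-of-reference cue comes first and the spatial-relation
# cue comes after the stimulus; in SRb the order is reversed. The word
# within each cue pair ('Corpo'/'Barra', 'Distanza'/'Lato') depends on
# the specific task.

def trial_timeline(condition, frame_cue, relation_cue):
    first, second = ((frame_cue, relation_cue) if condition == "FoRb"
                     else (relation_cue, frame_cue))
    return [
        (first, 500),             # auditory cue before the stimulus
        ("fixation cross", 500),  # cross shown inside the dotted square
        ("dotted square", 1000),  # fixation maintained within the square
        ("stimulus", 100),
        (second, 500),            # auditory cue after the stimulus
        ("response window", 2000),
    ]

for event, duration in trial_timeline("FoRb", "Corpo", "Distanza"):
    print(event, duration)
```

For an SRb egocentric-categorical trial, for example, `trial_timeline("SRb", "Corpo", "Lato")` puts 'Lato' before the stimulus and 'Corpo' after it.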
Results
Analyses were based on three-way repeated-measures ANOVAs with terms for 'Frames of reference' (egocentric, allocentric), 'Spatial relations' (categorical, coordinate) and 'Instructions timing' (FoRb, SRb). Scheffé's test was used to analyse post hoc effects.
For accuracy, the ANOVA revealed a main effect of the variable 'Frames of reference', due to allocentric judgements (M = .77, SD = .05) being more accurate than egocentric judgements (M = .66, SD = .03), F(1, 47) = 202.02, P < .0001, η_p² = .81. A main effect of 'Spatial relations' was also found, F(1, 47) = 98.79, P < .0001, η_p² = .67; this effect was due to categorical judgements (M = .76, SD = .06) being better than coordinate judgements (M = .66, SD = .04). Furthermore, a main effect of the variable 'Instructions timing' appeared, F(1, 47) = 51.67, P < .0001, η_p² = .52: participants performed significantly better when the instructions about the frames of reference were given before (M = .74, SD = .04) instead of after (M = .69, SD = .04) stimulus presentation.
No significant 2-way interaction was found (Frames of reference × Spatial relations: F(1, 47) = 1.20, P = .28, η_p² = .025; Frames of reference × Instructions timing: F(1, 47) = .36, P = .55, η_p² = .0075; Spatial relations × Instructions timing: F(1, 47) = 1.36, P = .21, η_p² = .034), but a 3-way interaction appeared: F(1, 47) = 4.85, P < .05, η_p² = .09. The post hoc analysis revealed that the variable 'Instructions timing' influenced the relationship between frames of reference and spatial relations. In the 'SRb' condition, allocentric categorical judgements (M = .79, SD = .12) were more accurate than all other judgements (Ps < .0001; allocentric coordinate: M = .71, SD = .08; egocentric categorical: M = .70, SD = .04; egocentric coordinate: M = .57, SD = .08), whereas the egocentric coordinate judgements were the least accurate (Ps < .01). The same pattern appeared in the 'FoRb' condition: allocentric categorical judgements (M = .86, SD = .09) were the most accurate (Ps < .001; allocentric coordinate: M = .73, SD = .08; egocentric categorical: M = .75, SD = .04; egocentric coordinate: M = .63, SD = .08), and egocentric coordinate judgements were the least accurate (Ps < .0001). However, as shown in Fig. 3, coordinate performance improved when an egocentric frame of reference was specified before rather than after stimulus presentation, F(1, 47) = 9.92, P < .005, η_p² = .17, whereas categorical performance improved when the allocentric frame was given before stimulus presentation, F(1, 47) = 23.53, P < .0001, η_p² = .33. No other significant differences were found.

Fig. 2 This figure depicts a schematic overview of two trials of the experiment. (a) Condition 'SRb'. At t = 0 ms, participants hear the word 'Distanza' ('distance' in English) if they have to encode coordinate relations, or the word 'Lato' ('side' in English) if they have to encode categorical relations; at t = 500 ms, the fixation cross is displayed; when the cross disappears, only the dotted square remains for 1,000 ms; then the to-be-judged stimulus is displayed for 100 ms, after which the participants hear the word 'Corpo' ('body' in English) for an egocentric judgement or the word 'Barra' ('bar' in English) for an allocentric judgement. After the second word, participants have 2 s for the response. (b) Condition 'FoRb'. In this condition, participants hear the word about the frame of reference before stimulus presentation and the word about the spatial relation after stimulus presentation.
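The partial eta squared values reported here can be recovered from each F statistic and its degrees of freedom via the standard relation η_p² = F·df_effect / (F·df_effect + df_error). A quick cross-check in Python (not part of the original analysis):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Cross-check against values reported in the Results:
print(round(partial_eta_squared(202.02, 1, 47), 2))  # frames of reference, accuracy: .81
print(round(partial_eta_squared(51.67, 1, 47), 2))   # instructions timing, accuracy: .52
print(round(partial_eta_squared(17.76, 1, 47), 2))   # coordinate RT, timing contrast: .27
```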
For response latency (ms), the ANOVA revealed a main effect of 'Frames of reference', F(1, 47) = 11.68, P < .001, η_p² = .20: allocentric judgements (M = 913.5, SD = 97.65) were faster than egocentric ones (M = 948.3, SD = 102.5). A main effect of 'Spatial relations' was also found, F(1, 47) = 33.28, P < .0001, η_p² = .41, due to categorical judgements (M = 903.4, SD = 102.6) being faster than coordinate judgements (M = 958.4, SD = 95.9). In line with the previous results, a main effect of 'Instructions timing' was also found, F(1, 47) = 14.25, P < .0005, η_p² = .23: participants were faster when the frames of reference were given before (M = 913.7, SD = 88.4) than after (M = 948.13, SD = 108.3) the presentation of the stimulus.

A 2-way interaction between coordinate/categorical spatial processing and instructions timing emerged, F(1, 47) = 5.28, P < .05, η_p² = .10 (see Fig. 4). The post hoc analysis revealed that presenting the instructions about the frames of reference before or after the stimulus did not influence categorical performance (F < 1), but did influence coordinate performance, F(1, 47) = 17.76, P < .0001, η_p² = .27: coordinate judgements were significantly faster when the instructions about the frames of reference were given before (M = 930.74, SD = 99.41) the presentation of the stimulus than after (M = 986, SD = 112.42). Finally, categorical judgements were faster than coordinate ones, both when the frame of reference was presented before the stimulus (categorical FoRb: M = 896.61, SD = 91.24), F(1, 47) = 10.86, P < .005, η_p² = .19, and after it (categorical SRb: M = 930.74, SD = 99.41), F(1, 47) = 24.1, P < .0001, η_p² = .34.
In order to verify whether artifactual procedural features, such as the level of difficulty of the various trials and the alignment/misalignment between egocentric and allocentric frames of reference, could have influenced the pattern of results, further analyses were carried out. Taking into account the possibility that metric difficulty could selectively affect spatial judgements, we performed a four-way repeated-measures ANOVA with terms for frames of reference, spatial relations type, instructions timing and metric difficulty (2, 4, and 8 mm). With regard to accuracy, results did not show a main effect of difficulty level (F < 1). Furthermore, the absence of any significant interaction effect involving the variable 'metric difficulty' (Ps > .19) indicated that the levels of difficulty of the tasks did not influence the pattern of results previously found. The same pattern of results was found for response latency.

Fig. 3 Mean accuracy for categorical and coordinate spatial judgements as a function of egocentric and allocentric frames of reference and timing conditions
Some studies have shown that in conditions of misalignment, irrelevant allocentric information can influence egocentric visuo-spatial judgements (Neggers et al. 2005, 2006) and vice versa (Sterken et al. 1999; Ruotolo et al. 2011). To see whether this influence was also present in this study, a four-way repeated-measures ANOVA with terms for frames of reference, spatial relations type, instructions timing and alignment (aligned/misaligned) was carried out. Results showed neither four- nor three-way interaction effects (in all cases P > .19), but only a main effect of the variable 'Alignment', F(1, 47) = 27.44, P < .0001, η_p² = .36: judgements in the aligned condition (M = .76, SD = .08) were more accurate than in the misaligned condition (M = .70, SD = .06). The same pattern of results appeared for response times. These results indicate that although irrelevant information influenced egocentric and allocentric judgements in misalignment conditions, it did not affect the relationship between frames of reference and spatial relations.
Discussion
The research reported here focused on the role of allocentric/egocentric frames of reference with respect to categorical/coordinate spatial relations in organising spatial information. Participants made spatial judgements that combined one type of reference frame with one type of spatial relation. Overall, the results indicate that pre-cueing participants with a frame of reference leads to more accurate and faster spatial judgements than pre-cueing participants with spatial relations. The improvement was particularly evident for categorical judgements when the stimulus had to be encoded according to an allocentric reference frame, and for coordinate judgements when an egocentric frame was specified before stimulus presentation. It is important to point out that this effect was influenced neither by the difficulty of the task nor by the conditions of alignment/misalignment between the two frames of reference. Moreover, in no case did spatial judgements improve when spatial relation cues were given before stimulus presentation. Therefore, it seems that for spatial information to be processed efficiently, the first necessary operation is to encode it according to frames of reference. This suggests that spatial information processing could be considered to be organised in a hierarchical fashion, with frames of reference having a primary role with respect to categorical/coordinate spatial relations. However, it is important to highlight that, given the current experimental design, this interpretation is limited to the visuo-perceptual organisation of spatial information. Nevertheless, this finding can be considered further support for studies that show a 'primacy' of reference frames in several tasks addressing visual illusion effects and perception/action dissociations (Becker et al. 2009; Heath et al. 2006; Schenk 2006; Wraga et al. 2000). Furthermore, it seems in line with recent proposals suggesting that visual spatial cognition is organised according to egocentric–allocentric frames of reference (e.g. Possin 2010).
The results also showed that coordinate judgements improved particularly when an egocentric frame was given before stimulus presentation, whereas categorical judgements improved particularly when an allocentric frame was required. This gives some insight into the relationship between the two spatial distinctions. In line with Jager and Postma (2003) and Ruotolo et al. (2011), the selective facilitation for the egocentric/coordinate and allocentric/categorical combinations would suggest that frames of reference and spatial relations form interactive dimensions. One possibility is that the pattern of facilitation might be the expression of a functional link between frames of reference and spatial relations. The models proposed by Kosslyn (1994) and Milner and Goodale (1995) would suggest that coordinate spatial information specified according to egocentric frames of reference is necessary to guide actions, whereas allocentric categorical information is more useful to recognise scenes or objects. In other words, allocentric processing would be closer to categorical coding of spatial relations, whereas egocentric processing would be closer to coordinate coding.
Fig. 4 Mean response time for categorical and coordinate spatial judgements as a function of timing conditions

Exp Brain Res (2011) 214:587–595 593
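The categorical and coordinate judgements required by the task can be made concrete with a small sketch (ours, not the authors' code), assuming 1-D horizontal positions; `body_midline`, `bar_centre` and the `tolerance` parameter are hypothetical stand-ins for the egocentric reference, the allocentric reference and perceptual discrimination limits, respectively.

```python
# Illustrative sketch (not the authors' code) of the two judgement types in
# the bar task, assuming 1-D horizontal positions; the tolerance parameter
# is our addition, standing in for perceptual discrimination limits.

def categorical_same_side(x1, x2, reference):
    """Categorical: do both vertical bars lie on the same side of the reference?"""
    return (x1 - reference) * (x2 - reference) > 0

def coordinate_same_distance(x1, x2, reference, tolerance=0.0):
    """Coordinate: do both bars lie at the same distance from the reference?"""
    return abs(abs(x1 - reference) - abs(x2 - reference)) <= tolerance

body_midline = 0.0   # egocentric reference (hypothetical value)
bar_centre = 2.0     # allocentric reference: centre of the horizontal bar

# Two vertical bars at x = 1.0 and x = 3.0:
print(categorical_same_side(1.0, 3.0, body_midline))   # True: both right of midline
print(categorical_same_side(1.0, 3.0, bar_centre))     # False: opposite sides
print(coordinate_same_distance(1.0, 3.0, bar_centre))  # True: both 1.0 away
```

Note that the same pair of bar positions yields different answers depending on which reference is used, which is why the frame must be fixed before either relation can be evaluated.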
Our data are partially in line with this interpretation. Even though an advantage of allocentric categorical over allocentric coordinate judgements appeared, an advantage of egocentric categorical over egocentric coordinate judgements also emerged. More specifically, participants performed worse in the egocentric coordinate task than in all other tasks, although a small improvement was observed when the egocentric frame was indicated before stimulus presentation. By contrast, the results clearly speak in favour of the allocentric–categorical combination, whose judgements were the most accurate and fastest. As suggested by Ruotolo et al. (2011), this could be because the task used in this research, based on 2-D stimuli and requiring a non-visually driven motor response, might have enhanced perceptual/recognition components and hence favoured allocentric and categorical judgements.
In turn, the characteristics of the task could have limited the emergence of the interaction between egocentric and coordinate components. For example, Ruotolo et al. (2011) showed that simply reducing the luminance of the horizontal bar with respect to the two vertical bars was enough to improve coordinate judgements, particularly when combined with an egocentric frame of reference. In our experiment, by contrast, the two vertical bars (the target bars) and the horizontal bar (the allocentric cue) had the same luminance, which could have stressed the allocentric relations to the detriment of the egocentric ones and, in turn, masked the interaction with the coordinate components.
In sum, although the appearance of a selective facilitation suggests particular links between the egocentric–coordinate and allocentric–categorical combinations, the general pattern of results, in line with our hypotheses, suggests that egocentric and allocentric frames of reference play a primary role with respect to categorical and coordinate spatial relations in processing spatial information: when spatial information is first processed and organised according to a frame of reference, spatial performance is generally more accurate and faster. This would confirm the importance of the kind of
spatial encoding in visuo-spatial information processing
(Ball et al. 2009; Bruno 2001; Schenk 2006). Moreover,
these data shed light on the relationship between the two
kinds of spatial encoding (Jager and Postma 2003). Indeed,
the presence of a hierarchical organisation between the two dimensions seems to suggest that they can be considered different but not independent processes, because egocentric and allocentric frames of reference may interact with coordinate and categorical spatial relations. However,
future studies should investigate the possible influence of
task characteristics such as kinds of stimuli (2D/3D),
response modalities (verbal/motor) and temporal parame-
ters (immediate/delayed response) on the relationship
between egocentric/allocentric frames of reference and
coordinate/categorical spatial relations.
References
Ball K, Smith D, Ellison A, Schenk T (2009) Both egocentric and
allocentric cues support spatial priming in visual search.
Neuropsychologia 47:1585–1591
Becker SI, Ansorge U, Turatto M (2009) Saccades reveal that
allocentric coding of the moving object causes mislocalisation in
the flash-lag effect. Atten Percept Psychophys 71:1313–1324
Bruno N (2001) When does action resist visual illusions? Trends
Cogn Sci 5:379–382
Bruno N, Bernardis P, Gentilucci M (2008) Visually guided pointing, the Müller-Lyer illusion, and the functional interpretation of the dorsal-ventral split: conclusions from 33 independent studies. Neurosci Biobehav Rev 32:423–437
Committeri G, Galati G, Paradis A, Pizzamiglio L, Berthoz A,
LeBihan D (2004) Reference frame for spatial cognition:
different brain areas are involved in viewer-, object-, and
landmark-centered judgements about object location. J Cogn
Neurosci 16:1517–1535
Franz VH (2001) Action does not resist visual illusions. Trends Cogn
Sci 5:457–459
Goodale MA, Milner AD (1992) Separate visual pathways for
perception and action. Trends Neurosci 15:20–25
Heath M, Rival C, Neely K, Krigolson O (2006) Müller-Lyer figures
influence the online reorganization of visually guided grasping
movements. Exp Brain Res 169:473–481
Iachini T, Ruggiero G, Ruotolo F (2009a) The effect of age on
egocentric and allocentric spatial frames of reference. Cogn
Process 10:222–224
Iachini T, Ruotolo F, Ruggiero G (2009b) The effects of familiarity
and gender on spatial representation of a real environment.
J Environ Psychol 29:227–234
Jager G, Postma A (2003) On the hemispheric specialization for
categorical and coordinate spatial relations: a review of the
current evidence. Neuropsychologia 41:504–515
Kosslyn SM (1987) Seeing and imagining in the cerebral hemi-
spheres: a computational analysis. Psychol Rev 94:148–175
Kosslyn SM (1994) Image and brain: the resolution of the imagery
debate. MIT Press, Cambridge
Kosslyn SM, Chabris CF, Marsolek CJ, Koenig O (1992) Categorical versus coordinate spatial relations: computational analyses and computer simulations. J Exp Psychol Hum Percept Perform 18:562–577
Laeng B (1994) Lateralization of categorical and coordinate spatial
functions. A study of unilateral stroke patients. J Cogn Neurosci
6:189–203
Laeng B, Okubo M, Saneyoshi A, Michimata C (2011) Processing
spatial relations with different apertures of attention. Cogn Sci
35:297–329
Milner AD, Goodale MA (1995) The visual brain in action. Oxford
University Press, Oxford
Milner AD, Goodale MA (2008) Two visual systems re-viewed.
Neuropsychologia 46:774–785
Neggers SFW, Schölvinck ML, van der Lubbe RHJ, Postma A (2005) Quantifying the interactions between allocentric and egocentric representations of space. Acta Psychol 118:25–45
Neggers SFW, van der Lubbe RHJ, Ramsey NF, Postma A (2006)
Interactions between ego- and allocentric neuronal representa-
tions of space. Neuroimage 31:320–331
O’Keefe J, Nadel L (1978) The hippocampus as a cognitive map.
Oxford University Press, Oxford
Okubo M, Laeng B, Saneyoshi A, Michimata C (2010) Exogenous
attention differentially modulates the processing of categorical
and coordinate spatial relations. Acta Psychol 135:1–11
Paillard J (1971) Les déterminants moteurs de l'organisation spatiale. Cahiers de Psychologie 14:261–316
Paillard J (1991) Brain and space. Oxford Science Publications,
Oxford
Possin KL (2010) Visual spatial cognition in neurodegenerative
disease. Neurocase 16:466–487
Ruotolo F, van der Ham IJM, Iachini T, Postma A (2011) The
relationship between allocentric and egocentric frames of
reference and categorical and coordinate spatial information
processing. Q J Exp Psychol 64:1138–1156. doi:10.1080/17470218.2010.539700
Schenk T (2006) An allocentric rather than perceptual deficit in
patient D. F. Nat Neurosci 9:1369–1370
Schenk T, McIntosh RD (2010) Do we have independent visual
streams for perception and action? Cogn Neurosci 1:52–78
Shelton AL, McNamara TP (2004) Spatial memory and perspective
taking. Mem Cogn 32:416–426
Sterken Y, Postma A, de Haan EHF, Dingemans A (1999) Egocentric
and exocentric spatial judgements of visual displacement. Q J
Exp Psychol 52:1047–1055
Vallar G, Lobel E, Galati G, Berthoz A, Pizzamiglio L, Le Bihan D
(1999) A fronto-parietal system for computing the egocentric
spatial frame of reference in humans. Exp Brain Res 124:
281–286
van der Ham IJM, van Zandvoort MJE, Frijns CJM, Kappelle LJ,
Postma A (2011) Hemispheric differences in spatial relation
processing in a scene perception task: a neuropsychological
study. Neuropsychologia 49:999–1005
Wraga M, Creem SH, Proffitt DR (2000) Perception-action dissociations of a walkable Müller-Lyer configuration. Psychol Sci 11:239–243
Zaehle T, Jordan K, Wüstenberg T, Baudewig J, Dechent P, Mast FW (2007) The neural basis of the egocentric and allocentric spatial frame of reference. Brain Res 1137:92–103
... Human beings represent spatial information by using egocentric (i.e., subject-toobject) and/or allocentric (i.e., object-to-object) frames of reference combined with categorical (e.g., left/right to) and/or coordinate (e.g., 1 mt far) spatial relations (Aguirre & D'Esposito, 1999;Bianchini et al., 2014;Burgess, 2006;Kosslyn, 2006;Lopez, O. Caffò, Spano, & Bosco, 2019;O'Keefe & Nadel, 1978;Paillard, 1991;Postma & De Haan, 1996;Ruotolo et al., 2019;Ruggiero, Ruotolo, & Iachini, 2012;Ruggiero, Frassinetti, Iavarone, & Iachini, 2014; for reviews see Colombo et al., 2017;Galati, Pelle, Berthoz, & Committeri, 2010). The combination of frames of reference and spatial relations gives rise to four basic spatial representations that allow to specify, for example, if 'a cup is at 30 cm to us or on our right' (egocentric-coordinate or egocentric-categorical combination, respectively) or if 'a cup is 30 cm from the spoon or on the left of the spoon' (allocentric-coordinate or allocentric-categorical combination, respectively) (Ruotolo, van der Ham, Postma, Ruggiero, & Iachini, 2015;Ruotolo, Iachini, Postma, & van der Ham, 2011;Ruotolo, Iachini, Ruggiero, van der Ham, & Postma, 2016). It has been suggested that action-related tasks should strongly involve egocentric-coordinate (Ego-Coor) representations. ...
... The fixed arm versus free arm condition was combined with a spatial memory task that explicitly required to retrieve coordinate or categorical spatial relations combined with egocentric or allocentric reference frames (Ego-Allo/Cat-Coor Task; Iachini & Ruggiero, 2006;Ruotolo et al., 2015). This task has been adopted in several previous studies to assess spatial memory in healthy adults (Iachini & Ruggiero, 2006;Ruotolo et al., 2011Ruotolo et al., , 2015, patients Ruggiero, Iavarone, & Iachini, 2018) and blind people (Ruggiero et al., 2012;Ruggiero, Ruotolo, & Iachini, 2018). ...
... The results of Experiment 1 revealed that participants were more accurate and faster with egocentric than allocentric judgements, and more accurate with categorical than coordinate spatial relations. These results are fully in line with previous evidence that showed an advantage for body-centred egocentric representations over object-centred allocentric representations (Iachini & Ruggiero, 2006;Ruggiero et al., 2014; for reviews see Colombo et al., 2017;Galati et al., 2010) and for categorical-invariant relations over coordinate-metric ones Ruggiero, Iavarone, et al., 2018;Ruotolo et al., 2011Ruotolo et al., , 2015Ruotolo et al., , 2016. Moreover, participants were more accurate and faster when having their arms free rather than fixed. ...
Article
Full-text available
Research on visuo-spatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited especially when processing spatial information in peripersonal (within arm reaching) than extrapersonal (outside arm reaching) spaces. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here we explored the role of motor resources on the combinations of these visuo-spatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging what was the object: 1) closest to/farthest from you (egocentric coordinate); 2) on your right/left (egocentric categorical); 3) closest to/farthest from an object (allocentric coordinate); 4) on the right/left of an object (allocentric categorical). The triads appeared in participants’ peripersonal (Experiment 1) or extrapersonal (Experiment 2) spaces. The results of Experiment 1 showed that motor interference selectively damaged egocentric coordinate judgments but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Overall, the findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.
... Coordinate spatial relations are based on a finegrained metric code that allows for precise distance discrimination between different positions, such as the object is closer to me or to the window. Instead, categorical spatial relations are based on a more abstract non-metric code, such as right/left, above/below [27][28][29][30]. We cannot process metric or non-metric spatial relations without specifying a frame of reference and vice versa. ...
... Therefore, the egocentric/allocentric and the categorical/coordinate components seem to reflect a flexible, complex and interactive organization that is modulated by the spatial task at hand [e.g. [28][29][30][31]. Importantly, these spatial representations seem to be supported by differently lateralized neural networks. ...
Article
Research has reported deficits in egocentric (subject-to-object) and mainly allocentric (object-to-object) spatial representations in the early stages of the Alzheimer’s disease (eAD). To identify early cognitive signs of neurodegenerative conversion, several studies have shown alterations in both reference frames, especially the allocentric ones in amnestic-Mild Cognitive Impairment (aMCI) and eAD patients. However, egocentric and allocentric spatial frames of reference are intrinsically connected with coordinate (metric/variant) and categorical (non-metric/invariant) spatial relations. This raises the question of whether allocentric deficit found to detect the conversion from aMCI to dementia is differently affected when combined with categorical or coordinate spatial relations. Here, we compared eAD and aMCI patients to Normal Controls (NC) on the Ego-Allo/Cat-Coor spatial memory task. Participants memorized triad of objects and then were asked to provide right/left (i.e. categorical) and distance based (i.e. coordinate) judgments according to an egocentric or allocentric reference frame. Results showed a selective deficit of coordinate, but not categorical, allocentric judgments in both aMCI and eAD patients as compared to NC group. These results suggest that a sign of the departure from normal/healthy aging towards the AD, may be traced in elderly people’s inability to represent and compared distances among elements in the space.
... In a first study aimed at understanding the relationship between FoR and SR processing, Ruotolo et al. (2011a), but see also Ruotolo et al. (2011b), asked participants to judge whether two 2-dimensional vertical bars were on the same side (categorical task) or at the same distance (coordinate task) with respect to their body midline (egocentric reference) or with respect to an horizontal bar (allocentric reference). Results showed that categorical judgments with respect to the allocentric reference were more accurate than all others. ...
... However, these conclusions are drawn from studies that differ not only in the response modality (visuomotor vs. visuo-perceptual), but also with respect to the stimuli and procedural details. For example, in Ruotolo et al. (2011a), only non-manipulable stimuli and an Immediate visuo-perceptual response were used. More importantly, some evidence, labeled by Foley et al. (2015) as the "perspectival accounts of visual experience," argues against the possibility that the mere use of visuo-perceptual tasks would favor allocentric rather than egocentric spatial representations due to the fundamentally egocentric nature of visual experience. ...
Article
Full-text available
The aim of this study was to explore how people use egocentric (i.e., with respect to their body) and allocentric (i.e., with respect to another element in the environment) references in combination with coordinate (metric) or categorical (abstract) spatial information to identify a target element. Participants were asked to memorize triads of 3D objects or 2D figures, and immediately or after a delay of 5 s, they had to verbally indicate what was the object/figure: (1) closest/farthest to them (egocentric coordinate task); (2) on their right/left (egocentric categorical task); (3) closest/farthest to another object/figure (allocentric coordinate task); (4) on the right/left of another object/figure (allocentric categorical task). Results showed that the use of 2D figures favored categorical judgments over the coordinate ones with either an egocentric or an allocentric reference frame, whereas the use of 3D objects specifically favored egocentric coordinate judgments rather than the allocentric ones. Furthermore, egocentric judgments were more accurate than allocentric judgments when the response was Immediate rather than delayed and 3D objects rather than 2D figures were used. This pattern of results is discussed in the light of the functional roles attributed to the frames of reference and spatial relations by relevant theories of visuospatial processing.
... Finally, in the present study only tasks requiring an egocentric (i.e., based on the body and participants' perspective) rather than an allocentric strategy (i.e., based on the relationship between environmental landmarks) were used [74,75]. As a consequence, we cannot generalize these results to the way individuals represent environmental knowledge like a map. ...
Article
Full-text available
This study assesses the influence of valence and arousal of element/landmarks along a route on the spatio-temporal representation of the route itself. Participants watched a movie of a virtual route containing landmarks with high arousal and positive (HP) or negative valence (HN), or landmarks with low arousal and positive (LP) or negative valence (LN). Afterwards, they had to (a) imagine walking distances between landmarks, (b) indicate the position of the landmarks along the route, (c) judge the spatial and temporal length of the route, and (d) draw the route. Results showed that the tasks were differentially influenced by the valence and arousal levels. Specifically, participants were more accurate in representing distances between positive, rather than negative, landmarks and in localizing positive high arousing landmarks. Moreover, the high arousing landmarks improved performance at the route drawing task. Finally, participants in the negative and low arousing conditions judged the route as being metrically and temporally longer than participants in positive and high arousing conditions. These results are interpreted in the light of theories about the effects of emotions on memory processes and the "feelings-as-information" theory. In brief, the results support the idea that representations of a route reflect a combination of cognitive and emotional processes.
... relative positions of objects such as left/right or front/behind) or coordinate information (i.e. distance between any two objects) [1,2]. Several studies have shown distinct neuronal networks are engaged with these two types of spatial encoding [3,4]. ...
Article
Full-text available
Background: Previous studies have reported that coordinate information (i.e. distance between any two objects in a specific direction) is encoded differently from Virtual Reality (VR) and physical scenes. However, the accuracy of encoding categorical information (i.e. relative positions of objects) from VR scenes has not been adequately investigated. During this study, we used a novel rotating visual scene to study the effects of aging, prior experience with VR, and dementia on the accuracy of encoding categorical information between physical and virtual environments. Methods: We recruited a cohort of 60 cognitively-healthy older adults, with and without previous VR experience (Experiment 1), as well as 18 older adults with mild to moderate Alzheimer disease (AD) (Experiment 2). During both of the experiments, the participants were asked to attend to a target window in a virtual or real small-scale model building (dependent upon group assignment) as the building was rotated around its vertical axis in depth of the scene. Participants were required to verbally judge the final position of the target in terms of direction (e.g., left, right, back, and front) with respect to the entrance of the buildings after the full rotation has stopped. A score was calculated for each participant based on s/her accuracy in locating the target window. Results: Healthy older adults succeeded in accurately localizing the target's position from both environments, whereas individuals with AD were only able to encode the target’s position from the physical environment. Conclusions: Our results suggest the inability to encode from a rotating VR scene might be a symptom of dementia.
... More specifically, the non-switching task involved the repetition of two questions concerning the same reference frame, whereas the switching task involved two questions concerning two different reference frames. This experimental paradigm is based on previous studies dealing with healthy adults [33][34][35][36][37][38], brain damaged patients [28,39], blind people [40][41][42], children with cerebral palsy [43,44], and has proved its efficacy in inducing a specific involvement of spatial frames of reference. Accuracy measured the performance. ...
Article
Full-text available
Objective: Deficits in egocentric (subject-to-object) and allocentric (object-to-object) spatial representations, with a mainly allocentric impairment, characterize the first stages of the Alzheimer's disease (AD). Methods: To identify early cognitive signs of AD conversion, some studies focused on amnestic-Mild Cognitive Impairment (aMCI) by reporting alterations in both reference frames, especially the allocentric ones. However, spatial environments in which we move need the cooperation of both reference frames. Such cooperating processes imply that we constantly switch from allocentric to egocentric frames and vice versa. This raises the question of whether alterations of switching abilities might also characterize an early cognitive marker of AD, potentially suitable to detect the conversion from aMCI to dementia. Here, we compared AD and aMCI patients with Normal Controls (NC) on the Ego-Allo-Switching spatial memory task. The task assessed the capacity to use switching (Ego-Allo, Allo-Ego) and non-switching (Ego-Ego, Allo-Allo) verbal judgments about relative distances between memorized stimuli. The novel finding of this study is the neat impairment shown by aMCI and AD in switching from allocentric to egocentric reference frames. Interestingly, in aMCI when the first reference frame was egocentric, the allocentric deficit appeared attenuated. Conclusion: This led us to conclude that allocentric deficits are not always clinically detectable in aMCI since the impairments could be masked when the first reference frame was body-centred. Alongside, AD and aMCI also revealed allocentric deficits in the non-switching condition. These findings suggest that switching alterations would emerge from impairments in hippocampal and posteromedial areas and from concurrent dysregulations in the locus coeruleus-noradrenaline system or pre-frontal cortex.
Article
Full-text available
An action with an object can be accomplished only if we encode the position of the object with respect to our body (i.e., egocentrically) and/or to another element in the environment (i.e., allocentrically). However, some actions with the objects are directed towards our body, such as brushing our teeth, and others away from the body, such as writing. Objects can be near the body, that is within arm reaching, or far from the body, that is outside arm reaching. The aim of this study was to verify if the direction of use of the objects influences the way we represent their position in both near and far space. Objects typically used towards (TB) or away from the body (AB) were presented in near or far space and participants had to judge whether an object was closer to them (i.e., egocentric judgement) or closer to another object (i.e., allocentric judgement). Results showed that egocentric judgements on TB objects were more accurate in near than in far space. Moreover, allocentric judgements on AB objects were less accurate than egocentric judgements in near space but not in far space. These results are discussed with respect to the different roles that visuo-motor and visuo-spatial mechanisms play in near space and far space, respectively.
Article
Full-text available
p>The cognitive representation of the environment is formed using cognitive systems that process data on spatial representations of two types: egocentric, encoding the position of environmental objects relative to the observer, and allocentric, encoding the position of objects relative to each other, regardless of the position of the observer. Data on spatial representations were studied mainly in problems of memorization and reconstruction of static scenes. However, the task of processing information about dynamic scenes in everyday life has a higher ecological validity. We used HMD virtual reality technologies to study the accuracy of the formation of egocentric and allocentric spatial representations of static and dynamic scenes in working memory. The subjects were presented 8 three-dimensional virtual scenes of 4 objects each for 10 seconds in static and dynamic conditions for memorization and reconstruction. Identification accuracy (number of correctly reconstructed objects) and localization accuracy (accuracy of spatial scene reconstruction) were assessed. Localization accuracy was assessed in topological units, corresponding to the accuracy of the representation of the general configuration of objects in the scene (global topological information), and in metric units, corresponding to the accuracy of the representation of the spatial coordinates of each object (local metric information). The results showed that object identification accuracy was similar in static and dynamic conditions; the processes of encoding metric local information during the formation of both types of representations of dynamic scenes worsen compared to static ones; the accuracy of encoding topological global information remains stable compared to the static condition. 
We can conclude that the visual and spatial systems operate independently as part of a general cognitive system that processes data on spatial representations in time-limited working memory, as well as the redistribution of its resource in dynamic condition for supporting topological data of the holistic configuration of moving objects more, than metric data. The results highlight the importance of topological spatial characteristics of spatial representations for processes of early spatial perception, decision making, and action in the environment.</p
Article
Full-text available
Spatial relations (SRs: coordinate/metric vs categorical/non metric) and frames of reference (FoRs: egocentric/body vs allocentric/external element) represent the building blocks underlying any spatial representation. In the present 7T fMRI study we have identified for the first time the neural correlates of the spatial representations emerging from the combination of the two dimensions. The direct comparison between the different spatial representations revealed a bilateral fronto-parietal network, mainly right sided, that was more involved in the egocentric categorical representations. A right fronto-parietal circuitry was specialized for egocentric coordinate representations. A bilateral occipital network was more involved in the allocentric categorical representations. Finally, a smaller part of this bilateral network (i.e. Calcarine Sulcus and Lingual Gyrus), along with the right Supramarginal and Inferior Frontal gyri, supported the allocentric coordinate representations. The fact that some areas were more involved in a spatial representation than in others reveals how our brain builds adaptive spatial representations in order to effectively react to specific environmental needs and task demands.
Chapter
Full-text available
After a brief description of how visual information travels from the retina to the cortex, two fundamental distinctions within visuospatial perception are discussed. First, spatial relations between objects can be represented either categorically, "left of" or "above," or coordinately, in which metric distances are taken into account. These two types of representations are dissociated in terms of neural correlates, regardless of stimulus type and precise task at hand. Recent findings indicate that also the scope of attention as used during spatial relation processing affects this dissociation.The second distinction is between egocentric (ie, body-based) and allocentric (ie, scene/object-based) frames of reference. Behavioral and neural evidence supporting the existence of the two frames of reference is reported and their functional role within the perception-action model by Milner and Goodale (1995) is discussed. Final, several experiments exploring the interaction between coordinate and categorical spatial relations and egocentric and allocentric frames of reference are presented.
Article
Full-text available
Results of 4 sets of neural network simulations support the distinction between categorical and coordinate spatial relations representations: (a) Networks that were split so that different hidden units contributed to each type of judgment performed better than unsplit networks; the reverse was observed when they made 2 coordinate judgments. (b) Both computations were more difficult when finer discriminations were required; this result mirrored findings with human Ss. (c) Networks with large, overlapping “receptive fields” performed the coordinate task better than did networks with small, less overlapping receptive fields, but vice versa for the categorical task; this suggests a possible basis for observed cerebral lateralization of the 2 kinds of processing. (d) The previously observed effect of stimulus contrast on this hemispheric asymmetry could reflect contributions of more neuronal input in high-contrast conditions.
Article
Full-text available
This paper reports a study of how familiarity and gender may influence the frames of reference used in memory to represent a real-world regularly shaped environment. Familiar and unfamiliar participants learned the locations of three triads of buildings by walking on a path which encircled each triad. Then they were shown with maps reproducing these triads at five different orientations (from 0° to 180°) and had to judge whether each triad represented correctly the relative positions between the buildings. Results showed that unfamiliar participants performed better when the orientation of triads was closer to the learning perspective (0° and 45°) and corresponded to front rather than to back positions. Instead, familiar participants showed a facilitation for triads oriented along orthogonal axes (0°–180°, 90°) and no difference between front and back positions. These findings suggested that locations of unfamiliar buildings were mentally represented in terms of egocentric frames of reference; instead, allocentric frames of reference defined by the environment were used when the environment was familiar. Finally, males were more accurate and faster than females, and this difference was particularly evident in participants unfamiliar with the environment.
Article
First published in 1995, this book presents a model for understanding the visual processing underlying perception and action, proposing a broad distinction within the brain between two kinds of vision: conscious perception and unconscious 'online' vision. It argues that each kind of vision can occur quasi-independently of the other, and is separately handled by a quite different processing system. For this new edition, the text from the original edition has been left untouched, standing as a coherent statement of the authors' position. However, a very substantial epilogue has been added to the book, which reviews some of the key developments that support or challenge the views that were put forward in the first edition. The new chapter summarizes developments in various relevant areas of psychology, neuroscience, and behaviour. It supplements the main text by updating the reader on the contributions that have emerged from the use of functional neuroimaging, which was in its infancy when the first edition was written. Neuroimaging, and functional MRI in particular, has revolutionized the field by allowing investigators to plot in detail the patterns of activity within the visual brains of behaving and perceiving humans. The authors show how its use now allows scientists to test and confirm their proposals, based largely on evidence accrued from primate neuroscience in conjunction with studies of neurological patients.
Article
How is space represented in the brain? How are spatial relationships encoded in the neural network so as to frame our perception and to orient and guide our actions? How are mental images of the outside world generated? Although these questions have caused endless philosophical controversy, it is only recently that neurophysiology has advanced sufficiently to provide a sound scientific basis for the subject. In this book, leading authorities in the field describe their latest research, and provide new theoretical insights for the understanding of spatial relationships and cognition. The book is divided into five sections. The first is devoted to oculomotor control, linking the problem of gaze control to that of sensorimotor mapping of the visual space; the second deals with neural control of skeletal movements; the third discusses the contribution of the cortical parietal association areas to the mapping of spatial information of multimodal origin (with emphasis on the neuropsychology of spatial disorders); the fourth highlights the role of hippocampal structures in cognitive mapping of space and in spatial memory; and the final section examines how neural networks can map spatial relationships and generate internal representations of the physical world.
Article
The perception-action model proposes that vision-for-perception and vision-for-action are based on anatomically distinct and functionally independent streams within the visual cortex. This idea can account for diverse experimental findings, and has been hugely influential over the past two decades. The model itself comprises a set of core contrasts between the functional properties of the two visual streams. We critically review the evidence for these contrasts, arguing that each of them has either been refuted or found limited empirical support. We suggest that the perception-action model captures some broad patterns of functional localization, but that the specializations of the two streams are relative, not absolute. The ubiquity and extent of inter-stream interactions suggest that we should reject the idea that the ventral and dorsal streams are functionally independent processing pathways.
Article
Sixty patients with unilateral stroke (half with left hemisphere damage and half with right hemisphere damage) and a control group (N = 15) matched for age and educational level were tested in two experiments. In one experiment they were first shown, on each trial, a sample drawing depicting one or more objects. Following a short delay, they were asked to identify the drawing when it was paired with a drawing in which the same object(s) was transformed in categorical or coordinate spatial relations. In the other experiment, the same subjects first were shown, on each trial, a sample drawing. They then judged which of two variants (each in one type of spatial relation) looked more similar to the sample drawing. Typically, patients with left-sided stroke mistakenly identified the categorical transformation for the sample drawing in the first task; in the second task, they judged the categorical transformation as more similar to the sample drawing. Patients with right-sided stroke mistakenly identified the coordinate transformations for the sample drawing in the first task, and, in the second task, typically judged the drawings transformed along coordinate spatial relations as more similar to the sample drawing. These findings provide evidence for complementary lateralization of the two types of spatial perception. It can therefore be inferred that separate functional subsystems process the two types of spatial relations.
Article
Spatial orientation is based on coordinates referring to the subject’s body. A fundamental principle is the mid-sagittal plane, which divides the body and space into the left and right sides. Its neural bases were investigated by functional magnetic resonance imaging (fMRI). Seven normal subjects pressed a button when a vertical bar, moving horizontally, crossed the subjective mid-sagittal plane. In the control condition, the subjects’ task was to press a button when the direction of the bar movement changed, at the end of each leftward or rightward movement. The task involving the computation of the mid-sagittal plane yielded increased signal in posterior parietal and lateral frontal premotor regions, with a more extensive activation in the right cerebral hemisphere. This direct evidence in normal human subjects that a bilateral, mainly right hemisphere-based, cortical network is active during the computation of the egocentric reference is consistent with neuropsychological studies in patients with unilateral cerebral lesions. Damage to the right hemisphere, more frequently to the posterior-inferior parietal region, may bring about a neglect syndrome of the contralesional, left side of space, including a major rightward displacement of the subjective mid-sagittal plane. The existence of a posterior parietal-lateral premotor frontal network concerned with egocentric spatial reference frames is also in line with neurophysiological studies in the monkey.