Article

Attentional selection during preparation of prehension movements


Abstract

In two experiments, the coupling between dorsal attentional selection for action and ventral attentional selection for perception during the preparation of prehension movements was examined. In a dual-task paradigm, subjects had to grasp an X-shaped object with the thumb and index finger of either the left or the right hand. Simultaneously, a discrimination task was used to measure visual attention prior to the execution of the prehension movements: mask items transiently changed into distractors or discrimination targets. There was exactly one discrimination target per trial, which appeared at one of the four branch ends of the object. In Experiment 1 the target position varied randomly, while in Experiment 2 it was constant and known to subjects in each block of trials. In both experiments, discrimination performance was significantly better at to-be-grasped branch ends than at not-to-be-grasped branch ends. We conclude that during the preparation of prehension movements, visual attention is largely confined to those parts of an object that will be grasped.


... Similar approaches have more recently been applied to manual movements. Enhanced processing at goal locations has been demonstrated during the preparation of pointing (Baldauf and Deubel, 2008b; Deubel et al., 1998), reaching (Baldauf and Deubel, 2008a; Baldauf et al., 2006) and grasping (Gilster et al., 2012; Schiegg et al., 2003) movements. ...
... Whilst this work shows that perceptual enhancement at the effector location can happen in principle, their pattern of results suggests that this effect may be limited to either goals or effectors, dependent upon top-down factors such as task instructions. This interpretation is inconsistent with findings that suggest that the coupling between action and perception at the goal location is obligatory (Deubel and Schneider, 1996; Hoffman and Subramaniam, 1995; Schiegg et al., 2003; Schneider and Deubel, 1995, 2002; though see also Belopolsky and Theeuwes, 2009; Deubel, 2008; Hunt and Kingstone, 2003, for arguments to the contrary). A possible explanation may be found in terms of the time course of motor preparation: if the temporal sequence of goal and effector processing differs (perhaps due to differing underlying mechanisms, e.g. ...
... A task-irrelevant visual 'dot probe' presented in the centre of one of the circle stimuli was used to measure visual processing: following the cue, after an SOA of 100, 200 or 300 ms (chosen based upon both previously published behavioural work, e.g. Schiegg et al., 2003, and pilot data), a small white circle measuring 2° of visual angle was presented via the monitor but appeared to originate on the movement console. This probe stimulus was presented for 100 ms, equi-probably but pseudo-randomly, at either one of the goal or one of the effector locations. ...
... Discrimination performance was higher when the R1 pointing target matched the display location of S2, indicating a motorvisual facilitation effect. An analogous facilitation effect has also been observed for cyclically arranged response and stimulus locations (Paprotta et al., 1999) and for grasping movements (Schiegg et al., 2003). Schiegg et al. (2003) have shown that a two-finger grasping movement toward an object (R1) directs visual attention toward the two parts of the object that are to be touched by the two fingers (see Baldauf & Deubel, 2008, Experiment 1, for analogous results concerning bimanual pointing). ...
... Two previous models, the visual attention model (Schneider, 1995) and the premotor view of attention (Rizzolatti et al., 1987), have postulated that action planning facilitates perceptual processing at action compatible locations. Both models were initially developed to explain the attentional effects of saccades (Deubel & Schneider, 1996; Umiltà, Mucignat, Riggio, Barbieri, & Rizzolatti, 1994) but have later been developed also to account for attentional effects of manual movements (Craighero, Bello, Fadiga, & Rizzolatti, 2002; Schiegg et al., 2003) and for spatial dimensions other than location, such as orientation (Craighero et al., 2002; Craighero, Fadiga, Rizzolatti, & Umiltà, 1999). In this respect, these models are similar to the PCM. ...
Article
Full-text available
Previous research on dual-tasks has shown that, under some circumstances, actions impair the perception of action-consistent stimuli, whereas, under other conditions, actions facilitate the perception of action-consistent stimuli. We propose a new model to reconcile these contrasting findings. The planning and control model (PCM) of motorvisual priming proposes that action planning binds categorical representations of action features so that their availability for perceptual processing is inhibited. Thus, the perception of categorically action-consistent stimuli is impaired during action planning. Movement control processes, on the other hand, integrate multi-sensory spatial information about the movement and, therefore, facilitate perceptual processing of spatially movement-consistent stimuli. We show that the PCM is consistent with a wider range of empirical data than previous models on motorvisual priming. Furthermore, the model yields previously untested empirical predictions. We also discuss how the PCM relates to motorvisual research paradigms other than dual-tasks.
... So far, this theory is mainly supported by the finding that the preparation of a spatio-motor action binds the attentional mechanisms in visual perception to the movement target. For example, while preparing a pointing or grasping movement, visual discrimination performance is increased at the selected movement positions, whereas the discrimination performance is reduced at positions which are not associated with an upcoming movement (e.g., Baldauf, Wolf, & Deubel, 2006; Deubel & Schneider, 2004; Deubel, Schneider, & Paprotta, 1998; Schiegg, Deubel, & Schneider, 2003). Thus, the sensorimotor system seems to selectively allocate attention to relevant movement-related positions in space when planning a movement. ...
... We chose different presentation times in order to prevent participants from predicting the occurrence of the target digit during the experiment. Furthermore, we aimed at presenting the target during the time the movement was initiated since the movement programming phase is supposed to be most crucial for the distribution of attentional capacities (Baldauf & Deubel, 2010; Schiegg et al., 2003). The mean RT associated with cued prehension is approximately 450 ms according to Jakobson and Goodale (1991). ...
... It has repeatedly been shown that visual attention is allocated to the target positions of reaching and grasping movements when preparing an action, suggesting a coupling between selection for action and selection for perception in these tasks (Baldauf & Deubel, 2008; Deubel et al., 1998; Schiegg et al., 2003). The main purpose of this study was to examine whether there is also the inverse effect of withdrawing spatial attention from a grasping task on movement kinematics. ...
Article
We investigated the effects of visuo-spatial attention on the kinematics of grasping movements by employing a dual-task paradigm. Participants had to grasp cylindrical objects of different sizes (motor task) while simultaneously identifying a target digit presented at a different spatial location within a rapid serial visual presentation (perceptual task). The grasping kinematics in this dual-task situation were compared with those measured in a single-task condition. Likewise, the identification performance was also measured in a single-task condition. Additionally, we kept the visual input constant across conditions by asking participants to fixate. Without instructions about the priority of tasks (Experiment 1), participants showed a considerable drop of identification performance in the dual-task condition. Regarding grasping kinematics, the concurrent perceptual task resulted in a less accurate adaptation of the grip to object size in the early phase of the movement, while movement times and maximum grip aperture were unaffected. When participants were instructed to focus on the perceptual task (Experiment 2), the identification performance stayed at about the same level in the dual-task and the single-task conditions. The perceptual improvement was however associated with a further decrease in the accuracy of the early grip adjustment. We conclude that visual attention is needed for the effective control of the grasp kinematics, especially for a precise adjustment of the hand to object size when approaching the object.
... Schiegg and colleagues (Schiegg, Deubel, & Schneider, 2003) directly probed the spatial and temporal properties of covert visual attention when participants were required to grasp a wooden cross with their thumb and index finger (see Fig. 4A). The participants were asked to keep fixation on the object's centre. ...
... Presumably, minimizing the aperture of the moving hand is a clever strategy to minimize the risk of collision with the obstacle while the hand is in flight. Schiegg et al. (2003) used a dual-task paradigm to map visual attention at various surface points of a cross-shaped, to-be-grasped object. Before the reach-to-grasp movement was initialised, 150 ms after the onset of the go-signal, a discrimination stimulus ('E' versus '$') was briefly presented at a random position among distractors. ...
... Discrimination performance at the various positions served as a measure for the allocation of visual attention. (B) Discrimination performance at the intended points of application for the thumb and the index finger was superior to the discrimination performance at the other, action-irrelevant points of application (adapted from Schiegg et al., 2003). ...
Article
It is well established that during the preparation and execution of goal-directed movements, perceptual processing is biased towards the goal. Most of the previous work on the relation between action and attention has focused on rather simple movements, such as single saccades or manual reaches towards a single target. Here we review recent behavioural and neurophysiological studies on manual actions that require more than a single spatial location to be considered in the planning of the response, such as movement sequences, grasping, and movements around obstacles. The studies provide compelling evidence that the preparation of these actions establishes multiple foci of attention which reflect the spatial-temporal requirements of the future action. The findings help clarify how perceptual processing is bound by action preparation and also offer new perspectives for motor control research.
... At the behavioral level, evidence in favor of an obligatory attention-action coupling has come primarily from psychophysical dual-task studies requiring participants to perform goal-directed actions toward cued placeholder stimuli, while premotor attention allocation is probed by flashing a discrimination target either at the motor target or at a different position. A consistent finding of these studies was that discrimination performance is selectively enhanced when the attention probe and the target of a saccade (Deubel, 2008; Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Jonikaitis & Deubel, 2011) or manual movement (Deubel, Schneider, & Paprotta, 1998; Jonikaitis & Deubel, 2011; Schiegg, Deubel, & Schneider, 2003) spatially coincide compared to when they diverge. Notably, this spatial congruency effect was still observed when experimental conditions provided an incentive to withdraw attention from the motor target (Deubel, 2008; Deubel & Schneider, 1996; Deubel, Schneider, & Paprotta, 1998; Schiegg, Deubel, & Schneider, 2003), indicating that attention allocation toward targets of upcoming goal-directed movements is mandatory. ...
... Indeed, a very recent dual-task study (Hanning et al., 2022) affirmed these earlier observations by demonstrating that attention can be deployed to distinct eye and hand movement targets in parallel and without cost, whereas the preparation of these movements cumulatively deteriorates the capacity to attend to movement-irrelevant, yet highly task-relevant, objects. ...
Article
Full-text available
Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling.
... Preparation of a manual motor response, such as grasping, improves perceptual performance on the side of space of the grasp (Deubel et al. 1998) and is linked to attentional selection on that side of space (Schiegg et al. 2003). This tight coupling of motor programming and visuospatial attention forms the basis of pre-motor theories of attention, which postulate that shared neural substrates underlie spatial attention selection and motor planning processes (Hommel 2004;Rizzolatti et al. 1987). ...
... In the current study, knowledge of the upcoming action state for a particular hand may have affected attentional orienting by one of these mechanisms. Evidence for effects of premotor programming on visual attention has been observed by probing attention in the programming stages of both reaching (Deubel et al. 1998) and grasping tasks (Schiegg et al. 2003). Thus, it is possible that the initial planning of the hand-pinch in the current study may have affected orienting by remaining active throughout the trial. ...
Article
Full-text available
Recent studies have documented that the hand’s ability to perform actions affects the visual processing and attention for objects near the hand, suggesting that actions may have specific effects on visual orienting. However, most research on the relation between spatial attention and action focuses on actions as responses to visual attention manipulations. The current study examines visual attention immediately following an executed or imagined action. A modified spatial cuing paradigm tested whether a brief, lateralized hand-pinch performed by a visually hidden hand near the target location, facilitated or inhibited subsequent visual target detection. Conditions in which hand-pinches were fully executed (action) were compared to ones with no hand-pinch (inaction) in Experiment 1 and imagined pinches (imagine) in Experiment 2. Results from Experiment 1 indicated that performed hand pinches facilitated rather than inhibited subsequent detection responses to targets appearing near the pinch, but target detection was not affected by inaction. In Experiment 2, both action and imagined action conditions cued attention and facilitated responses, but along differing time courses. These results highlight the ongoing nature of visual attention and demonstrate how it is deployed to locations even following actions.
... The first hypothesis is based on the role of visual attention in spatial motor control, proposed by two influential models which posit the occurrence of a strong link between sensory and motor representations of action. According to the "Visual-Attention-Model" of Schneider (1995), the abrupt onset of a salient visual stimulus engages a "selection-for-action" process (Allport, 1987), leading to the simultaneous programming in the dorsal visual stream of possible spatial motor actions (saccades, pointing, reaching, grasping) toward the same target (Schneider and Deubel, 2002; Schiegg et al., 2003). A similar prediction has also been suggested by the "Premotor Theory" of spatial attention (Rizzolatti et al., 1987). ...
... These data clearly show that, indeed, the allocation of visual attention to a spatial location activates the motor map of the arm. However, the lack of muscle and direction specificity argues against the possibility that this activation is compatible with attentional selection for the preparation of a manual movement (Schiegg et al., 2003) or with a space representation within the hand 'pragmatic map' (Rizzolatti et al., 1994; Schneider and Deubel, 2002). Furthermore, the buildup of CSS excitability preceding the eye response is reminiscent of the increase of activity described in the frontal eye fields and superior colliculus during the decision-making process for saccade initiation (Hanes and Schall, 1996; Sparks et al., 2000; Brown et al., 2008; Schall et al., 2011; Jantz et al., 2013). ...
Article
Full-text available
It has been recently demonstrated that visually guided saccades are linked to changes in muscle excitability in the relaxed upper limb, which are compatible with a covert motor plan encoding a hand movement toward the gaze target. In this study we investigated whether these excitability changes are time locked to the visual stimulus, as predicted by influential attention models, or are strictly dependent on saccade execution. Single-pulse transcranial magnetic stimulation was applied to the motor cortex at eight different time delays during a ‘go’/‘no-go’ task, which involved overt or covert orienting of attention. By analyzing the time course of excitability in three hand muscles, synchronized with the onset of either the attentional cue or the eye movement, we demonstrated that side- and muscle-specific excitability changes were strictly time locked to the saccadic response and were not correlated to the onset of the visual attentive stimulus. Furthermore, muscle excitability changes were absent following a covert shift of attention. We conclude that a sub-threshold manual motor plan is automatically activated by the saccade decision-making process, as part of a covert eye-hand coordination program. We found no evidence for a representation of spatial attention within the upper limb motor map.
... These findings show that covert attention can be distributed not only at one location, as overt attention, but rather simultaneously forms a complex "attentional landscape" in the visual field. Schiegg et al. found that covert attention can be split into multiple foci that are deployed in a way to pinpoint individual locations of intended contact points of the fingers during precision grasping [39]. The experiments with nonhuman primates have shown that visual receptive fields can even adapt after several minutes of the tool use by elongating their shape to covertly overlay the tool held in the hand [40], [41]. ...
... Deubel and Schneider found that deployment of covert visual attention at an obstacle occurs when the obstacle obstructs intended arm movements, however, in cases when it does not obstruct intended manipulation it is not covertly attended [57]. Deployment of covert attention could be modulated by motor plans as tightly as to support planned finger movements during grasping [39]. ...
Article
Full-text available
We present a novel, biologically inspired approach to the efficient allocation of visual resources for humanoid robots in the form of a motor-primed visual attentional landscape. The attentional landscape is a more general, dynamic, and more complex concept of an arrangement of spatial attention than the popular "attentional spotlight" or "zoom-lens" models of attention. Motor-priming of attention is a mechanism for prioritizing visual processing to motor-relevant parts of the visual field, in contrast to other, motor-irrelevant, parts. In particular, we present two techniques for constructing a visual "attentional landscape". The first, more general, technique is to devote visual attention to the reachable space of a robot (peripersonal space-primed attention). The second, more specialized, technique is to allocate visual attention with respect to the motor plans of the robot (motor plans-primed attention). Hence, in our model, visual attention is not exclusively defined in terms of visual saliency in color, texture or intensity cues; it is rather modulated by motor information. This computational model is inspired by recent findings in visual neuroscience and psychology. In addition to the two approaches to constructing the attentional landscape, we present two methods for using the attentional landscape for driving visual processing. We show that motor-priming of visual attention can be used to very efficiently distribute the limited computational resources devoted to visual processing. The proposed model is validated in a series of experiments conducted with the iCub robot, both using the simulator and the real robot.
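The motor-priming idea above can be illustrated as a weighted combination of a bottom-up saliency map with a motor-relevance mask (e.g., the robot's reachable peripersonal space). This is a minimal sketch, not the authors' implementation: the function name, the linear weighting scheme, and the toy maps are illustrative assumptions.

```python
import numpy as np

def attentional_landscape(saliency, motor_mask, motor_weight=0.7):
    """Combine bottom-up saliency with a motor-relevance mask.

    saliency:   2-D array of bottom-up visual saliency in [0, 1].
    motor_mask: 2-D array marking motor-relevant regions in [0, 1]
                (e.g., reachable space, or planned grasp points).
    Returns a normalised priority map; limited processing resources
    can then be allocated in proportion to its values.
    """
    landscape = (1.0 - motor_weight) * saliency + motor_weight * motor_mask
    return landscape / landscape.max()  # normalise to [0, 1]

# Toy example: a 4x4 visual field with uniform saliency, where only
# the lower-left quadrant is reachable by the robot's arm.
saliency = np.full((4, 4), 0.5)
reachable = np.zeros((4, 4))
reachable[2:, :2] = 1.0
priority = attentional_landscape(saliency, reachable)
```

Under this sketch, regions inside the reachable quadrant receive a higher priority than equally salient but unreachable regions, capturing the peripersonal space-primed variant; substituting a mask derived from planned grasp points would give the motor plans-primed variant.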
... Thus a substantial body of evidence exists in favour of a close relationship between the systems involved in saccade planning and attention shifts. Similarly, behavioral studies have also demonstrated that attention shifts to goals for other motor actions as well, for instance to reaching goals (Bekkering & Pratt, 2004; Linnell et al., 2005), or to grasping targets (Fischer & Hoellen, 2004; Schiegg, Deubel & Schneider, 2003). While it is not clear whether the areas involved in selection for reaching or grasping are associated with attentional allocation, some imaging studies suggest that there is an overlap between the areas that are active in pointing and in attentional tasks (Simon et al., 2002). ...
... It is now well established that the selection of a stimulus as the goal of a movement is related to an attention shift to the movement target. A number of studies have shown that these attention shifts precede the initiation of goal-directed saccades, reaching movements and grasping (Schiegg, Deubel & Schneider, 2003). ...
... The visual nature of orientation representations is consistent with the fact that visually guided grasping requires vision-based grasp point computations (Blake, 1992) and causes the attentional spotlight to split into two regions near the grasp points (Schiegg et al., 2003). Further, occipital and parieto-occipital electrodes being involved in orientation representations suggests that the underlying processes recruited occipitotemporal and occipitoparietal areas, all of which play a role in extracting grasp-relevant object information and action selection (Astafiev et al., 2004; Rice et al., 2007; Monaco et al., 2014; Fabbri et al., 2016) or abstract action representation (Tucciarelli et al., 2015). ...
Article
Full-text available
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate, and to lower levels, given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay we instructed participants about which effector(s) to use to grasp, either the right or the left hand. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations that were independent of effectors to motor representations that distinguished between effectors. However, we found that intermediate representations of effectors that partially distinguished between effectors arose after representations that distinguished between all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.
Significance Statement: A longstanding assumption of grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute an unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially non-canonical progression from higher to lower, and then to intermediate-level grasp control.
... Goal-directed movements shape the deployment of visuospatial attention: the goal of an action is attended before the movement starts, increasing sensitivity at that location relative to other locations (for reviews see Deubel, 2014; Zhao et al., 2012). The coupling of perceptual spatial attention to visual goal locations during action planning is obligatory and has not only been observed for eye movements (e.g., Castet et al., 2006; Deubel & Schneider, 1996; Hanning et al., 2019; Hoffman & Subramaniam, 1995; Kowler et al., 1995; Li et al., 2016; Montagnini & Castet, 2007; Rolfs et al., 2011; Rolfs & Carrasco, 2012), for which the link with visual attention might be expected to be particularly strong, but also for hand movements such as reaching and grasping (e.g., Baldauf & Deubel, 2008b, 2010; Deubel et al., 1998; Rolfs et al., 2013; Schiegg et al., 2003; Stewart et al., 2019). All of these studies employed dual-task paradigms, in which participants had to perform a specific movement in combination with a visual task that requires the detection, discrimination, or identification of a target stimulus presented briefly before the onset of the movement. ...
Article
Full-text available
Perception is shaped by actions, which determine the allocation of selective attention across the visual field. Here, we review evidence that maintenance in visual working memory is similarly influenced by actions (eye or hand movements), planned and executed well after encoding: Representations that are relevant for an upcoming action – because they spatially correspond to the action goal or because they are defined along action-related feature dimensions – are automatically prioritised over action-irrelevant representations and held in a stable state. We summarise what is known about specific characteristics and mechanisms of selection-for-action in working memory, such as its temporal dynamics and spatial specificity, and delineate open questions. This newly-burgeoning area of research promotes a more functional perspective on visual working memory that emphasizes its role in action control.
... Once these basic insights are available, they can be combined with other, compatible, modeling studies that explain, in addition, how head-centered spatial coordinates are transformed through learning into body-centered spatial coordinates, both to control movement-invariant shrouds for invariant object category learning and to control arm reaching movements to the same attended positions in space to which the eyes move (Y. E. Cohen & Andersen, 2002; Deubel, Schneider, & Paprotta, 1998; Schiegg, Deubel, & Schneider, 2003; Schneider & Deubel, 2002). How such a body-centered representation may be learned in real time using outflow neck position signals in addition to the outflow eye target position signals in Fig. 27 has been modeled in Guenther, Bullock, Greve, and Grossberg (1994). ...
Article
Full-text available
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface–shroud resonances support conscious seeing and action, whereas feature–category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure–ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive–emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. 
Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
... Further extensions of this paradigm have tested whether attention can be simultaneously allocated to up to two or three action-relevant locations in the scene. In particular, when subjects plan multiple sequential eye movements (Baldauf & Deubel, 2008a), sequential reach movements (Baldauf & Deubel, 2009; Baldauf, Wolf, & Deubel, 2006), or bimanual grasps (Schiegg, Deubel, & Schneider, 2003; Baldauf & Deubel, 2008b), visual selection is enhanced for all of the action-relevant locations (Baldauf & Deubel, 2010). ...
... A number of studies examining the properties of selective reach-to-grasp actions in humans have shown that in some situations motor systems have to process information about the surroundings located near a target. Selection-for-action paradigms (Chieffi et al. 1993; Jackson et al. 1995; Castiello 1996, 1998; Howard and Tipper 1997; Bonfiglioli and Castiello 1998; Tresilian 1998; Deubel et al. 1996, 1998; Schiegg et al. 2003; Kritikos et al. 2000) have revealed that the location of the objects surrounding a target determines changes at the motor execution level. More specifically, the kinematic properties of reaching movements evoked by nearby objects have been found to contaminate those evoked by the target (Castiello 1999). ...
Article
Full-text available
When a monkey selects a piece of food lying on the ground from among other viable objects in the near vicinity, only the desired item governs the particular pattern and direction of the animal’s reaching action. It would seem then that selection is an important component controlling the animal’s action. But, we may ask, is the selection process in such cases impervious to the presence of other objects that could constitute potential obstacles to or constraints on movement execution? And if it is, in fact, pervious to other objects, do they have a direct influence on the organization of the response? The kinematics of macaques’ reaching movements were examined by the current study that analysed some exemplars as they selectively reached to grasp a food item in the absence as well as in the presence of potential obstacles (i.e., stones) that could affect the arm trajectory. Changes in movement parameterization were noted in temporal measures, such as movement time, as well as in spatial ones, such as paths of trajectory. Generally speaking, the presence of stones in the vicinity of the acting hand stalled the reaching movement and affected the arm trajectory as the hand veered away from the stone even when it was not a physical obstacle. We concluded that nearby objects evoke a motor response in macaques, and the attentional mechanisms that allow for a successful action selection are revealed in the reaching path. The data outlined here concur with human studies indicating that potential obstacles are internally represented, a finding implying basic cognitive operations allowing for action selection in macaques.
... Another reason for comparing gaze and finger foraging is the hypothesized relation between eye movements and visual attention (Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler, Anderson, Dosher, & Blaser, 1995; Kristjánsson, 2007, 2011; Kristjánsson, Chen, & Nakayama, 2001; Kustov & Robinson, 1996). While many studies show that similar relations hold for attention and finger control (Bekkering & Neggers, 2002; Deubel & Schneider, 2004; Eimer, Van Velzen, Gherri, & Press, 2006; Schiegg, Deubel, & Schneider, 2003), Jonikaitis and Deubel (2011) argued that attentional resources are allocated independently to eye and hand movement targets, suggesting that the goals for the two are selected by separate mechanisms. ...
Article
Full-text available
A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.
... The preparatory (intention-deriving) phase, as well as the post-movement phase, nevertheless appears to benefit from enhancement in perceptual tasks. For example, higher visual acuity has been demonstrated at the goal location of an upcoming saccade (Deubel and Schneider, 1996), as well as at that of a prehensile movement (Schiegg et al., 2003), even before the actual saccade or hand movement has been initiated. ...
Article
Full-text available
In the present study, we investigated whether indenting the sides of novel objects (e.g., product packaging) would influence where people grasp, and hence focus their gaze, under the assumption that gaze precedes grasping. In Experiment 1, the participants grasped a selection of custom-made objects designed to resemble typical packaging forms with an indentation in the upper, middle, or lower part. In Experiment 2, eye movements were recorded while the participants viewed differently-sized (small, medium, and large) objects with the same three indentation positions tested in Experiment 1, together with a control object lacking any indentation. The results revealed that irrespective of the location of the indentation, the participants tended to grasp the mid-region of the object, with their index finger always positioned slightly above its midpoint. Importantly, the first visual fixation tended to fall in the cap region of the novel object. The participants also fixated for longer in this region. Furthermore, participants saccaded more often, as well as saccading more rapidly, when directing their gaze to the upper region of the objects that they were required to inspect visually. Taken together, these results therefore suggest that different spatial locations on target objects are of interest to our eyes and hands.
... Similarly, preparing a manual grasping gesture whose biomechanical properties are compatible with those required to grasp the visual stimulus improves perceptual performance in visual discrimination tasks, producing what the authors call an attentional facilitation effect (Craighero et al., 2002; Craighero et al., 1999). In a complementary fashion, other authors have shown that preparing to reach a spatial location with the body enhances the efficiency of visuospatial attention at the targeted location, whether the prepared movement is oculomotor (Irwin & Gordon, 1998; Sheliga et al., 1997) or manual (Deubel & Schneider, 2004; Deubel et al., 1998; Eimer et al., 2006; Schiegg et al., 2003). ...
Article
This article addresses the links between spatial attention and the motor preparation of manual reaching movements. During a visual search task, participants prepared to press one of the keys on a numeric keypad and executed this movement as soon as they detected an intruder on the screen. The experimental setup controlled two directions: that of the manual reach toward the response key (in the horizontal plane) and that of the attentional orienting allowing the intruder to be reached directly from the initial ocular fixation point (in the vertical plane). The results show that: (a) when these two directions are congruent, reaction times are shorter than when they are incongruent; (b) this effect emerges when the orientation of attention is voluntarily controlled by the subject, but is not significant if the intruder induces automatic attentional capture. These results suggest that visual search begins in the direction determined by the preparation of the manual reach. They are discussed within the framework of the premotor theory of attention.
... Second, there was relatively good performance in the abstract gesture group; this probably reflects more effective (but counterproductive) visual filtering in the other two conditions, due to stronger engagement of object manipulation mechanisms. Object handling directs visual attention onto those objects, and grasp preparation focuses attention on the object's size, thereby excluding other objects from processing (Schiegg et al., 2003; Fischer and Hoellen, 2004; Symes et al., 2008). Thus, reaching for and especially picking up two number magnets allowed the least processing of the other numbers present. ...
... In summary, there is strong experimental evidence for the link between visual attention and saccade preparation. The link between manual response preparation and shifts of spatial attention has been less convincing, but several studies ( Deubel et al., 1998;Eimer et al., 2006;Schiegg et al., 2003) provide support for the claim that covert preparation of manual responses is linked to shifts of spatial attention as well. ...
... Deubel and Schneider took this as evidence that attention for perception could not be decoupled from attention for action. Schiegg, Deubel, and Schneider (2003) made a similar finding when the task was to grasp an object: discrimination performance (using the same type of discrimination targets as Deubel and Schneider (1996)) was enhanced at the to-be-grasped locations of an object relative to the non-grasped locations of the object. ...
... The close relationship between the adaptation of visual resources and manual response preparation has received support from studies suggesting that a forthcoming movement results in enhanced visual processing being distributed to action-relevant locations. As mentioned in Chapter 1, a collection of behavioural and ERP studies have identified increased visuospatial discrimination at the intended goal location of a forthcoming reach movement (Baldauf & Deubel, 2008a, 2008b; Deubel et al., 1998; Riek et al., 2003; Schiegg et al., 2003). Although the research into visuospatial processing during the preparation of a movement has presented data clearly supporting goal-related activity, research has also examined sensory processing at cued effector locations prior to movement onset. ...
... The illusion has been considered evidence that at times the motor system uses a representation of object size that is unaffected by context and quite different from that used by perceptual processing [6]. Other 'illusion' studies [7-9] and 'selection-for-action' paradigms [4, 10-20] have, instead, revealed that the perceptual features of objects surrounding a target do indeed determine interference effects. In these circumstances, the simultaneous activation of responses to both a target and a distractor produces cross-talk during which the kinematic properties of the grasping movement evoked by a distractor contaminate those evoked by the target [21, 22]. ...
Article
Full-text available
Highly efficient systems are needed to link perception with action in the context of the highly complex environments in which primates move and interact. Another important component is, nonetheless, needed for action: selection. When one piece of fruit from a branch is being chosen by a monkey, many other pieces are within reach and visible: do the perceptual features of the objects surrounding a target determine interference effects? In humans, reaching to grasp a desired object appears to integrate the motor features of the objects which might become potential targets - a process which seems to be driven by inhibitory attention mechanisms. Here we show that non-human primates use similar mechanisms when carrying out goal-directed actions. The data indicate that the volumetric features of distractors are internally represented, implying that the basic cognitive operations allowing for action selection have deep evolutionary roots.
... Numerous studies now provide data consistent with this hypothesis. For instance, there have been repeated demonstrations of improved perceptual discrimination, prior to movement onset, at all upcoming movement targets relative to nontarget locations, for single movements (Deubel, Schneider, & Paprotta, 1998; Schiegg, Deubel, & Schneider, 2003), for short movement sequences (Baldauf & Deubel, 2008a; Baldauf, Wolf, & Deubel, 2006), for simultaneous bimanual movements (Baldauf & Deubel, 2008b), and even for tool-mediated movements (Collins, Schicke, & Röder, 2008). Similarly, classical attentional signatures in ERP have also been associated with movement preparation (Eimer et al., 2005) and with target locations for upcoming movements (Baldauf & Deubel, 2008c). ...
Article
Full-text available
During manually-assisted search, where participants must actively manipulate search items, it has been reported that participants will often select and move the target of search itself without recognizing it (Solman et al., 2012a). In two experiments we explore the hypothesis that this error results from a naturally-arising strategy that decouples perception and action during search, enabling motor interactions with items to outpace the speed of perceptual analysis. In Experiment 1, we report that the error is prevalent for both mouse and touch-screen interaction modes, and is uninfluenced by speeding or slowing instructions - ruling out these task-specific details as causes of the error. In Experiment 2 we manipulate motor speed, and show that reducing the speed of individual movements during search leads to a reduction in error rates. These findings support the conclusion that the error results from incoordination between motor and perceptual processes, with motor processes outpacing perceptual abilities.
... The Affordance Probabilistic Coding (APC) Module was designed so as to provide a computational solution to (b), that is, to the multiple affordance extraction problem (see Figure 1). To accomplish (c), that is, generalization capabilities enabling one to extract affordances from novel objects, a starting point was provided by the observation that the agent usually focuses its attention on the part of the object at which the grasping action is directed (Schiegg, Deubel, & Schneider, 2003). This behaviour suggests the possibility of associating parts of a graspable object with affordances, and of storing this "mereological" information for use when novel graspable objects are presented. ...
... In order to grasp an object, we desire a smooth, simultaneous movement of all fingers towards their corresponding contact points. According to neuroscientific studies (see [14]), during the grasping process humans tend to focus on a mostly fixed spot on the object surface which corresponds to the thumb contact position. Therefore, the thumb is assumed to lead the reaching movement of the end effector towards the object. ...
Conference Paper
Full-text available
In this paper, we present a grasp representation in task space exploiting position information of the fingertips. We propose a new way for grasp representation in the task space, which provides a suitable basis for grasp imitation learning. Inspired by neuroscientific findings, finger movement synergies in the task space together with fingertip positions are used to derive a parametric low-dimensional grasp representation. Taking into account correlating finger movements, we describe grasps using a system of virtual springs to connect the fingers, where different grasp types are defined by parameterizing the spring constants. Based on such continuous parameterization, all instantiation of grasp types and all hand preshapes during a grasping action (reach, preshape, enclose, open) can be represented. We present experimental results, in which the spring constants are merely estimated from fingertip motion tracking using a stereo camera setup of a humanoid robot. The results show that the generated grasps based on the proposed representation are similar to the observed grasps.
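The virtual-spring idea described in this abstract can be caricatured in a few lines. The function below is a minimal sketch under my own assumptions (names, stiffness values, and integration scheme are illustrative), not the authors' implementation:

```python
import numpy as np

def spring_step(fingertips, thumb, k, rest_len, dt=0.05):
    """One explicit-Euler step of a toy virtual-spring hand model.

    Each fingertip is connected to the thumb by a spring; the stiffness
    vector k parameterizes the grasp type (larger k -> faster enclosure).
    All constants are hypothetical, chosen for illustration only.
    """
    new_positions = []
    for p, ki, li in zip(fingertips, k, rest_len):
        d = thumb - p
        dist = np.linalg.norm(d)
        # Hooke's law: force acts along the fingertip-thumb axis
        force = ki * (dist - li) * d / dist
        new_positions.append(p + dt * force)
    return new_positions

# Example: a fingertip 1 unit from the thumb, with resting length 0.5,
# is pulled toward the thumb on each step.
tips = [np.array([1.0, 0.0, 0.0])]
thumb = np.array([0.0, 0.0, 0.0])
tips = spring_step(tips, thumb, k=[1.0], rest_len=[0.5])
```

Re-parameterizing the spring constants would then correspond to switching grasp types, in the spirit of the continuous parameterization the paper proposes.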
... In summary, there is strong experimental evidence for the link between visual attention and saccade preparation. The link between manual response preparation and shifts of spatial attention has been less convincing, but several studies (Baldauf et al., 2006; Deubel et al., 1998; Eimer et al., 2005, 2006; Schiegg et al., 2003) provide support for the claim that covert preparation of manual responses is linked to shifts of spatial attention as well. We propose a computational model of grasping of extrafoveal targets which is implemented on a robot setup. ...
Article
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second setting result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.
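The three task conditions compared in this abstract can be summarized as a small dispatch sketch. Every callable below is a placeholder standing in for the model's learned components (saccade controller, visual forward model, arm controller); the names and toy values are my assumptions, not the paper's code:

```python
def plan_grasp(target, retina, fixated, use_prediction,
               predict_image, estimate_pose, compute_posture):
    """Select the image used for pose estimation, then derive an arm posture.

    Mirrors the three conditions compared in the study:
    (1) fixated target               -> actual (foveal) retinal image,
    (2) non-fixated, with prediction -> predicted post-saccadic image,
    (3) non-fixated, no prediction   -> raw peripheral image.
    """
    if fixated:
        image = retina
    elif use_prediction:
        image = predict_image(retina, target)
    else:
        # Peripheral view only; the paper reports larger gripper
        # orientation errors in this condition.
        image = retina
    orientation_err, shape = estimate_pose(image)
    return compute_posture(target, orientation_err, shape)

# Toy stand-ins: visual prediction yields a foveal-quality image,
# which the pose estimator reads out with zero orientation error.
posture = plan_grasp(
    target=(0.3, 0.1), retina="peripheral", fixated=False, use_prediction=True,
    predict_image=lambda img, t: "foveal",
    estimate_pose=lambda img: (0.0 if img == "foveal" else 0.4, "cylinder"),
    compute_posture=lambda t, o, s: {"target": t, "orient_err": o, "shape": s},
)
```

The sketch only captures the control flow; the substance of the model lies in the learned mappings that the lambdas replace here.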
... These targets were separated by only 1.5° of visual angle, yet participants performed significantly better when the target was congruent with the movement endpoint. Indeed, the spatial specificity of attention shifts that precede actions is such that when a grasp is planned rather than a pointing action, attention is allocated only to the points of the object which will be in contact with the effector, and not the whole object (Schiegg, Deubel, & Schneider, 2003). Furthermore, facilitation at movement goals can be observed for both unimanual and bimanual pointing movements (Baldauf & Deubel, 2008b). ...
Article
Spatial attention and eye-movements are tightly coupled, but the precise nature of this coupling is controversial. The influential but controversial Premotor theory of attention makes four specific predictions about the relationship between motor preparation and spatial attention. Firstly, spatial attention and motor preparation use the same neural substrates. Secondly, spatial attention is functionally equivalent to planning goal directed actions such as eye-movements (i.e. planning an action is both necessary and sufficient for a shift of spatial attention). Thirdly, planning a goal directed action with any effector system is sufficient to trigger a shift of spatial attention. Fourthly, the eye-movement system has a privileged role in orienting visual spatial attention. This article reviews empirical studies that have tested these predictions. Contrary to predictions one and two there is evidence of anatomical and functional dissociations between endogenous spatial attention and motor preparation. However, there is compelling evidence that exogenous attention is reliant on activation of the oculomotor system. With respect to the third prediction, there is correlational evidence that spatial attention is directed to the endpoint of goal-directed actions but no direct evidence that this attention shift is dependent on motor preparation. The few studies to have directly tested the fourth prediction have produced conflicting results, so the extent to which the oculomotor system has a privileged role in spatial attention remains unclear. Overall, the evidence is not consistent with the view that spatial attention is functionally equivalent to motor preparation so the Premotor theory should be rejected, although a limited version of the Premotor theory in which only exogenous attention is dependent on motor preparation may still be tenable. 
A plausible alternative account is that activity in the motor system contributes to biased competition between different sensory representations with the winner of the competition becoming the attended item.
... In addressing the generalization problem with respect to novel/unknown objects (requirement (c) above), a heuristically useful starting point is provided by the observation that the agent usually focuses its attention on the part of the object at which the grasping action is directed (Schiegg, Deubel, & Schneider, 2003). This behavior suggests the possibility of associating parts of a graspable object with hand configurations, and of storing this "mereological" information for use when novel graspable objects are presented. ...
Article
The Grasping Affordance Model (GAM) introduced here provides a computational account of perceptual processes enabling one to identify grasping action possibilities from visual scenes. GAM identifies the core of affordance perception with visuo-motor transformations enabling one to associate features of visually presented objects to a collection of hand grasping configurations. This account is coherent with neuroscientific models of relevant visuo-motor functions and their localization in the monkey brain. GAM differs from other computational models of biological grasping affordances in the way of modeling focus, functional account, and tested abilities. Notably, by learning to associate object features to hand shapes, GAM generalizes its grasp identification abilities to a variety of previously unseen objects. Even though GAM information processing does not involve semantic memory access and full-fledged object recognition, perceptions of (grasping) affordances are mediated there by substantive computational mechanisms which include learning of object parts, selective analysis of visual scenes, and guessing from experience.
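The part-to-grasp association at the heart of GAM can be caricatured as a nearest-neighbour lookup from part features to stored hand configurations. This is a toy stand-in under my own assumptions (the feature axes and grasp labels are invented), not the model's actual learning mechanism:

```python
import numpy as np

def retrieve_grasp(part_features, stored_features, stored_grasps):
    """Generalize to a novel object part by retrieving the hand
    configuration whose stored feature vector is closest to the query.

    A deliberately simplified sketch of a learned visuo-motor
    association; GAM itself uses richer learned mappings.
    """
    dists = np.linalg.norm(stored_features - part_features, axis=1)
    return stored_grasps[int(np.argmin(dists))]

# Hypothetical memory (feature axes: part width, elongation):
# small round parts -> precision pinch, large elongated parts -> power grasp.
memory_feats = np.array([[0.02, 0.1], [0.08, 0.9]])
memory_grasps = ["precision_pinch", "power_grasp"]
grasp = retrieve_grasp(np.array([0.03, 0.2]), memory_feats, memory_grasps)
```

Because the lookup operates on part features rather than whole-object identities, a previously unseen object whose parts resemble stored ones still yields a grasp proposal, which is the generalization property the abstract emphasizes.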
... Evidence that the functions of the primate SC extend beyond eye movements has recently come from studies showing that manipulations of SC activity can influence covert attention, leading to changes in performance for difficult perceptual tasks (50-52). Furthermore, it has been suggested that the same covert attention mechanisms involved in perception might also be involved in selecting movement goals (53-55), although some physiological evidence argues against this proposition (56). The finding that the SC plays a causal role both in perceptual selection and in selection for action suggests that these two functions may be governed by a common mechanism. ...
Article
Full-text available
Purposive action requires the selection of a single movement goal from multiple possibilities. Neural structures involved in movement planning and execution often exhibit activity related to target selection. A key question is whether this activity is specific to the type of movement produced by the structure, perhaps consisting of a competition among effector-specific movement plans, or whether it constitutes a more abstract, effector-independent selection signal. Here, we show that temporary focal inactivation of the primate superior colliculus (SC), an area involved in eye-movement target selection and execution, causes striking target selection deficits for reaching movements, which cannot be readily explained as a simple impairment in visual perception or motor execution. This indicates that target selection activity in the SC does not simply represent a competition among eye-movement goals and, instead, suggests that the SC contributes to a more general purpose priority map that influences target selection for other actions, such as reaches.
... When R1 was given too early (< 100 ms after S1), too late (> 500 ms after S1), or with the wrong response key, immediate specific written error feedback appeared on the screen and the trial was aborted. The eligibility range of 100-500 ms for valid responses is in line with some previous motorvisual dual-task studies (e.g., Collins, Schicke, & Röder, 2008; Schiegg et al., 2003), while others have chosen slightly higher cutoff criteria of, for instance, 150 ms (Baldauf & Deubel, 2008; Hommel & Schneider, 2002) or 200 ms (Baldauf, Wolf, & Deubel, 2006; Deubel et al., 1998). We chose the lowest of the commonly applied cutoff criteria in order to, first, avoid any bias in the results from deleting very fast nonanticipative RTs (see Ulrich & Miller, 1994), and second, not to discourage participants from fast responding. ...
Article
Full-text available
Previous research has shown that actions impair the visual perception of categorically action-consistent stimuli. On the other hand, actions can also facilitate the perception of spatially action-consistent stimuli. We suggest that motorvisual impairment is due to action planning processes, while motorvisual facilitation is due to action control mechanisms. This implies that, because action planning is sensitive to modulations by cue-response mapping, motorvisual impairment should be too, while motorvisual facilitation should be insensitive to manipulations of cue-response mapping, as is action control. We tested this prediction in three dual-task experiments. The impact of performing left and right key presses on the perception of unrelated, categorically or spatially consistent, stimuli was studied. As expected, we found motorvisual impairment for categorically consistent stimuli and motorvisual facilitation for spatially consistent stimuli. In all experiments, we compared congruent with incongruent cue-key mappings. Mapping manipulations affected motorvisual impairment, but not motorvisual facilitation. The results support our suggestion that motorvisual impairment is due to action planning, and motorvisual facilitation to action control.
... It is now well established that the selection of a stimulus as the goal of a movement is related to an attention shift to the movement target. A number of studies have shown that these attention shifts precede the initiation of goal-directed saccades, reaching movements and grasping (Baldauf & Deubel, 2010; Deubel & Schneider, 1996; Deubel, Schneider, & Paprotta, 1998; Montagnini & Castet, 2007; Schiegg, Deubel, & Schneider, 2003). Hence, spatial attention can be used as an index of movement goal selection before movement onset. ...
Article
Full-text available
Dual-task costs are observed when people perform two tasks at the same time. It has been suggested that these costs arise from limitations of movement goal selection when multiple goal-directed movements are made simultaneously. To investigate this, we asked participants to reach and look at different locations while we varied the time between the cues to start the eye and the hand movement between 150 ms and 900 ms. In Experiment 1, participants executed the reach first and the saccade second; in Experiment 2 the order of the movements was reversed. We observed dual-task costs: participants were slower to start the eye or hand movement if they were planning another movement at that time. In Experiment 3, we investigated whether these dual-task costs were due to limited attentional resources needed to select saccade and reach goal locations. We found that the discrimination of a probe improved at both saccade and reach locations, indicating that attention shifted to both movement goals. Importantly, while we again observed the expected dual-task costs as reflected in movement latencies, there was no apparent delay of the associated attention shifts. Our results rule out attentional goal selection as the causal factor leading to the dual-task costs occurring in eye-hand movements.
... The premotor theory gives a clear explanation of the present results. In the training block, the horizontal manual movements are accompanied by a horizontal shift of attention, because attentional shifts are involved in preparing and executing a response, irrespective of the response modality (Craighero, Fadiga, Rizzolatti, & Umiltà, 1999; Deubel, Schneider, & Paprotta, 1998; Schiegg, Deubel, & Schneider, 2003). During this training, subjects learned an association between colors and motor movements as well as an association between colors and the shift of attention that accompanies motor movements. ...
Article
Full-text available
The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems.
... For example, TMS pulses applied to the contralateral frontal eye field (FEF) affected the recognition of target stimuli that appeared at the intended saccade locations, thereby providing functional evidence for the role of FEF in feedback connections to visual areas (Neggers et al. 2007). In addition, it has been shown that manual response preparation is accompanied by attentional shifts as well, resulting in enhanced recognition of targets at intended reaching and grasping locations (Deubel et al. 1998; Schiegg et al. 2003). Preparing a finger-lifting response, for instance, was found to facilitate the processing of tactile stimuli presented at the effector of the prepared movement (Juravle and Deubel 2009). ...
Article
Full-text available
The present study investigated the selection for action hypothesis, according to which a subject's action intention to perform a movement influences the way in which visual information is being processed. Subjects were instructed in separate blocks either to grasp or to point to a three-dimensional target-object and event-related potentials were recorded relative to stimulus onset. It was found that grasping compared with pointing resulted in a stronger N1 component and a subsequent selection negativity, which were localized to the lateral occipital complex. These effects suggest that the intention to grasp influences the processing of action-relevant features in ventral stream areas at an early stage (e.g., enhanced processing of object orientation for grasping). These findings provide new insight into the neural and temporal dynamics underlying perception-action coupling and provide neural evidence for a selection for action principle in early human visual processing.
Preprint
Full-text available
It is commonly held that computations of goal-directed behaviour are governed by conjunctive neural representations of the task features. However, support for this view comes from paradigms with arbitrary combinations of task features and task affordances that require representations in working memory. Therefore, in the present study we used a task that is well-rehearsed with task features that afford minimal working memory representations to investigate the temporal evolution of feature representations and their potential integration in the brain. Specifically, we recorded electroencephalography data from human participants while they first viewed and then grasped objects or touched them with a knuckle. Objects had different shapes and were made of heavy or light materials with shape and weight being features relevant for grasping but not for knuckling. Using multivariate analysis, we found that representations of object shape were similar for grasping and knuckling. However, only for grasping did early shape representations reactivate at later phases of grasp planning, suggesting that sensorimotor control signals feed back to early visual cortex. Grasp-specific representations of material/weight only arose during grasp execution after object contact during the load phase. A trend for integrated representations of shape and material also became grasp-specific but only briefly during movement onset. These results argue against the view that goal-directed actions inevitably join all features of a task into a sustained and unified neural representation. Instead, our results suggest that the brain generates action-specific representations of relevant features as required for the different subcomponents of its action computations. Significance statement The idea that all the features of a task are integrated into a joint representation or event file is widely supported but importantly based on paradigms with arbitrary stimulus-response combinations.
Our study is the first to investigate grasping using electroencephalography to search for the neural basis of feature integration in such a daily-life task with overlearned stimulus-response mappings. Contrary to the notion of event files we find limited evidence for integrated representations. Instead, we find that task-relevant features form representations at specific phases of the action. Our results show that integrated representations do not occur universally for any kind of goal-directed behaviour but rather on demand, as the relevant computations require.
Article
Full-text available
Dual-task studies have demonstrated that goal-directed actions are typically preceded by a premotor shift of visual attention toward the movement goal location. This finding is often taken as evidence for an obligatory coupling between attention and motor preparation. Here, we examined whether this coupling entails a habitual component relating to an expectation of spatial congruence between visual and motor targets. In two experiments, participants had to identify a visual discrimination target (DT) while preparing variably delayed pointing movements to a motor target (MT). To induce distinct expectations regarding the DT position, different groups of participants performed a training phase in which the DT either always appeared at MT, opposite to MT, or at an unpredictable position. In a subsequent test phase, the DT position was randomized to assess the impact of learned expectancy on premotor attention allocation. Whereas we applied individually determined DT presentation times in the test phase of Experiment 1, a fixed DT presentation time was used in Experiment 2. Both experiments yielded evidence for attentional enhancement at the expected DT position. Although interpretability of this effect was limited in Experiment 1 because of between-group differences in DT presentation time, results of Experiment 2 were much clearer. Specifically, a marked discrimination benefit was observed at the position opposite to MT in participants anticipating the DT at this position, whereas no statistically significant benefit was found at MT. Crucially, this was observed at short movement delays, demonstrating that expectation of spatial incongruence between visual and motor targets allows for decoupling of attentional resources from ongoing motor preparation. Based on our findings, we suggest that premotor attention shifts entail a considerable habitual component rather than being the sole result of motor programming.
Article
Visual perception is closely related to body movements and action, and it is known that processing visual stimuli is facilitated at the hand or at the hand-movement goal. Such facilitation suggests that there may be an attentional process associated with the hands or hand movements. To investigate the underlying mechanisms of visual attention at a hand-movement goal, we conducted 2 experiments to examine whether attention at the hand-movement goal is a process independent from endogenous attention. Endogenous attention is attention that is intentionally focused on a location, feature, or object. We controlled the hand-movement goal and endogenous attention separately to investigate the spatial profiles of the two types of attention. A visual target was presented either at the goal of hand movement (same condition) or at its opposite side (opposite condition) while steady-state visual-evoked potential (SSVEP) was used to estimate the spatial distributions of the facilitation effect from the 2 types of attention around the hand-movement goal and around the visual target through EEG. We estimated the spatial profile of attentional modulation for the hand-movement goal by taking the difference in SSVEP amplitude between conditions with and without hand movement, thereby obtaining the effect of visual endogenous attention alone. The results showed a peak at the hand-movement goal, independent of the location of the visual target where participants intentionally focused their attention (endogenous attention). We also found differences in the spatial extent of attentional modulation. Spatial tuning was narrow around the hand-movement goal (i.e., attentional facilitation only at the goal location) but was broadly tuned around the focus of endogenous attention (i.e., attentional facilitation spreading over adjacent stimulus locations), which was obtained from the condition without hand movement. 
These results suggest the existence of 2 separate mechanisms, 1 underlying the attention at the hand-movement goal and another underlying endogenous attention.
Article
Patients with optic ataxia following lesions to superior parts of the posterior parietal cortex make large errors when reaching to targets in the peripheral visual field. These errors are characterised by a contraction, or attraction, toward the point of fixation. These patients also have a reduced ability to allocate visual attention away from the point of fixation, but it is unclear whether the core symptom of misreaching is related to these attentional problems. In neurologically-intact adults, we tested the effect of an attention-demanding dual-task performed at fixation upon visually-guided reaching to peripheral targets. The dual task was associated with delayed movement initiation, and a shortened deceleration phase of movement suggesting a reduced ability to benefit from online control. It also induced a small but consistent shift of reaching endpoints towards the side of fixation. Our experimental restriction of visual attention thus impaired both the programming and control of reaching, and induced a spatial pattern of errors that was qualitatively reminiscent of optic ataxia, albeit much less severe. These findings are consistent with a close functional link between attention and action in the healthy brain, and suggest that attentional disturbances could be a core component of optic ataxia following parietal lesions.
Article
Alfred L. Yarbus was among the first to demonstrate that eye movements actively serve our perceptual and cognitive goals, a crucial recognition that is at the heart of today’s research on active vision. He realized that it is not the changes in fixation themselves that stick in memory, but the accompanying shifts of attention. Indeed, oculomotor control is tightly coupled to functions as fundamental as attention and memory. This tight relationship offers an intriguing perspective on transsaccadic perceptual continuity, which we experience despite the fact that saccades cause rapid shifts of the image across the retina. Here, I elaborate this perspective based on a series of psychophysical findings. First, saccade preparation shapes the visual system’s priorities; it enhances visual performance and perceived stimulus intensity at the targets of the eye movement. Second, before saccades, the deployment of visual attention is updated, predictively facilitating perception at those retinal locations that will be relevant once the eyes land. Third, saccadic eye movements strongly affect the contents of visual memory, highlighting their crucial role for which parts of a scene we remember or forget. Together, these results provide insights on how attentional processes enable the visual system to cope with the retinal consequences of saccades.
Article
Full-text available
While the relationship between action and focused attention has been well-studied, less is known about the ability to divide attention while acting. In the current paper we explore this issue using the multiple object tracking (MOT) paradigm (Pylyshyn & Storm, 1988). We asked whether planning and executing a display-relevant action during tracking would substantially affect the ability to track and later identify targets. In all trials the primary task was to track 4 targets among a set of 8 identical objects. Several times during each trial, one object, selected at random, briefly changed colour. In the baseline MOT trials, these changes were ignored. During active trials, each changed object had to be quickly touched. On a given trial, changed objects were either from the tracking set or were selected at random from all 8 objects. Although there was a small dual-task cost, the need to act did not substantially impair tracking under either touch condition.
Article
We report four experiments on the speed of people's reactions to sensory stimulation while throwing and catching a basketball. Thirty participants took part in Experiment 1, split according to basketball expertise: none, intermediate (6 years on average), or advanced (20 years or more). The participants had to catch a bouncing basketball. The movement triggered a short tactile pulse in a tactor attached to their wrist to which they made a speeded vocal response (RT). The pulse could be presented either at rest, at two time-points during the reaching movement, or when the hand reached forward to catch the ball. The results indicated that participants responded more rapidly to vibrations on the moving hand relative to preparing or catching the ball, with expert athletes responding significantly faster than novices. In a second experiment, participants made a speeded vocal response to an auditory signal. As in Experiment 1, faster auditory RTs were observed when the hand was moving, as compared to the other time-points. In a third study, the participants responded to a pulse delivered at their resting hand at various time-points corresponding to the average timings of stimulation in Experiment 1. The results revealed comparable RTs across the tested time-points. In a final experiment, the participants made a vocal response to a pulse presented at various time-points while they were throwing the basketball. The results indicated faster tactile RTs while the ball was being thrown. These results are discussed with reference to the literature on goal-directed movements and in terms of current theories of attention and sensory suppression.
Article
Action guidance, like perceptual discrimination, requires selective attention. Perception is enhanced at the target of a reaching movement, but it is not known whether selecting an object for perception reciprocally prioritises it for action. Two theoretical frameworks, the premotor theory and the Visual Attention Model, predict that this reciprocal relation should hold. We tested the influence of perceptual attention on the online control of reaching. In Experiment 1, participants attended covertly to a flanker on one or other side of a fixated target, prior to reaching for that target, which occasionally jumped, after reach onset, to the attended or non-attended side. Participants corrected their reaches for almost all target jumps. In Experiment 2, we required covert monitoring of the flanker during reaching. This concurrent perceptual task globally reduced correction behaviour, indicating that perception and action share a common attentional resource. Corrections were especially unlikely toward the attended side. This is explained by assuming that perceptual attention primed an action toward the attended location and that the participant inhibited this primed action. The data thus imply that perceptual selection constrains online action guidance, as predicted by the premotor theory and the VAM. We further argue that the fact that participants can inhibit a location within the action system but simultaneously maintain its prioritisation for perceptual monitoring, is easier to reconcile with the VAM than with the premotor theory.
Article
Generalization represents the ability to transfer what has been learned in one context to another context beyond limited experience. Because acquired motor representations often have to be reinstated in a different or novel environment, generalization is a crucial part of visuomotor learning. In daily life, training for new motor skills often occurs in a complex environment, in which dividing attentional resources for multiple stimuli is required. However, it is unknown how dividing attention during learning affects the generalization of visuomotor learning. We examined how divided attention during training modulates the generalization of visuomotor rotational adaptation. Participants were trained to adapt to one direction with or without dividing attention to a simultaneously presented visual detection task. Then, they had to generalize rotational adaptation to other untrained directions. We show that visuomotor training with divided attention multiplicatively reduces the gain and sharpens the tuning of the generalization function. We suggest that limiting attention narrowly restricts an internal model, reducing the range and magnitude of transfer. This result suggests that attention modulates a selective subpopulation of neurons in motor areas, those with directional tuning values in or near the training direction.
Article
Full-text available
In this study, we have investigated the influence of available attentional resources on the dual-task costs of implementing a new action plan and the influence of movement planning on the transfer of information into visuospatial working memory. To approach these two questions, we have used a motor-memory dual-task design in which participants grasped a sphere and planned a placing movement toward a left or right target according to a directional arrow. Subsequently, they encoded a centrally presented memory stimulus (4 × 4 symbol matrix). While maintaining the information in working memory, a visual stay/change cue (presented on the left, center or right) either confirmed or reversed the planned movement direction. That is, participants had to execute either the prepared or the re-planned movement and finally reported the symbols at leisure. The results show that both the shifts of spatial attention required to process the incongruent stay/change cues and movement re-planning constitute processing bottlenecks, as they both reduced visuospatial working memory performance. Importantly, the spatial attention shifts and movement re-planning appeared to be independent of each other. Further, we found that the initial preparation of the placing movement influenced the report pattern of the central working memory stimulus. Preparing a leftward movement resulted in better memory performance for the left stimulus side, while the preparation of a rightward movement resulted in better memory performance for the right stimulus side. Hence, movement planning influenced the transfer of information into the capacity-limited working memory store. Therefore, our results suggest complex interactions in that the processes involved in movement planning, spatial attention and visuospatial working memory are functionally correlated but not linked in a mandatory fashion.
Article
Eye tracking has been one of the most insight-gaining methods in advertising research for many decades. Major developments in eye-tracking techniques (hardware and software) and the emergence of neuromarketing a couple of years ago have in large part accounted for the rising interest in eye tracking in marketing research these days. Eye tracking is an implicit method to measure the effectiveness of ads, commercials and other visual marketing stimuli. It provides objective measures of an ad’s or a commercial’s effectiveness in terms of when, how often and how long certain aspects of an ad have been fixated. It also provides answers as to why important elements (e.g. brand) have not captured the consumer’s attention. Whereas eye tracking has long been used in print advertising research, the analysis of dynamic stimuli such as commercials has only become possible in the last few years. Based on psychophysiological and neurological theory, this article gives a review of eye-tracking metrics which are essential to analyse and interpret eye-tracking data. The results of two eye-tracking studies, for print advertisements in the Yellow Pages and for commercials, demonstrate the great potential of eye tracking for identifying factors which can boost an ad’s or a commercial’s effectiveness.
Article
Natural scenes contain far more information than can be processed simultaneously. Thus, our visually guided behavior depends crucially on the capacity to attend to relevant stimuli. Past studies have provided compelling evidence of functional overlap of the neural mechanisms that control spatial attention and saccadic eye movements. Recent neurophysiological work demonstrates that the neural circuits involved in the preparation of saccades also play a causal role in directing covert spatial attention. At the same time, other studies have identified separable neural populations that contribute uniquely to visual and oculomotor selection. Taken together, all of the recent work suggests how visual and oculomotor signals are integrated to simultaneously select the visual attributes of targets and the saccades needed to fixate them.
Article
According to action-centered models of attention, attention and action systems are tightly linked such that the capture of attention by an object automatically initiates response-producing processes. In support of this link, studies have shown that movements deviate towards or away from non-target stimuli. These deviations are thought to emerge because attentional capture by non-target stimuli generates responses that summate with target responses to develop a combined movement vector. The present study tested attention-action coupling by examining movement trajectories in the presence of non-target stimuli that do or do not capture attention. Previous research has revealed that non-target cue stimuli only capture attention when they share critical features with the target. Cues that do not share this feature do not capture attention. Following these studies and their findings, participants in the present study aimed at the location of a single white square (onset singleton target) or a single red square presented with two white squares (color singleton target). In separate blocks, targets were preceded by non-predictive cues that did or did not share the target feature (color or onset singleton cues). The critical finding of the present study was that trajectory effects mirrored the temporal interference effects in that deviations were only observed when cue and target properties matched. Deviations were not observed when the cue and target properties did not match. These data provide clear support for the link between attentional capture and the activation of response-producing processes.
Article
It has been suggested that the kinematics of a reach-to-grasp movement, performed within an action sequence, vary depending on the action goal and the properties of subsequent movement segments (action context effect). The aim of this study was to investigate whether the action context also affects action sequences that consist of several grasping movements directed toward different target objects. Twenty participants were asked to perform a sequence in which they grasped a cylinder, placed it into a target area, and subsequently grasped and displaced a target bar of a certain orientation. We specifically tested whether the orientation of the target bar being grasped in the last movement segment influenced the grip orientation adopted to grasp and place the cylinder in the preceding segments. When all movement segments within the sequence were easy to perform, results indeed showed that grip orientation chosen in the early movement segments depended on the forthcoming motor demands, suggesting a holistic planning process. In contrast, high accuracy demands in specifying a movement segment reduced the ability of the motor system to plan and organize the movement sequence into larger chunks, thus causing a shift toward sequential performance. Additionally, making the placing task more difficult resulted in prolonged reaction times and increased the movement times of all other movement segments.
Article
Full-text available
Reports 5 experiments conducted with 52 paid Ss in which detection of a visual signal required information to reach a system capable of eliciting arbitrary responses required by the experimenter. Detection latencies were reduced when Ss received a cue indicating where the signal would occur. This shift in efficiency appears to be due to an alignment of the central attentional system with the pathways to be activated by the visual input. It is also possible to describe these results as being due to a reduced criterion at the expected target position. However, this ignores important constraints about the way in which expectancy improves performance. A framework involving a limited-capacity attentional mechanism seems to capture these constraints better than the more general language of criterion setting. Using this framework, it was found that attention shifts were not closely related to the saccadic eye movement system. For luminance detection, the retina appears to be equipotential with respect to attention shifts, since costs to unexpected stimuli are similar whether foveal or peripheral.
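The cost-benefit logic of this cueing paradigm can be made concrete with a minimal numerical sketch: benefits are computed relative to a neutral (uninformative) cue, costs relative to invalidly cued locations. The reaction times below are illustrative values, not data from the study.

```python
# Hypothetical mean reaction times (ms) in a Posner-style cueing task.
# "valid": target at the cued location; "neutral": uninformative cue;
# "invalid": target at an uncued location. Values are made up for illustration.
rt = {"valid": 250.0, "neutral": 280.0, "invalid": 320.0}

def cueing_effects(rt):
    """Benefit = neutral - valid RT; cost = invalid - neutral RT."""
    benefit = rt["neutral"] - rt["valid"]
    cost = rt["invalid"] - rt["neutral"]
    return benefit, cost

benefit, cost = cueing_effects(rt)
print(f"benefit = {benefit} ms, cost = {cost} ms")  # benefit = 30.0 ms, cost = 40.0 ms
```

On this decomposition, the finding that costs to unexpected stimuli are similar for foveal and peripheral targets corresponds to the cost term being roughly independent of retinal eccentricity.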
Article
Full-text available
This article presents a theory of selective attention that is intended to account for the identification of a visual shape in a cluttered display. The selected area of attention is assumed to be controlled by a filter that operates on the location information in a display. The location information selected by the filter in turn determines the feature information that is to be identified. Changes in location of the selected area are assumed to be governed by a gradient of processing resources. Data from three new experiments are fit more parsimoniously by a gradient model than by a moving-spotlight model. The theory is applied to experiments in the recent literature concerned with precuing locations in the visual field, and to the issue of attentional and automatic processing in the identification of words. Finally, data from neuroanatomical experiments are reviewed to suggest ways that the theory might be realized in the primate brain.
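The gradient idea contrasted with the moving-spotlight model can be sketched as a smooth fall-off of processing resources with distance from the attended locus, rather than a uniform region with hard edges. The Gaussian profile and the parameter values below are illustrative assumptions, not the paper's fitted model.

```python
# A minimal sketch of a resource gradient: relative processing resources
# decline smoothly with distance from the centre of the attended area.
# The Gaussian shape and sigma are assumptions chosen for illustration.
import math

def resources(distance_deg, sigma=2.0):
    """Relative resources at `distance_deg` (degrees) from the attended locus."""
    return math.exp(-distance_deg**2 / (2 * sigma**2))

for d in (0.0, 2.0, 4.0):
    print(f"{d:.0f} deg from focus: {resources(d):.2f}")
```

A spotlight model would instead predict a step function: full resources inside the beam, none outside, with no graded decline in between.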
Article
Full-text available
The relationship between saccadic eye movements and covert orienting of visual spatial attention was investigated in two experiments. In the first experiment, subjects were required to make a saccade to a specified location while also detecting a visual target presented just prior to the eye movement. Detection accuracy was highest when the location of the target coincided with the location of the saccade, suggesting that subjects use spatial attention in the programming and/or execution of saccadic eye movements. In the second experiment, subjects were explicitly directed to attend to a particular location and to make a saccade to the same location or to a different one. Superior target detection occurred at the saccade location regardless of attention instructions. This finding shows that subjects cannot move their eyes to one location and attend to a different one. The results of these experiments suggest that visuospatial attention is an important mechanism in generating voluntary saccadic eye movements.
Article
Full-text available
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.
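The core guidance mechanism can be illustrated with a toy sketch: each item receives an activation combining a top-down match to the target template with a bottom-up salience term, and the limited-capacity stage visits items in order of decreasing activation. All item names, weights, and salience values are illustrative assumptions, not parameters of the GS2 simulation.

```python
# Toy sketch of the guided-search idea (not the GS2 simulation itself):
# preattentive feature maps yield one activation value per item; attention
# then visits items in order of decreasing activation.

def activation(item, target_features, w_top_down=1.0, w_bottom_up=0.5):
    # Top-down term: number of item features matching the target template.
    top_down = sum(f in target_features for f in item["features"])
    # Bottom-up term: a precomputed local-contrast salience value (assumed).
    return w_top_down * top_down + w_bottom_up * item["salience"]

def search_order(items, target_features):
    """Return item names sorted by decreasing activation (the visit order)."""
    return [it["name"] for it in sorted(
        items, key=lambda it: activation(it, target_features), reverse=True)]

items = [
    {"name": "red-vertical",   "features": {"red", "vertical"},   "salience": 0.2},
    {"name": "red-horizontal", "features": {"red", "horizontal"}, "salience": 0.1},
    {"name": "green-vertical", "features": {"green", "vertical"}, "salience": 0.1},
]
# Searching for a red vertical target: the conjunction item ranks first,
# so guidance makes even conjunction search reasonably efficient.
print(search_order(items, {"red", "vertical"}))
```

Items sharing only one target feature receive intermediate activations, which is how the model accounts for the graded efficiency of conjunction searches.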
Article
Full-text available
In four experiments, the influence of distractor objects on the temporal evolution of the reach-to-grasp movement toward a target object (an apple) was examined. In the first experiment, the distractor was another apple, which moved laterally behind the target and occasionally changed direction toward the target, thus becoming the to-be-grasped object. In the second and third experiments, the distractor was a stationary piece of fruit, which sometimes became the to-be-grasped object because of a change in illumination. The fourth experiment was a combination of the first two experiments. In all cases, selective interference effects on the transport and manipulation components were observed only when attention to the distractor was covert rather than overt. It is proposed that covert visuospatial attention selects information about distracting but potentially important stimuli, such that a registration of significance is accomplished without the need to process all available information.
Article
Full-text available
A unified theory of visual recognition and attentional selection is developed by integrating the biased-choice model for single-stimulus recognition (Luce, 1963; Shepard, 1957) with a choice model for selection from multielement displays (Bundesen, Pedersen, & Larsen, 1984) in a race model framework. Mathematically, the theory is tractable, and it specifies the computations necessary for selection. The theory is applied to extant findings from a broad range of experimental paradigms. The findings include effects of object integrality in selective report, number and spatial position of targets in divided-attention paradigms, selection criterion and number of distracters in focused-attention paradigms, delay of selection cue in partial report, and consistent practice in search. On the whole, the quantitative fits are encouraging.
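The race-model framework of this theory has a simple quantitative core: each display element x is processed at a rate v(x) given by its sensory evidence eta(x) weighted by a perceptual bias beta(x), and elements race for encoding, so with exponential finishing times the probability that x wins is v(x) divided by the summed rates. The sketch below illustrates that selection rule; the eta/beta values are illustrative assumptions, not fitted parameters.

```python
# Sketch of the biased-choice race intuition: processing rate
# v(x) = eta(x) * beta(x), and for exponentially distributed finishing
# times, P(x is encoded first) = v(x) / sum of all rates.
# All numbers below are made up for illustration.

def selection_probabilities(eta, beta):
    """Probability that each element wins the race for encoding."""
    rates = {x: eta[x] * beta[x] for x in eta}
    total = sum(rates.values())
    return {x: r / total for x, r in rates.items()}

# A target with strong evidence and bias versus two weaker distractors.
eta  = {"target": 3.0, "distractor1": 1.0, "distractor2": 1.0}
beta = {"target": 1.0, "distractor1": 0.5, "distractor2": 0.5}
print(selection_probabilities(eta, beta))  # target wins with probability 0.75
```

Raising beta for a category of elements models a selection criterion favoring that category, while eta captures stimulus-driven evidence, which is how the theory spans both focused- and divided-attention paradigms.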
Article
Full-text available
A new theory of search and visual attention is presented. Results support neither a distinction between serial and parallel search nor between search for features and conjunctions. For all search materials, instead, difficulty increases with increased similarity of targets to nontargets and decreased similarity between nontargets, producing a continuum of search efficiency. A parallel stage of perceptual grouping and description is followed by competitive interaction between inputs, guiding selective access to awareness and action. An input gains weight to the extent that it matches an internal description of that information needed in current behavior (hence the effect of target-nontarget similarity). Perceptual grouping encourages input weights to change together (allowing "spreading suppression" of similar nontargets). The theory accounts for harmful effects of nontargets resembling any possible target, the importance of local nontarget grouping, and many other findings.
Article
Full-text available
Stimuli presented in a non-attended location are responded to much slower than stimuli presented in an attended one. The hypotheses proposed to explain this effect make reference to covert movement of attention, hemifield inhibition, or attentional gradients. The experiment reported here was aimed at discriminating among these hypotheses. Subjects were cued to attend to one of four possible stimulus locations, which were arranged either horizontally or vertically, above, below, to the right or left of a fixation point. The instructions were to respond manually as fast as possible to the occurrence of a visual stimulus, regardless of whether it occurred in a cued or in a non-cued location. In 70% of the cued trials the stimulus was presented in the cued location and in 30% in one of the non-cued locations. In addition there were trials in which a non-directional cue instructed the subject to pay attention to all four locations. The results showed that the correct orienting of attention yielded a small but significant benefit; the incorrect orienting of attention yielded a large and significant cost; the cost tended to increase as a function of the distance between the attended location and the location that was actually stimulated; and an additional cost was incurred when the stimulated and attended locations were on opposite sides of the vertical or horizontal meridian. We concluded that neither the hypothesis postulating hemifield inhibition nor that postulating movement of attention with a constant time can explain the data. The hypothesis of an attention gradient and that of attention movements with a constant speed are tenable in principle, but they fail to account for the effect of crossing the horizontal and vertical meridians. A hypothesis is proposed that postulates a strict link between covert orienting of attention and programming explicit ocular movements. Attention is oriented to a given point when the oculomotor programme for moving the eyes to this point is ready to be executed. Attentional cost is the time required to erase one oculomotor programme and prepare the next one.
Article
Full-text available
Theories of visual attention deal with the limit on our ability to see (and later report) several things at once. These theories fall into three broad classes. Object-based theories propose a limit on the number of separate objects that can be perceived simultaneously. Discrimination-based theories propose a limit on the number of separate discriminations that can be made. Space-based theories propose a limit on the spatial area from which information can be taken up. To distinguish these views, the present experiments used small (less than 1 degree), brief, foveal displays, each consisting of two overlapping objects (a box with a line struck through it). It was found that two judgments that concern the same object can be made simultaneously without loss of accuracy, whereas two judgments that concern different objects cannot. Neither the similarity nor the difficulty of required discriminations, nor the spatial distribution of information, could account for the results. The experiments support a view in which parallel, preattentive processes serve to segment the field into separate objects, followed by a process of focal attention that deals with only one object at a time. This view is also able to account for results taken to support both discrimination-based and space-based theories.
Article
Full-text available
Bartlett viewed thinking as a high level skill exhibiting ballistic properties that he called its “point of no return”. This paper explores one aspect of cognition through the use of a simple model task in which human subjects are asked to commit attention to a position in visual space other than fixation. This instruction is executed by orienting a covert (attentional) mechanism that seems sufficiently time locked to external events that its trajectory can be traced across the visual field in terms of momentary changes in the efficiency of detecting stimuli. A comparison of results obtained with alert monkeys, brain injured and normal human subjects shows the relationship of this covert system to saccadic eye movements and to various brain systems controlling perception and motion. In accordance with Bartlett's insight, the possibility is explored that similar principles apply to orienting of attention toward sensory input and orienting to the semantic structures used in thinking.
Article
Full-text available
Detection of a visual signal requires information to reach a system capable of eliciting arbitrary responses required by the experimenter. Detection latencies are reduced when subjects receive a cue that indicates where in the visual field the signal will occur. This shift in efficiency appears to be due to an alignment (orienting) of the central attentional system with the pathways to be activated by the visual input. It would also be possible to describe these results as being due to a reduced criterion at the expected target position. However, this description ignores important constraints about the way in which expectancy improves performance. First, when subjects are cued on each trial, they show stronger expectancy effects than when a probable position is held constant for a block, indicating the active nature of the expectancy. Second, while information on spatial position improves performance, information on the form of the stimulus does not. Third, expectancy may lead to improvements in latency without a reduction in accuracy. Fourth, there appears to be little ability to lower the criterion at two positions that are not spatially contiguous. A framework involving the employment of a limited-capacity attentional mechanism seems to capture these constraints better than the more general language of criterion setting. Using this framework, we find that attention shifts are not closely related to the saccadic eye movement system. For luminance detection the retina appears to be equipotential with respect to attention shifts, since costs to unexpected stimuli are similar whether foveal or peripheral. These results appear to provide an important model system for the study of the relationship between attention and the structure of the visual system.
Article
Full-text available
The relationship between saccadic eye movements and covert orienting or visual spatial attention was investigated in two experiments. In the first experiment, subjects were required to make a saccade to a specified location while also detecting a visual target presented just prior to the eye movement. Detection accuracy was highest when the location of the target coincided with the location of the saccade, suggesting that subjects use spatial attention in the programming and/or execution of saccadic eye movements. In the second experiment, subjects were explicitly directed to attend to a particular location and to make a saccade to the same location or to a different one. Superior target detection occurred at the saccade location regardless of attention instructions. This finding shows that subjects cannot move their eyes to one location and attend to a different one. The result of these experiments suggest that visuospatial attention is an important mechanism in generating voluntary saccadic eye movements.
Article
Full-text available
Space- and object-based attention components were examined in neurologically normal and parietal-lesion subjects, who detected a luminance change at 1 of 4 ends of 2 outline rectangles. One rectangle end was precued (75% valid); on invalid-cue trials, the target appeared at the other end of the cued rectangle or at 1 end of the uncued rectangle. For normals, the cost for invalid cues was greater for targets in the uncued rectangle, indicating an object-based component. Both right- and left-hemisphere patients showed costs that were greater for contralesional targets. For right-hemisphere patients, the object cost was equivalent for contralesional and ipsilesional targets, indicating a spatial deficit, whereas the object cost for left-hemisphere patients was larger for contralesional targets, indicating an object deficit.
Article
Full-text available
In 5 experiments, it was found that judging the relative location of 2 contours was more difficult when they belonged to 2 objects rather than 1. This was observed even when the 1- and 2-object displays were physically identical, with perceptual set determining how many objects they were seen to contain. Such a 2-object cost is consistent with object-based views of attention and with a hierarchical scheme for position coding, whereby object parts are located relative to the position of their parent object. In further experiments, it was shown that in accord with this hierarchical scheme, the relative location of objects could disrupt judgments of the relative location of object parts, but the reverse did not occur. This was found even when the relative position of the parts could be judged more quickly than that of the objects.
Article
Full-text available
Descriptions of interference effects from non-relevant stimuli are extensive in visual target detection and identification paradigms. To explore the influence of features of non-relevant objects on reach-to-grasp movements, we instructed healthy normal controls to reach for and pick up a cylinder (target) placed midsagittally 30 cm from the starting position of the hand. In Experiment 1, the target was presented alone, or accompanied by a narrower, wider, or same-size distractor positioned to the left or right of the target. In Experiment 2, the target was presented alone or accompanied by a distractor, which was slanted at a different orientation to the target. Reflective markers were placed on the wrist, thumb, and index finger of the right hand, and infra-red light-detecting cameras recorded their displacement through a calibrated 3-dimensional working space. Kinematic parameters were derived and analysed. Consistent changes in the expression of peak velocity, acceleration, and deceleration were evident when the distractor was narrower or wider than the target. The impact of the orientation of the distractor, conversely, was not marked. We discuss the results in the context of physiological findings and models of selective attention.
Article
We recently demonstrated that visual attention before saccadic eye movements is focused on the saccade target, allowing for spatially selective object recognition (Deubel and Schneider Vision Research in press). Here we investigate the role of visual selective attention in the preparation of aiming hand movements. The interaction of visual attention and manual aiming was studied in a dual-task paradigm that required manual pointing to a target in combination with a letter discrimination task. Subjects were asked to keep fixation in the centre of a screen. Upon offset of a central cue, they had to aim, with unseen hand, to locations within horizontal letter strings left or right from the central fixation; movements were registered with a Polhemus FastTrack system. The ability to discriminate between the symbol “E” and its mirror image presented tachistoscopically within the surrounding distractors was taken as the measure of visual attention. The results reveal that discrimination performance is far superior when the discrimination stimulus is also the target for manual aiming; when discrimination stimulus and pointing target refer to different objects, performance deteriorates. We conclude that it is not possible to maintain attention on a stimulus while directing a manual movement to a spatially separate object. Rather, our results argue for an obligatory and selective coupling of visual attention and movement programming, just as found for saccadic eye movements. This is consistent with a model of visual attention (proposed by Schneider) in which a unitary attention mechanism selects a goal object for visual processing, and simultaneously provides the information necessary for goal-directed motor action such as saccades, pointing, and grasping.
Article
The primate visual system can be divided into a ventral stream for perception and recognition and a dorsal stream for computing spatial information for motor action. How are selection mechanisms in both processing streams coordinated? We recently demonstrated that selection-for-perception in the ventral stream (usually termed “visual attention”) and saccade target selection in the dorsal stream are tightly coupled (Deubel & Schneider, 1996). Here we investigate whether such coupling also holds for the preparation of manual reaching movements. A dual-task paradigm required the preparation of a reaching movement to a cued item in a letter string. Simultaneously, the ability to discriminate between the symbols “E” and “∃” presented tachistoscopically within the surrounding distractors was taken as a measure of perceptual performance. The data demonstrate that discrimination performance is superior when the discrimination stimulus is also the target for manual aiming; when the discrimination stimulus and pointing target refer to different objects, performance deteriorates. Therefore, it is not possible to maintain attention on a stimulus for the purpose of discrimination while directing a movement to a spatially separate object. The results argue for an obligatory coupling of (ventral) selection-for-perception and (dorsal) selection-for-action.
Article
This paper introduces a new neuro-cognitive Visual Attention Model, called VAM. It is a model of visual attention control of segmentation, object recognition, and space-based motor action. VAM is concerned with two main functions of visual attention-that is “selection-for-object-recognition” and “selection-for-space-based-motor-action”. The attentional control processes that perform these two functions restructure the results of stimulus-driven and local perceptual grouping and segregation processes, the “visual chunks”, in such a way that one visual chunk is globally segmented and implemented as an “object token”. This attentional segmentation solves the “inter- and intra-object-binding problem”. It can be controlled by higher-level visual modules of the what-pathway (e.g. V4/IT) and/or the where-pathway (e.g. PPC) that contain relatively invariant “type-level” information (e.g. an alphabet of shape primitives, colors with constancy, locations for space-based motor actions). What-based attentional control is successful if there is only one object in the visual scene whose type-level features match the intended target object description. If this is not the case, where-based attention is required that can serially scan one object location after another.
Article
Models of visual attention have, with few exceptions, suggested that attention is deployed to unitary regions of visual space. Kramer and Kahn (1995) recently reported that attention is considerably more flexible than previously believed, such that under some conditions attention may be focused on multiple non-contiguous areas of the visual field. In the five studies reported here, we examined the boundary conditions on the ability to divide attention among different locations in visual space. In each of the studies, subjects performed a same-different matching task with target letters that were presented on opposite sides of a set of distracter letters. Experiments 1, 2 and 3 provide further support for our proposal that subjects can concurrently attend to non-contiguous locations as long as new objects do not appear between the attended areas. Experiment 4 examined whether the disruption of multiple attentional foci was the result of the capture of attention by new objects per se, or by task-irrelevant objects. Multiple attentional foci could be maintained as long as new distracter objects did not appear between target locations. Experiment 5 examined whether attention can be divided among non-contiguous locations within as well as between hemifields. Hemifield boundaries did not constrain subjects' ability to divide attention among different areas of visual space. The results of these studies are discussed in terms of the nature of attentional flexibility and putative neuroanatomical mechanisms which support our ability to split attention among different regions of the visual field.
Article
In an effort to examine the flexibility with which attention can be allocated in visual space, we investigated whether subjects could selectively attend to multiple noncontiguous locations in the visual field. We examined this issue by precuing two separate areas of the visual field and requiring subjects to decide whether the letters that appeared in these locations matched or mismatched while distractors that primed either the match or mismatch response were presented between the cued locations. If the distractors had no effect on performance, it would provide evidence that subjects can divide attention over noncontiguous areas of space. Subjects were able to ignore the distractors when the targets and distractors were presented as non-onset stimuli (i.e., when premasks were changed into the targets and distractors). In contrast, when the targets and distractors were presented as sudden-onset stimuli, subjects were unable to ignore the distractors. These results begin to define the conditions under which attention can be flexibly deployed to multiple noncontiguous locations in the visual field. © 1995, Association for Psychological Science. All rights reserved.
Article
Transport and grasp kinematics were examined in a task in which subjects selectively reached to grasp a target object in the presence of non-target objects. In a variety of experiments significant interference effects were observed in temporal parameters, such as movement time, and spatial parameters, such as path. In general, the presence of non-targets slowed down the reach. Furthermore, reach paths were affected such that the hand veered away from near non-targets in reaches for far targets, even though the non-targets were not physical obstacles to the reaching hand. In contrast, the hand veered towards far non-targets in near reaches. We conclude that non-targets evoke competing responses, and the inhibitory mechanisms that resolve this competition are revealed in the reach path.
Article
This chapter is concerned with the question of how attentional limits can be reconciled with a realistic view of the brain's processing capabilities. It reviews, and comments upon, some answers that have been suggested both from within capacity theory and by theorists who reject the capacity view of attention, and presents a framework for a functional view of limited capacity. The main suggestion is that the limits of attention are not due to processing limitations, but rather result from the way in which the brain solves selection problems in the control of action.
Article
Accumulating neuropsychological, electrophysiological and behavioural evidence suggests that the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object. We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.
Article
Transport of the hand towards an object and the formation of grasp are logically separable components of reaching. It has been suggested that, although the two components must be temporally co-ordinated, their spatial parameters are under the control of independent visuo-motor channels. A case study of reaching by a proficient user of a manually-operated artificial hand is presented. A pattern of natural hand usage was observed in which the index finger rather than the thumb was responsible for reduction of grasp aperture as the hand approached an object. The same pattern of usage was also observed in the artificial hand even though the mechanics of that hand make it no easier to move the finger than the thumb. This suggests that the relative stability of the thumb in the natural hand is determined, not simply by anatomy, but by a role in guiding the transport component of reaching. At least part of the spatial aspect of grasp formation is closely related to the transport component of reaching and this is evidence against theories postulating two independent visuo-motor channels controlling the spatial parameters of grasp and transport.
Article
A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.
Article
Accurate saccadic programming in natural visual scenes requires a signal designating which of the many potential targets is to be the goal of the saccade. Is this signal controlled by the allocation of perceptual attention, or do saccades have their own independent selective filter? We found evidence for the involvement of perceptual attention, namely: (1) summoning perceptual attention to a target also facilitated saccades; (2) perceptual identification was better at the saccadic goal than elsewhere; and (3) attempts to dissociate the locus of attention from the saccadic goal were unsuccessful, i.e. it was not possible to prepare to look quickly and accurately at one target while at the same time making highly accurate perceptual judgements about targets elsewhere. We also studied the trade-off between saccadic and perceptual performance by means of a novel application of the "attentional operating characteristic" (AOC) to oculomotor performance. This analysis revealed that some attention could be diverted from the saccadic goal with virtually no cost to either saccadic latency or accuracy, showing that there is a ceiling on the attentional demands of saccades. The links we discovered between saccades and attention can be explained by a model in which perceptual attention determines the endpoint of the saccade, while a separate trigger signal initiates the saccade in response to transient changes in the attentional locus. The model will be discussed in the context of current neurophysiological work on saccadic control.
Article
The role of visual information and the precise nature of the representations used in the control of prehension movements has frequently been studied by having subjects reach for target objects in the absence of visual information. Such manipulations have often been described as preventing visual feedback; however, they also impose a working memory load not found in prehension movements with normal vision. In this study we examined the relationship between working memory and visuospatial attention using a prehension task. In this study six healthy, right-handed adult subjects reached for a wooden block under conditions of normal vision, or else with their eyes closed having first observed the placement of the target. Furthermore, the role of visuospatial attention was examined by studying the effect, on transport and grasp kinematics, of placing task-irrelevant "flanker" objects (a wooden cylinder) within the visual field on a proportion of trials. Our results clearly demonstrated that the position of flankers produced clear interference effects on both transport and grasp kinematics. Furthermore, interference effects were significantly greater when subjects reached to the remembered location of the target (i.e., with eyes closed). The finding that the position of flanker objects influences both transport and grasp components of the prehension movement is taken as support for the view that these components may not be independently computed and that subjects may prepare a coordinated movement in which both transport and grasp are specifically adapted to the task in hand. The finding that flanker effects occur primarily when reaching to the remembered location of the target object is interpreted as supporting the view that attentional processes do not work efficiently on working memory representations.
Article
The spatial interaction of visual attention and saccadic eye movements was investigated in a dual-task paradigm that required a target-directed saccade in combination with a letter discrimination task. Subjects had to saccade to locations within horizontal letter strings left and right of a central fixation cross. The performance in discriminating between the symbols "E" and "∃", presented tachistoscopically before the saccade within the surrounding distractors, was taken as a measure of visual attention. The data show that visual discrimination is best when discrimination stimulus and saccade target refer to the same object; discrimination at neighboring items is close to chance level. Also, it is not possible, in spite of prior knowledge of discrimination target position, to direct attention to the discrimination target while saccading to a spatially close saccade target. The data strongly argue for an obligatory and selective coupling of saccade programming and visual attention to one common target object. The results favor a model in which a single attentional mechanism selects objects for perceptual processing and recognition, and also provides the information necessary for motor action.
Article
In reaching for an object in the environment, it has been suggested that movement components concerned with transport of the hand toward the object and those related to grasping the object are organized and executed independently. An experiment is reported that demonstrates people adjust grasp aperture to compensate for factors affecting transport error. Grasp aperture was found to be greater in reaching movements performed faster than normal, and grasp aperture was also found to be wider when reaching with the eyes closed. In both cases, transport was spatially less accurate. It is argued that, in advance of movement, formation of grasp is planned to take into account not only the perceived characteristics of the object but, also, internalized information based on past experience about the likely accuracy of the transport component.
Kritikos, A., Bennett, K., Dunai, J., & Castiello, U. (2000). Interference from distractors in reach-to-grasp movements. Quarterly Journal of Experimental Psychology, 53A(1), 131–151.
LaBerge, D., & Brown, V. (1989). Theory of attentional operations in shape identification. Psychological Review, 96, 101–124.
Duncan, J. (1996). Coordinated brain systems in selective perception and action. In T. Inui & J. L. McClelland (Eds.), Attention and performance XVI: Information integration in perception and communication (pp. 549–578). Cambridge, MA: MIT Press.