FIG 8 - uploaded by John Douglas Crawford
Evolution of four descriptors of output-layer activity across updating in the standard updating models. The means ± SD for 4 descriptors of output-layer activity from a set of 500 large-saccade (45–50°) updating trials are plotted for the Vtop, Mtop, and EV standard updating models. SDs were used because the distributions of descriptor values were unimodal and sufficiently symmetric. The single exception was the activation descriptor when mean values approached either limit (0 or 1), in which case the error bars were adjusted accordingly; this was done in all figures showing activation spreads. A: the fraction of updating performed (black) as of each time step is defined as the normalized projection of the current remapping onto the ideal remapping vector. The maximum activation (red) is the largest activation across all the output-layer units in the current time step. B: lateral shift (black), in degrees, is the distance of the center of mass of output-layer activity from the direct remapping vector running from the initial to the updated target position. Spread of activity (red), in degrees, is the root-mean-square distance of all output-layer unit activity from this center of mass.
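The descriptors in panels A and B are simple population statistics. A minimal NumPy sketch, assuming each output-layer unit has an activation and a preferred map position in degrees (all names here are illustrative, not from the model's actual code, and the spread is read as an activity-weighted RMS):

```python
import numpy as np

def descriptors(activity, positions, initial, updated):
    """Four output-layer descriptors for one time step.

    activity  : (N,) unit activations
    positions : (N, 2) preferred positions of the units, in degrees
    initial, updated : (2,) initial and updated target positions, in degrees
    """
    ideal = updated - initial                      # ideal remapping vector
    com = activity @ positions / activity.sum()    # center of mass of activity
    current = com - initial                        # remapping achieved so far
    # A, black: normalized projection of current remapping onto the ideal vector
    frac_updated = current @ ideal / (ideal @ ideal)
    # A, red: largest activation over all output-layer units
    max_activation = activity.max()
    # B, black: perpendicular distance of the center of mass from the
    # direct remapping vector (initial -> updated)
    unit = ideal / np.linalg.norm(ideal)
    lateral_shift = np.linalg.norm(current - (current @ unit) * unit)
    # B, red: activity-weighted RMS distance from the center of mass
    d2 = ((positions - com) ** 2).sum(axis=1)
    spread = np.sqrt(activity @ d2 / activity.sum())
    return frac_updated, max_activation, lateral_shift, spread
```

With all activity concentrated on a unit at the updated target position, the fraction updated is 1 and both lateral shift and spread are 0, as expected.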


Source publication
Article
Full-text available
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this...

Citations

... Since the initial discovery of P_RE ∼45 years ago [37], an assortment of models has attempted to specify a biophysical mechanism, a specific retinotopic brain structure, and/or a neural circuit underlying this ubiquitous phenomenon [38][39][40][41][42][43]. Models that opted for a less reductionist approach were carefully parametrized and/or included nonmodular components, with simulations ultimately conforming to an experimental demand (i.e., one form of remapping over another, or a combination of both) [44][45][46]. Indeed, these approaches, as recent reviews demonstrate [24,25], have fundamentally obscured the emergent simplicity and elegance of the phenomenon [47]. ...
Article
Full-text available
This article is part of the Physical Review Research collection titled Physics of Neuroscience.
... This model class predicts that the inherent reference frame of coding within a brain area is fixed across time. Alternative models take advantage of the dynamic nature of brain signals and suggest that sensorimotor transformations can be carried out over time within a single area receiving all relevant inputs (Denève et al. 2007; Keith et al. 2010; Schneegans and Schöner 2012). If this were true, we would expect the spatial coding scheme within a given brain area to change over time. ...
Article
Movement planning involves transforming sensory signals into a command in motor coordinates. Surprisingly, the real-time dynamics of sensorimotor transformations at the whole-brain level remain unknown, in part due to the spatiotemporal limitations of fMRI and neurophysiological recordings. Here, we used magnetoencephalography (MEG) during pro-/anti-wrist pointing to determine (1) the cortical areas involved in transforming visual signals into appropriate hand motor commands, and (2) how this transformation occurs in real time, both within and across the regions involved. We computed sensory, motor, and sensorimotor indices in 16 bilateral brain regions for direction coding based on hemispherically lateralized de/synchronization in the α (7–15 Hz) and β (15–35 Hz) bands. We found a visuomotor progression, from pure sensory codes in ‘early’ occipital-parietal areas, to a temporal transition from sensory to motor coding in the majority of parietal-frontal sensorimotor areas, to a pure motor code, in both the α and β bands. Further, the timing of these transformations revealed a top-down pro/anti cue influence that propagated ‘backwards’ from frontal through posterior cortical areas. These data directly demonstrate a progressive, real-time transformation both within and across the entire occipital-parietal-frontal network that follows specific rules of spatial distribution and temporal order.
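The logic behind separating sensory from motor coding with pro-/anti- trials can be sketched simply: anti trials dissociate the visual and movement directions, so the lateralization that follows the stimulus versus the movement can be split into two components. A toy version (the index definition below is an assumption for illustration; the study's actual indices may differ):

```python
def sensorimotor_index(lat_pro, lat_anti):
    """Toy sensory/motor index from hemispherically lateralized band power.

    lat_pro, lat_anti : lateralized de/synchronization (contra - ipsi),
    signed by the VISUAL target side, for pro- and anti- trials.
    In pro trials visual and motor directions agree; in anti trials they
    are opposite, so a purely sensory code gives lat_anti == lat_pro,
    while a purely motor code gives lat_anti == -lat_pro.
    Returns a value in [-1, 1]: -1 pure sensory, +1 pure motor.
    """
    sensory = (lat_pro + lat_anti) / 2   # component following the stimulus
    motor = (lat_pro - lat_anti) / 2     # component following the movement
    denom = abs(sensory) + abs(motor)
    return (abs(motor) - abs(sensory)) / denom if denom else 0.0
```

A region whose lateralization flips sign on anti trials would score +1 (motor), while one that ignores the pro/anti instruction would score -1 (sensory); the reported sensory-to-motor transition corresponds to this index drifting upward over time.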
... This model class predicts that the inherent reference frame of coding within a brain area is fixed across time. Alternative models take advantage of the dynamic nature of brain signals and suggest that sensorimotor transformations can be carried out over time within a single area receiving all relevant inputs (Denève et al. 2007; Keith et al. 2010; Schneegans and Schöner 2012). If this were true, we would expect the spatial coding scheme within a given brain area to change over time. ...
Preprint
Full-text available
Planning an accurate reach involves the transformation of the neural representation of target location in sensory coordinates into a command for hand motion in motor coordinates. Although imaging techniques such as fMRI reveal the cortical topography of such transformations, and neurophysiological recordings provide local dynamics, we do not yet know the real-time dynamics of sensorimotor transformations at the whole-brain level. We used high-spatiotemporal-resolution magnetoencephalography (MEG) during a pro-/anti-reaching task to determine (1) which brain areas are involved in transforming visual signals into appropriate motor commands for the arm, and (2) how this transformation occurs on a millisecond time scale, both within and across the regions involved. We performed time-frequency response analysis and identified 16 bilateral brain regions using adaptive hierarchical clustering (Alikhanian et al. 2013). We then computed sensory, motor, and sensorimotor indices for direction coding based on hemispherically lateralized de/synchronization in the α (7–15 Hz) and β (15–35 Hz) bands.
... The shift of visual inputs on the retina caused by a saccade is no different from a shift caused by objects moving in real space, yet we do not mistake one for the other. Understanding how the brain achieves visual stability across the saccade has been a challenge to both experimental (Sommer and Wurtz, 2008; Hall and Colby, 2011; Wurtz et al., 2011) and theoretical (Quaia et al., 1998; Keith et al., 2010; Ziesche and Hamker, 2014) neuroscience for decades. It has been suggested that the brain solves this problem by utilizing an efference copy of the motor command responsible for a saccade, called corollary discharge (CD), to compensate in advance for the disturbance brought by the saccade (Sommer and Wurtz, 2002, 2006; Sun and Goldberg, 2016). ...
Article
Full-text available
Our eyes move constantly, at a frequency of 3–5 times per second. These movements, called saccades, induce sweeping of visual images across the retina, yet we perceive the world as stable. It has been suggested that the brain achieves this visual stability via predictive remapping of neuronal receptive fields (RFs). A recent experimental study disclosed details of this remapping process in the lateral intraparietal area (LIP): around the time of the saccade, the neuronal RF expands along the saccadic trajectory temporally, covering the current RF (CRF), the future RF (FRF), and the region the eye will sweep through during the saccade. A cortical wave (CW) model was also proposed, which attributes the RF remapping to neural activity propagating in the cortex, triggered jointly by a visual stimulus and the corollary discharge (CD) signal responsible for the saccade. In this study, we investigate how this CW model can be learned naturally from visual experience during brain development. We build a two-layer network, with one layer consisting of LIP neurons and the other of superior colliculus (SC) neurons. Initially, neuronal connections are random and non-selective. A saccade causes a static visual image to sweep passively across the retina, creating the effect of the visual stimulus moving in the direction opposite to the saccade. According to the spike-timing-dependent plasticity (STDP) rule, the connection path between LIP neurons in the direction opposite to the saccade and the connection path from SC to LIP are enhanced. Over many such visual experiences, the CW model develops, generating the peri-saccadic RF remapping in LIP observed in the experiment.
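The learning rule invoked here can be sketched with a standard pair-based STDP update. This is a generic illustration, not the paper's implementation; the parameter values (a_plus, a_minus, tau) are assumptions:

```python
import numpy as np

def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Additive pair-based STDP on one weight (times in ms).

    Potentiate when the presynaptic spike precedes the postsynaptic one
    (dt > 0), depress otherwise. A passive image sweep opposite to the
    saccade makes neurons along that path fire in sequence, so their
    forward connections strengthen over repeated saccades.
    """
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)   # pre before post: LTP
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)   # post before pre: LTD
    return float(np.clip(w + dw, 0.0, 1.0))
```

Applied to a chain of retinotopic units activated in sequence by the post-saccadic sweep, this asymmetry selectively strengthens connections pointing opposite to the saccade direction, which is the substrate of the cortical wave.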
... To assess the generality of this result, we calculated changes in coherence during the same perisaccadic time period in standard frequency bands: theta (4–6 Hz), alpha (7–12 Hz), beta (15–34 Hz), gamma, and high gamma (80–150 Hz) for 75 additional current-future field pairs. As in the example recordings, the alpha band consistently showed higher increases in relative coherence around the time of a 10° leftward saccade (P << 0.0001, right-tailed two-sample t test; n = 75; Fig. 2E). ...
... Alpha-Phase Relationships Could Support Remapping. Enhanced coherence could serve to transfer visual information from the future field site to the current field site (21), as suggested by some theories of remapping (12, 22–24). To test this idea, we calculated the phase differences between perisaccadic alpha oscillations at the two sites. ...
... The question naturally arises as to whether oscillatory coherence is a cause or a consequence of remapping. Theoretical models have shown how networks of laterally connected neurons can implement remapping (12, 22–24), and coherent alpha oscillations might emerge as a side effect of these operations. In that case, alpha coherence would not be necessary for remapping, although it could still facilitate the transmission of the remapped signals to other brain areas (21). ...
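The coherence and phase-difference measures described in these excerpts can be approximated with standard spectral estimators. A sketch using SciPy's Welch-based cross-spectral density (the band limits and segment length here are assumptions, not the study's analysis parameters):

```python
import numpy as np
from scipy.signal import csd, welch

def band_coherence(x, y, fs, band=(7, 12)):
    """Magnitude-squared coherence and mean phase lag (radians) between
    two recording sites, averaged over a frequency band (alpha here).
    x, y : 1-D signals sampled at fs Hz. Illustrative only.
    """
    f, pxy = csd(x, y, fs=fs, nperseg=256)      # cross-spectral density
    _, pxx = welch(x, fs=fs, nperseg=256)       # auto-spectra
    _, pyy = welch(y, fs=fs, nperseg=256)
    sel = (f >= band[0]) & (f <= band[1])
    coh = np.abs(pxy[sel]) ** 2 / (pxx[sel] * pyy[sel])
    phase = np.angle(pxy[sel])                  # phase of x relative to y
    return coh.mean(), phase.mean()
```

Two sites sharing a common 10 Hz component should show alpha coherence near 1 with near-zero phase lag, while independent noise drives coherence toward its bias floor.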
Article
Significance Humans and other primates make frequent eye movements to inspect their surroundings. Consequently, stimuli that are stable in the environment are constantly changing position on the retina. One way for the brain to compensate for these changes is a mechanism called receptive field remapping, which allows individual neurons to encode the same object both before and after each saccade. Here, we provide a possible mechanism for remapping: Simultaneous recordings from cortical sites encoding the presaccadic and postsaccadic locations of a visual stimulus reveal coherent oscillations in the alpha frequency band. Because coherent oscillations are thought to play a role more generally in routing information within the brain, our findings provide a framework for understanding stable visual perception during eye movements.
... As indicated by the flat Midpoint activity lines in Figure 8, there was no such spread in our model. Remapping in the simulated FEF sheet involved a jump in activity from the Receptive Field to the Future Field, as found for the real FEF (Sommer and Wurtz, 2006) and reported in numerous other physiological and computational studies (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002; Keith and Crawford, 2008; Keith et al., 2010). The jump in activity from FEF neurons representing the Receptive Field to those representing the Future Field was not instantaneous, however. ...
... In a series of models, Keith and colleagues trained a feed-forward neural network to perform remapping in single time steps and incorporated temporal dynamics by means of recurrent connections (Keith et al., 2007; Keith and Crawford, 2008). The recurrent network performed a double-step task using individual signals known to exist in the brain, such as visual error, motor bursts that begin prior to the saccade, and the saccade velocity (Keith et al., 2010). The population dynamics of the trained model depended on the updating signal. ...
... Some of the prior modeling studies of spatial updating focused on briefly flashed probes and the remapping of visual memory (White and Snyder, 2004; Keith et al., 2010; Schneegans and Schöner, 2012). A subset of this class of models (for review, see Hamker et al., 2008; Hamker, 2011, 2014) aimed to explain an intriguing illusion, transsaccadic mislocalization, that can accompany the viewing of brief flashes (Dassonville et al., 1992; Kaiser and Lappe, 2004; Jeffries et al., 2007). ...
Article
Full-text available
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was “useful stability,” quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies.
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
... Since experimental work is extremely difficult and often (in animal experiments) can only target selected areas and signals at one time, it is important to have a theoretical framework to guide such experiments. Past theoretical efforts have used control-system-type models to explain the spatiotemporal and geometric aspects of updating (Quaia et al., 1998; Optican and Quaia, 2002; Blohm et al., 2006; Cromer and Waitzman, 2006; Van Pelt and Medendorp, 2007), and neural network models to predict specific signals (Zipser and Andersen, 1988; Snyder, 2004, 2007; Keith et al., 2010). However, there is still no general theoretical framework for spatial updating and remapping. ...
... The proposed model aims to study the dynamics of spatial updating through time in two types of eye movements: saccades and smooth pursuit. As in previous models on this general topic (Zipser and Andersen, 1988; Snyder, 2004, 2007; Keith et al., 2010), we aimed to model this system at a level that bridges the computational and algorithmic levels (Marr, 1982), and made no attempt to model mechanisms at the biophysical level. To simulate the dynamics of the neural mechanism during smooth pursuit and saccades, we used two novel approaches: we developed an SSM, and we used a dual Extended Kalman filter (DEKF) approach (Nelson, 1997, 2001). ...
... In other words, the visual input is used to initialize the neurons in the state space. Note that, like most previous models of spatial updating (e.g., Keith et al., 2010), we used a homogeneous retinotopic map in our model, which is a simplification of the actual SC map (Cynader and Berman, 1972; Munoz and Wurtz, 1993a,b). This simplification reduced the computational complexity of the model without interfering with its ability to simulate spatial updating (see Section 3). ...
Article
Full-text available
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a nonlinear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
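The dual-EKF machinery is elaborate, but its core predict/correct cycle for a remembered target position can be sketched in a scalar, linear form (a toy stand-in for the paper's nonlinear SSM; the noise parameters q and r are assumptions):

```python
def update_memory(x, P, eye_vel, dt, q=1e-4):
    """Predict step of a gaze-centered memory state: the remembered target
    position shifts opposite to eye motion, and the variance P grows with
    process noise q between measurements."""
    x = x - eye_vel * dt      # counter-rotate memory against the eye movement
    P = P + q                 # uncertainty accumulates while no target is seen
    return x, P

def correct_memory(x, P, z, r=1e-2):
    """Measurement step: fuse a visible target position z with the memory,
    weighting by the relative uncertainties (scalar Kalman update)."""
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P
```

The growth of P between corrections is what yields the paper's prediction that population activity and receptive fields broaden perisaccadically: more input uncertainty means a wider posterior over remembered position.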
... Here, in a more direct comparison of responses during SP, we found a ratio of 75% between responses to actively attended visual targets vs. updated memory targets. This is consistent with the need to internally reconstruct updating signals (Droulez and Berthoz, 1991; Keith et al., 2010), but it may also be complementary to visual responses in normal lighting conditions. We will return to the latter topic below. ...
... The visual inputs to the SC are well described and include both retino-tectal projections and projections from occipital and parietal cortex (May, 2006). Once such stimuli have activated the SC, many computational models have proposed that attended stimuli might be maintained by recurrent connections, both intrinsic to structures like the SC and between different regions, during both fixation and spatial updating tasks (Xing and Andersen, 2000; Keith et al., 2010). These signals could involve both intrinsic SC circuits and projections from cortical and sub-cortical regions including the caudate nucleus, substantia nigra, dorsolateral prefrontal cortex (DLPFC), frontal eye fields (FEF), and the lateral intraparietal area (LIP; Hikosaka and Wurtz, 1983; Hikosaka and Sakamoto, 1986; Gnadt and Andersen, 1988; Funahashi et al., 1989; Goldman-Rakic, 1995; Constantinidis and Steinmetz, 1996; Compte et al., 2003; Armstrong et al., 2009). ...
... Based on behavioral observations, Blohm et al. (2006) proposed a model that used the delayed integration of the eye velocity signal to obtain an eye displacement signal prior to updating the target location. Furthermore, consistent with our results, other computational modeling studies showed that when eye velocity was used as the updating signal, there was a continuous moving hill of activity across a two-dimensional topographic representation of visual space (Droulez and Berthoz, 1991; Keith et al., 2010). Various cortical and cerebellar regions, including the ventral intraparietal area (VIP), FEF, medial superior temporal area (MST), oculomotor vermis, and ventral paraflocculus, carry velocity signals that could be the source of the updating signal and reach the SC directly or indirectly (Fukushima et al., 1999; Ilg and Thier, 2003; Schlack et al., 2003; Medina and Lisberger, 2007; Dash et al., 2012). ...
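Velocity-based updating of a "moving hill" can be sketched as simple integration of an eye-velocity signal across time steps, sliding a Gaussian activity profile over the map (an illustrative toy, not the cited models' actual dynamics; all names are my own):

```python
import numpy as np

def moving_hill(center, eye_vel, dt, n_steps):
    """Integrate an eye-velocity signal to slide the hill's center across
    a retinotopic map: the center shifts opposite to the eye movement on
    each step, tracing a continuous path rather than a discrete jump."""
    path = [np.asarray(center, dtype=float)]
    for _ in range(n_steps):
        path.append(path[-1] - np.asarray(eye_vel, dtype=float) * dt)
    return np.array(path)

def hill(grid_x, grid_y, center, sigma=5.0):
    """Gaussian activity profile on the map around the current center."""
    d2 = (grid_x - center[0]) ** 2 + (grid_y - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

After pursuing at 20°/s for 0.5 s, a target remembered at 10° right of the fovea ends up represented at the fovea, having passed continuously through every intermediate map location; this is the signature that distinguishes velocity-driven updating from saccade-like jumps.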
Article
Full-text available
In realistic environments, keeping track of multiple visual targets during eye movements likely involves an interaction between vision, top-down spatial attention, memory, and self-motion information. Recently we found that the superior colliculus (SC) visual memory response is attention-sensitive and continuously updated relative to gaze direction. In that study, animals were trained to remember the location of a saccade target across an intervening smooth pursuit (SP) eye movement (Dash et al., 2015). Here, we modified this paradigm to directly compare the properties of visual and memory updating responses to attended and unattended targets. Our analysis shows that during SP, active SC visual vs. memory updating responses share similar gaze-centered spatio-temporal profiles (suggesting a common mechanism), but updating was weaker by ~25%, delayed by ~55 ms, and far more dependent on attention. Further, during SP the sum of passive visual responses (to distracter stimuli) and memory updating responses (to saccade targets) closely resembled the responses for active attentional tracking of visible saccade targets. These results suggest that SP updating signals provide a damped, delayed estimate of attended location that contributes to the gaze-centered tracking of both remembered and visible saccade targets.
... Finally, because these trials were generally associated with smaller saccades, and it was the largest saccades that mostly drove the effect in our cross-field results, this might in part be explained by the finding that suppression increased with saccade size. Larger saccades take longer to produce (Abrams et al., 1989), so this would allow more time for TMS to influence the signals (saccade efference copies and other computations) associated with remapping (Keith, Blohm & Crawford, 2010). Larger saccades have also been shown to reduce performance during egocentric updating and trans-saccadic integration tasks in the absence of TMS, presumably due to noisy internal signals (Byrne & Crawford, 2010; Prime et al., 2007). ...
Article
Full-text available
Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.
... As the onset of the first saccade approaches, however, motor signals relating to the actual execution of the current saccade may increasingly affect the localization judgment. Similarly, recent modeling of spatial updating around two-saccade sequences suggests that various motor signals related to the eye movement motor plan and execution can each contribute to spatial updating within the visual system (Keith, Blohm, & Crawford, 2010). Overall, such results indicate that the translational shifts must occur due to a combination of the presence of visual anchors and the impending motor commands for making a saccade sequence, which is different from the localization errors induced by a blink. ...
Article
Full-text available
Various studies have identified systematic errors, such as spatial compression, when observers report the locations of objects displayed around the time of saccades. Localization errors also occur when holding spatial representations in visual working memory. Such errors, however, have not been examined in the context of eye blinks. In this study, we examined the effects of blinks and saccades when observers reproduced the locations of a set of briefly presented, randomly placed discs. Performance was compared with a fixation-only condition in which observers simply held these representations in working memory for the same duration; this allowed us to elucidate the relationship between the perceptual phenomena related to blinks, saccades, and visual working memory. Our results indicate that the same amount of spatial compression is experienced prior to a blink as is experienced in the control fixation-only condition, suggesting that blinks do not increase compression above that occurring from holding a spatial representation in visual memory. Saccades, however, tend to increase these compression effects and produce translational shifts both toward and away from saccade targets (depending on the time of the saccade onset in relation to the stimulus offset). A higher numerosity recall capacity was also observed when stimuli were presented prior to a blink in comparison with the other conditions. These findings reflect key differences underlying blinks and saccades in terms of spatial compression and translational shifts. Such results suggest that separate mechanisms maintain perceptual stability across these visual events.