FIG 5 - uploaded by Guy A Orban
Response modulation by bumps and dents at different mean stereoscopic depths. A and B: exemplar cells (the same as in Fig. 2, A and B, respectively). The responses of the cells to the normal flat stimuli are included for comparison. The bump/dent icons (top) illustrate the positioning of the corresponding stimuli in the disparity space. For the bump stimuli, the and r values were 15 and 4.7 spikes/s, respectively, for the cell in A and 79 and 2.8 spikes/s, respectively, for the cell in B. For the dent stimuli, the and r values were 3.1 and 2.8 spikes/s, respectively, for the cell in A, and 87 and 2.6 spikes/s, respectively, for the cell in B. C: response modulation by bumps vs. dents. The modulation of responses across depth was measured for the bump and dent stimuli using the indices RMI bump and RMI dent as described in METHODS; the results are plotted using the same conventions as in Fig. 2.


Source publication
Article
Full-text available
Differences in the horizontal positions of retinal images--binocular disparity--provide important cues for three-dimensional object recognition and manipulation. We investigated the neural coding of three-dimensional shape defined by disparity in anterior intraparietal (AIP) area. Robust selectivity for disparity-defined slanted and curved surfaces...

Contexts in source publication

Context 1
... the responses of many V4 cells were modulated for bump and/or dent stimuli across stereoscopic depth (i.e., across mean disparities), as illustrated by the exemplar cells in Fig. 5, A and B. For both cells, the responses were modulated across the various dent stimuli (1-way ANOVA, P < 0.05; ---), whereas the response modulation across bump stimuli (-) was statistically insignificant (P > 0.05) for the cell shown in A but significant for the cell in ...
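The 1-way ANOVA referenced in this snippet tests whether mean firing rates differ across stimulus conditions (here, mean disparities). Below is a minimal, generic sketch of the underlying F-statistic; this is a standard textbook computation, not the authors' analysis code, and the function name and per-trial rates are illustrative.

```python
# Generic one-way ANOVA F-statistic over per-condition trial lists.
# Each element of `groups` holds trial firing rates for one condition.

def f_oneway(groups):
    """Return the one-way ANOVA F-statistic for a list of samples."""
    k = len(groups)                         # number of conditions
    n = sum(len(g) for g in groups)         # total number of trials
    grand = sum(x for g in groups for x in g) / n
    # Between-group sum of squares: variation of condition means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: trial-to-trial variation.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value would then be compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain the P value reported in the text.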
Context 2
... responses of the exemplar cell in Fig. 5A to curved stimuli are not readily predictable from its responses to flat stimuli. This is especially clear for the response to the bump stimulus at a mean disparity of 0.25°, which elicited a response larger than that elicited by any zero-order disparity stimuli at any depth. Furthermore, the cell's responses to the zero-order ...
Context 3
... responses of the cell in Fig. 5B to curved stimuli were qualitatively similar to the tuned inhibitory profile for flat stimuli except that the dent profile was shifted to the left and the bump profile was shifted to the right. These shifts would be expected if the disparities for the central portion of the curved stimuli dominated the responses. However, the responses ...
Context 4
... larger than the responses to any of the zero-disparity stimuli intersected by these stimuli. To assess whether the responses of this cell were strongly dependent on stimulus position, we examined responses across the four jitter positions using a two-way ANOVA (jitter × stimulus). The jitter factor was statistically insignificant for both cells in Fig. 5, A and B (P = 0.7 and 0.14, respectively), indicating that tuning profiles were not significantly dependent on jitter position. Taken together, these analyses suggest that significant nonlinear interactions contribute to the responses to bump and dent stimuli in these example cells. However, these two cells showed the most pronounced ...
Context 5
... measured the modulation of each V4 cell to the bump and dent stimuli across various stereoscopic depths using the RMI bump and RMI dent, respectively (see METHODS). The results are shown in Fig. 5C. Across the population, the responses of 42 (35%) and 37 cells (31%) were significantly modulated for the bumps and dents across the various depths, respectively (filled bars in the histograms on the x and the y axes, respectively). Fifty-nine cells (50%) were tuned for either or both types of stimuli. The tuning profiles for bumps ...
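The exact definitions of RMI bump and RMI dent are given in the paper's METHODS, which are not reproduced on this page. A common form for such a modulation index is the contrast between the largest and smallest mean responses across depths; the sketch below assumes that form and is purely hypothetical, not the paper's definition.

```python
# Hypothetical response modulation index (RMI), assuming the common
# contrast form (Rmax - Rmin) / (Rmax + Rmin) over the mean firing
# rates measured at each stereoscopic depth.

def rmi(mean_rates):
    """Modulation index in [0, 1] for a list of mean firing rates.

    0 means no modulation across depths; values near 1 mean the
    response nearly vanishes at the least effective depth.
    """
    r_max, r_min = max(mean_rates), min(mean_rates)
    return (r_max - r_min) / (r_max + r_min)

# Example: 12 spikes/s at the best depth and 4 spikes/s at the worst
# gives a moderate modulation of 0.5.
rmi_bump = rmi([12.0, 8.0, 4.0, 6.0])
```

Under this assumed definition, a cell counted as "significantly modulated" in Fig. 5C would combine a sizable RMI with a significant ANOVA across depths.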
Context 6
... (inset), and the numbers of cells denoted by the various symbols are indicated next to the plot. The black numbers inside a given sector denote the total number of cells in each sector; the gray numbers denote the number of cells within the inner triangle of each sector. Exemplar cells in Figs. 2, A and B (which were the same as those in Fig. 5, A and B, respectively), and 4, A and B, are denoted by arrows. B: the sector plot of the responses to the most effective stimulus from each subclass, plotted using the same conventions as in A except for the plotting symbols. In this panel, the plotting symbols denote whether the response to the preferred stimulus in a given subclass ...

Similar publications

Article
Full-text available
Perceptual closure refers to the coherent perception of an object under circumstances when the visual information is incomplete. Although the perceptual closure index observed in electroencephalography reflects that an object has been recognized, the full spatiotemporal dynamics of cortical source activity underlying perceptual closure processing r...
Article
Full-text available
Electrophysiological and behavioral studies in many species have demonstrated mirror-image confusion for objects, perhaps because many objects are vertically symmetric (e.g., a cup is the same cup when seen in left or right profile). In contrast, the navigability of a scene changes when it is mirror reversed, and behavioral studies reveal high sens...
Article
Full-text available
Neurons in the primary visual cortex typically reach their highest firing rate after an abrupt image transition. Since the mutual information between the firing rate and the currently presented image is largest during this early firing period it is tempting to conclude this early firing encodes the current image. This view is, however, made more co...
Article
Full-text available
Achromatic visual information is transferred from the retina to the brain through two parallel channels: ON-center cells carry "white" information and OFF-center cells "black" information (Nelson et al., 1978; Schiller, 1982; Schiller et al., 1986). Responses of ON and OFF retinal and thalamic neurons are approximately equal in magnitude (Krüger an...
Article
Full-text available
Analysis of the movement of a complex visual stimulus is expressed in the responses of pattern-direction-selective neurons in area MT, which depend in turn on directionally selective inputs from area V1. How do MT neurons integrate their inputs? Pattern selectivity in MT breaks down when the gratings comprising a moving plaid are presented to non-o...

Citations

... Similar to F5c AOENs, 76% of AIP neurons responding during grasping observation and execution were also active during observation of an ellipse, even when moving on a scrambled background, and most of these AIP neurons responded maximally when the ellipse appeared close to the to-be-grasped object. A similar correspondence in neuronal selectivity between anatomically connected [41] parietal and F5 subsectors has been reported for 3D [28,42,43] and 2D [27,44] shape. Future studies will have to determine to what extent the different subsectors of F5 respond differently during action observation at the single-neuron level [9,45]. ...
Article
Full-text available
Neurons responding during action execution and action observation were discovered in the ventral premotor cortex 3 decades ago. However, the visual features that drive the responses of action observation/execution neurons (AOENs) have not been revealed at present. We investigated the neural responses of AOENs in ventral premotor area F5c of 4 macaques during the observation of action videos and crucial control stimuli. The large majority of AOENs showed highly phasic responses during the action videos, with a preference for the moment that the hand made contact with the object. They also responded to an abstract shape moving towards but not interacting with an object, even when the shape moved on a scrambled background, implying that most AOENs in F5c do not require the perception of causality or a meaningful action. Additionally, the majority of AOENs responded to static frames of the videos. Our findings show that very elementary stimuli, even without a grasping context, are sufficient to drive responses in F5c AOENs.
... Previous studies show that grasp-relevant representations are distributed across ventral and dorsal visual processing streams. Shape is represented throughout both streams (Sereno et al., 2002; Orban et al., 2006; Konen and Kastner, 2008; Orban, 2011), with dorsal representations emphasizing information required for grasp planning (Srivastava et al., 2009). For example, dorsomedial area V6A, located in human superior parieto-occipital cortex (SPOC), is involved in selecting hand orientation given object shape (Fattori et al., 2004, 2009, 2010; Monaco et al., 2011). ...
Article
Full-text available
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor’s body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors: grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral-stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects. Significance Statement Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. 
Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
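The Representational Similarity Analysis mentioned in the abstract above compares the geometry of neural response patterns against model predictions. The following is a minimal pure-Python sketch of the core step, assuming correlation distance as the dissimilarity measure; all function names are illustrative and this is not the study's actual pipeline.

```python
# Core RSA step: build a representational dissimilarity matrix (RDM)
# from per-condition response patterns, then correlate it with a model
# RDM. Higher correlation = the model better explains the neural data.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Lower-triangle correlation-distance RDM from condition patterns."""
    out = []
    for i in range(len(patterns)):
        for j in range(i):
            out.append(1.0 - pearson(patterns[i], patterns[j]))
    return out

def rsa_score(neural_patterns, model_rdm):
    """Correlate the neural RDM with a model RDM."""
    return pearson(rdm(neural_patterns), model_rdm)
```

In the study's terms, a factor such as grasp axis is "encoded" in a region when the model RDM built from that factor correlates with the RDM of that region's voxel patterns.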
... One possibility to explore is that representations of relative disparity gradients that begin to emerge in V3A are solidified in PIP before 3D pose tuning is more widely achieved in CIP. (Correspondence problem: the problem of identifying matching features in the two retinal images that correspond to a common feature in the world.) It is also likely that 3D visual representations are further transformed downstream of CIP in the anterior intraparietal (AIP) area, as well as in premotor cortex, to estimate 3D shape (Durand et al. 2007, Srivastava et al. 2009, Theys et al. 2012) and support prehensile object manipulations (Murata et al. 2000). ...
... This may indicate that CIP influences 3D processing in STSv via connections with AIP (Borra et al. 2008, Webster et al. 1994) (Figure 5). Notably, 3D object shape is processed in both AIP and STSv, but whereas this selectivity emerges more quickly in AIP, it is finer in STSv (Srivastava et al. 2009, Verhoef et al. 2010). This distinction may reflect differences in the temporal urgency of action versus the refinement of 3D object representations. ...
Article
Full-text available
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision. Expected final online publication date for the Annual Review of Vision Science, Volume 9 is September 2023. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
... Previous studies show that grasp-relevant representations are distributed across ventral and dorsal visual processing streams. Shape is represented throughout both streams (Sereno et al., 2002; Orban et al., 2006; Konen and Kastner, 2008; Orban, 2011), with dorsal representations emphasizing information required for grasp planning (Srivastava et al., 2009). For example, dorsomedial area V6A, located in human superior parieto-occipital cortex (SPOC), is involved in selecting hand orientation given object shape (Fattori et al., 2004, 2009, 2010; Monaco et al., 2011). ...
Preprint
Full-text available
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor’s body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors: grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral-stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects. Significance Statement Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. 
Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
... Area 7b has a strong tactile component, but a third of its tactile neurons responding to tactile stimulation applied to the face or arm also respond to visual stimuli presented near those body parts (Hyvärinen, 1981; Hyvärinen and Shelepin, 1979) as well as to motor activity, from simple grasping movements to more complex action sequences such as 'bring this fruit in my mouth' (Fogassi and Luppino, 2005; Hyvärinen and Poranen, 1974; Hyvärinen and Shelepin, 1979; Leinonen, 1979; Leinonen and Nyman, 1979; Robinson et al., 1978). Area AIP has neurons selective for 3D objects and/or 2D object components depending on where the visual stimulus occurs (Durand et al., 2007; Murata et al., 2000; Romero et al., 2012, 2013; Sakata and Taira, 1994; Srivastava et al., 2009; Theys et al., 2012; Verhoef et al., 2010). Premotor area F5 has canonical neurons responding to hand-grasping, containing highly overlapping movement representations of the mouth and hand, and visual selectivity matching their motor selectivity (Fogassi et al., 2001; Hepp-Reymond et al., 1994; Murata et al., 1997; Raos et al., 2006; Rizzolatti et al., 1988). ...
... 3D perception is a sequential process involving depth order, depth interval, and 3D representation, each of which is specific to a different stage of cortical processing (Anzai and DeAngelis, 2010). Neurobiological evidence suggests that depth order is related to early visual areas, while depth interval and 3D representation tend to be processed in higher cortical areas (Anzai et al., 2011; Anzai and DeAngelis, 2010; Srivastava et al., 2009). Our time-domain results indicate that the increased depth cues from stereoscopic vision amplify P2 components over both occipital and parietal areas, which is consistent with the literature (Cepeda-Freyre et al., 2020). ...
Article
Full-text available
Currently, vision-related neuroscience studies are undergoing a trend from simplified image stimuli toward more naturalistic stimuli. Virtual reality (VR), as an emerging technology for visual immersion, provides more depth cues for three-dimensional (3D) presentation than two-dimensional (2D) image. It is still unclear whether the depth cues used to create 3D visual perception modulate specific cortical activation. Here, we constructed two visual stimuli presented by stereoscopic vision in VR and graphical projection with 2D image, respectively, and used electroencephalography to examine neural oscillations and their functional connectivity during 3D perception. We find that neural oscillations are specific to delta and theta bands in stereoscopic vision and the functional connectivity in the both bands increase in cortical areas related to visual pathways. These findings indicate that low-frequency oscillations play an important role in 3D perception with depth cues.
... Interesting possibilities regarding dorsal involvement in visual object perception are bolstered by the anatomical connectivity (Takemura et al., 2016; Cloutman, 2013) and cross-talk (Hutchison & Gallivan, 2018; Janssen, Verhoef, & Premereur, 2018; de Haan & Cowey, 2011; Schenk & McIntosh, 2010) between the two pathways as well as the finding that neural signals propagate faster through the dorsal relative to the ventral pathway (Sim, Helbig, Graf, & Kiefer, 2015; Srivastava, Orban, De Mazière, & Janssen, 2009; Norman, 2002). Together, these functional and anatomical properties give rise to the possibility that object representations computed first in the dorsal pathway may be capable of priming or otherwise modulating object-related computations in the ventral pathway via feedback. ...
Article
Full-text available
Visual object perception involves neural processes that unfold over time and recruit multiple regions of the brain. Here, we use high-density electroencephalography (EEG) to investigate the spatiotemporal representations of object categories across the dorsal and ventral pathways. In Experiment 1, human participants were presented with images from two animate object categories (birds and insects) and two inanimate categories (tools and graspable objects). In Experiment 2, participants viewed images of tools and graspable objects from a different stimulus set, one in which a shape confound that often exists between these categories (elongation) was controlled for. To explore the temporal dynamics of object representations, we employed time-resolved multivariate pattern analysis on the EEG time series data. This was performed at the electrode level as well as in source space of two regions of interest: one encompassing the ventral pathway and another encompassing the dorsal pathway. Our results demonstrate shape, exemplar, and category information can be decoded from the EEG signal. Multivariate pattern analysis within source space revealed that both dorsal and ventral pathways contain information pertaining to shape, inanimate object categories, and animate object categories. Of particular interest, we note striking similarities obtained in both ventral stream and dorsal stream regions of interest. These findings provide insight into the spatio-temporal dynamics of object representation and contribute to a growing literature that has begun to redefine the traditional role of the dorsal pathway.
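The time-resolved multivariate pattern analysis described in this abstract trains a decoder at every time point of the EEG series and tracks when category information becomes decodable. Below is a deliberately simplified toy of one time point, using a leave-one-out nearest-centroid decoder; real EEG decoding uses cross-validated linear classifiers over many sensors and trials, and all names and data shapes here are illustrative.

```python
# Toy time-resolved MVPA step: leave-one-out nearest-centroid decoding
# of stimulus category from sensor patterns at a single time point.

def decode_timepoint(trials, labels):
    """Leave-one-out nearest-centroid accuracy for one time point.

    `trials` is a list of sensor-pattern vectors (one per trial);
    `labels` gives each trial's stimulus category.
    """
    correct = 0
    for i, (x, y) in enumerate(zip(trials, labels)):
        # Class centroids computed without the held-out trial i.
        centroids = {}
        for lab in set(labels):
            rest = [t for j, (t, l) in enumerate(zip(trials, labels))
                    if j != i and l == lab]
            centroids[lab] = [sum(c) / len(rest) for c in zip(*rest)]
        # Predict the class with the nearest centroid (squared distance).
        pred = min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, centroids[lab])))
        correct += pred == y
    return correct / len(trials)
```

Running this at each sample of the epoch yields an accuracy time course; above-chance stretches indicate when the category is represented in the signal.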
... While the dorsal stream is considered to primarily support visuomotor functions, there is plenty of evidence showing that the object features and identities could be encoded across the dorsal pathway. The brain regions containing object representations in the dorsal pathway overlap extensively with the visuomotor system (Murata et al. 2000), suggesting that those object representations may be associated with action encoding (Srivastava et al. 2009;Erlikhman et al. 2018). In other words, the neural representations of goal-directed actions consist of both body actions and the goal-objects, as well as their interaction. ...
Article
The human brain can efficiently process action-related visual information, which supports our ability to quickly understand and learn others’ actions. The visual information of goal-directed action is extensively represented in the parietal and frontal cortex, but how actions and goal-objects are represented within this neural network is not fully understood. Specifically, which part of this dorsal network represents the identity of goal-objects? Is such goal-object information encoded at an abstract level or highly interactive with action representations? Here, we used functional magnetic resonance imaging with a large number of participants (n = 94) to investigate the neural representation of goal-objects and actions when participants viewed goal-directed action videos. Our results showed that the goal-directed action information could be decoded across much of the dorsal pathway, but in contrast, the invariant goal-object information independent of action was mainly localized in the early stage of dorsal pathway in parietal cortex rather than the down-stream areas of the parieto-frontal cortex. These results help us to understand the relationship between action and goal-object representations in the dorsal pathway, and the evolution of interactive representation of goal-objects and actions along the dorsal pathway during goal-directed action observation.
... The AIP has a crucial role in visually guided hand control; for example, in preshaping the hand to grasp an object [220, 239-247]. Neurons in the AIP exhibit visual responses that are dependent on the shape of the object and its position in eye-centred coordinates [241, 248-250]. At the population level, AIP neurons encode not only object shape but also planned grip type [220]. ...
Article
The hand endows us with unparalleled precision and versatility in our interactions with objects, from mundane activities such as grasping to extraordinary ones such as virtuoso pianism. The complex anatomy of the human hand combined with expansive and specialized neuronal control circuits allows a wide range of precise manual behaviours. To support these behaviours, an exquisite sensory apparatus, spanning the modalities of touch and proprioception, conveys detailed and timely information about our interactions with objects and about the objects themselves. The study of manual dexterity provides a unique lens into the sensorimotor mechanisms that endow the nervous system with the ability to flexibly generate complex behaviour.
... Most notably, it has been found that the dorsal stream is not only involved in the perception of object shape and structure (Freud et al. 2015, 2018) but is also involved in primarily perceptual tasks like face and object recognition in humans (Jeong and Xu 2016; Zachariou et al. 2017). Furthermore, there is evidence of dorsal stream activity that occurs prior to the planning and execution of actions in both human (Faillenot et al. 1999; Kourtzi and Kanwisher 2000) and nonhuman primates (Srivastava et al. 2009). One possible explanation for the lack of clear functional distinction between the two streams could lie in the interactions between the dorsal and ventral streams. ...
Article
Full-text available
The two-visual stream hypothesis posits that the dorsal stream is less susceptible than the ventral stream to the effects of illusions and visual priming. While previous studies have separately examined these perceptual manipulations, the present study combined the effects of a visual illusion and priming to examine the possibility of dorsally guided actions being susceptible to the perceptual stimuli due to interactions between the two streams. Thirty-four participants were primed with a ‘long’ or ‘short’ version of the Sander Parallelogram illusion and were asked to either reach out and grasp or manually estimate the length of a rod placed on a version of the illusion that was on some trials the same as the prime (congruent) and on other trials was the inverse (incongruent). Due to the context-focused nature of ventral processing, we predicted that estimations would be more susceptible to the effects of the illusion and priming than grasps. Results showed that while participants’ manual estimations were susceptible to both priming and the illusion, the grasps were only affected by the illusion, not by priming. The influence of the illusion on grip aperture was greater during manual estimations than it was during grasping. These findings support the notion that the functionally distinct dorsal and ventral streams interact under the current experimental paradigm. Outcomes of the study help better understand the nature of stimuli that promote interactions between the dorsal and ventral streams.