Article

Stimulus-Specific Delay Activity in Human Primary Visual Cortex

Authors: John T. Serences, Edward F. Ester, Edward K. Vogel, and Edward Awh

Abstract

Working memory (WM) involves maintaining information in an on-line state. One emerging view is that information in WM is maintained via sensory recruitment, such that information is stored via sustained activity in the sensory areas that encode the to-be-remembered information. Using functional magnetic resonance imaging, we observed that key sensory regions such as primary visual cortex (V1) showed little evidence of sustained increases in mean activation during a WM delay period, though such amplitude increases have typically been used to determine whether a region is involved in on-line maintenance. However, a multivoxel pattern analysis of delay-period activity revealed a sustained pattern of activation in V1 that represented only the intentionally stored feature of a multifeature object. Moreover, the pattern of delay activity was qualitatively similar to that observed during the discrimination of sensory stimuli, suggesting that WM representations in V1 are reasonable "copies" of those evoked during pure sensory processing.
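The multivoxel pattern analysis described in the abstract can be illustrated with a brief, hypothetical sketch: a linear classifier is trained and tested on delay-period voxel patterns from a V1 region of interest, labeled by the remembered feature, using leave-one-run-out cross-validation. The data layout and all variable names below are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of delay-period MVPA decoding (illustrative only).
# Assumed inputs: delay_patterns, an (n_trials, n_voxels) array of
# delay-period BOLD estimates from a V1 ROI; labels, the remembered
# feature on each trial (e.g., a color or orientation bin); runs, the
# scanning run each trial came from (used for cross-validation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
delay_patterns = rng.standard_normal((120, 300))   # placeholder data
labels = rng.integers(0, 2, size=120)              # two remembered features
runs = np.repeat(np.arange(6), 20)                 # six runs of 20 trials

clf = LogisticRegression(max_iter=1000)
# Leave-one-run-out cross-validation: train on 5 runs, test on the held-out run.
acc = cross_val_score(clf, delay_patterns, labels, groups=runs,
                      cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {acc.mean():.2f}")  # chance = 0.50 here
```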

... Another fMRI study showed that specific location information was decodable from motor areas when the required behavioral response was known to observers in advance, indicating that the sensory representation was recoded into an action-oriented format (Henderson et al., 2022). These findings provide an important step forward in our understanding of how sensory regions in the brain are recruited for the short-term maintenance of stimulus information (i.e., the sensory recruitment hypothesis) (Bettencourt and Xu, 2016; Curtis and D'Esposito, 2003; Kamitani and Tong, 2005; Lorenc et al., 2018; Rademaker et al., 2019; Scimeca et al., 2018; Serences et al., 2009; Sreenivasan and D'Esposito, 2019; Xu, 2017; Zhao et al., 2022). However, because these studies mostly focused on the maintenance of spatial information, it remains to be seen whether such recoding can occur for non-spatial stimuli as well. ...
... The involvement of the sensory visual cortex in the maintenance of stimulus-specific information (i.e., the sensory recruitment hypothesis) has been a central issue in theories of visual WM (Bettencourt and Xu, 2016; Curtis and D'Esposito, 2003; Kamitani and Tong, 2005; Lorenc et al., 2018; Rademaker et al., 2019; Scimeca et al., 2018; Serences et al., 2009; Sreenivasan and D'Esposito, 2019; Xu, 2017; Zhao et al., 2022). Some studies found that the patterns of neural activity in early visual areas reflect stimulus-specific information (Kamitani and Tong, 2005; Serences et al., 2009). However, others found that stimulus-specific information is also present in higher cortical regions (Ester et al., 2015) and that irrelevant visual distraction has minimal impact on the maintenance of stimulus-specific information (Bettencourt and Xu, 2016), indicating that the sensory visual cortex may be unnecessary for WM. ...
... This visualization format is commonly attributed to the neural mechanisms of visual working memory (VWM) (5, 6), our brain's apparatus for the active maintenance and manipulation of informative images. Although VWM engages a widely distributed brain network (7–11), the fascinating nature of these near-tangible visualizations of memory and consciousness has steered the focus of neuroscience toward the visual cortex over the past decade (12–18). ...
... Within this field, the role of the primary visual cortex (V1) has emerged as particularly controversial. While certain studies using functional magnetic resonance imaging (fMRI) suggest the existence of neural codes related to VWM content in V1 (7, 14–18), others have proposed divergent views (19–21). Critically, the scarcity of previous electrophysiological evidence in V1 (19, 22–24), especially concerning the core aspect of working memory, namely content rather than just spatial location (12, 13), emphasizes the need for further investigation. ...
... S5) in our supplementary results. Our findings align more closely with the recent "rotational dynamic" hypotheses (43) and contrast with the narrowly defined "sensory recruitment" framework (15,16), which assumes a consistent neuronal firing pattern throughout the processing of VWM contents. ...
Article
The human ability to perceive vivid memories as if they “float” before our eyes, even in the absence of actual visual stimuli, captivates the imagination. To determine the neural substrates underlying visual memories, we investigated the neuronal representation of working memory content in the primary visual cortex of monkeys. Our study revealed that neurons exhibit unique responses to different memory contents, using firing patterns distinct from those observed during the perception of external visual stimuli. Moreover, this neuronal representation evolves with alterations in the recalled content and extends beyond the retinotopic areas typically reserved for processing external visual input. These discoveries shed light on the visual encoding of memories and indicate avenues for understanding the remarkable power of the mind’s eye.
... Following now classic studies demonstrating that fMRI patterns of voxel activity in human early visual cortex can be used to decode the contents of visual working memory (WM; Harrison and Tong, 2009;Serences et al., 2009), decoding WM content from visual cortex has been a workhorse for neuroimaging studies testing aspects of the sensory recruitment hypothesis of WM. This incredibly influential hypothesis posits that visual WM storage utilizes the encoding machinery in the visual cortex, assuming that memory and perception utilize similar mechanisms (Postle, 2006;Curtis and D'Esposito, 2003;D'Esposito and Postle, 2015;Serences, 2016). ...
... Research has produced evidence for and against this hypothesis. On the one hand, WM representations can be decoded from the activity patterns as early as primary visual cortex (V1; Harrison and Tong, 2009;Serences et al., 2009;Riggall and Postle, 2012;Sprague et al., 2014;Rahmati et al., 2018;Curtis and Sprague, 2021). There is even some evidence that classifiers trained on data collected from early visual cortex while participants are simply viewing stimuli (e.g. ...
... We used this epoch of data for both training classifiers and testing decoding success. We first validated our methods by replicating successful orientation decoding in visual and parietal cortex separately for each type of modulator (Figure 2B, Figure 2-figure supplement 2; within; Emrich et al., 2013; Ester et al., 2015; Harrison and Tong, 2009; Kwak and Curtis, 2022; Riggall and Postle, 2012; Sarma et al., 2016; Serences et al., 2009; Yu and Shim, 2017). Turning to the critical test, we asked if a classifier trained on oriented gratings with one type of modulator (e.g. ...
Article
Full-text available
During perception, decoding the orientation of gratings depends on complex interactions between the orientation of the grating, aperture edges, and topographic structure of the visual map. Here, we aimed to test how aperture biases described during perception affect working memory (WM) decoding. For memoranda, we used gratings multiplied by radial and angular modulators to generate orthogonal aperture biases for identical orientations. Therefore, if WM representations are simply maintained sensory representations, they would have similar aperture biases. If they are abstractions of sensory features, they would be unbiased and the modulator would have no effect on orientation decoding. Neural patterns of delay period activity while maintaining the orientation of gratings with one modulator (e.g. radial) were interchangeable with patterns while maintaining gratings with the other modulator (e.g. angular) in visual and parietal cortex, suggesting that WM representations are insensitive to aperture biases during perception. Then, we visualized memory abstractions of stimuli using models of visual field map properties. Regardless of aperture biases, WM representations of both modulated gratings were recoded into a single oriented line. These results provide strong evidence that visual WM representations are abstractions of percepts, immune to perceptual aperture biases, and compel revisions of WM theory.
... Following now classic studies demonstrating that fMRI patterns of voxel activity in human early visual cortex can be used to decode the contents of visual working memory (WM) (Harrison and Tong, 2009;Serences et al., 2009), decoding WM content from visual cortex has been a workhorse for neuroimaging studies testing aspects of the sensory recruitment hypothesis of WM. This incredibly influential hypothesis posits that visual WM storage utilizes the encoding machinery in the visual cortex, assuming that memory and perception utilize similar mechanisms (Curtis and D'Esposito, 2003;D'Esposito and Postle, 2015;Postle, 2006;Serences, 2016). ...
... Research has produced evidence for and against this hypothesis. On the one hand, WM representations can be decoded from activity patterns in areas as early as primary visual cortex (V1) (Harrison and Tong, 2009; Rahmati et al., 2018; Riggall and Postle, 2012; Serences et al., 2009; Sprague et al., 2014). There is even some evidence that classifiers trained on data collected from early visual cortex while participants are simply viewing stimuli (e.g., oriented gratings) can be used to decode the contents of WM (Albers et al., 2013; Harrison and Tong, 2009; Rademaker et al., 2019). ...
... We used this epoch of data for both training classifiers and testing decoding success. We first validated our methods by replicating successful orientation decoding in visual and parietal cortex separately for each type of modulator (Figure 2B & Figure S2; within) (Emrich et al., 2013; Ester et al., 2015; Harrison and Tong, 2009; Kwak and Curtis, 2022; Riggall and Postle, 2012; Sarma et al., 2016; Serences et al., 2009; Yu and Shim, 2017). Turning to the critical test, we asked if a classifier trained on oriented gratings with one type of modulator (e.g., radial) could be used to successfully cross-decode gratings with the other type of modulator (e.g., angular). ...
Preprint
Full-text available
Pioneering studies demonstrating that the contents of visual working memory (WM) can be decoded from the patterns of multivoxel activity in early visual cortex transformed not only how we study WM, but theories of how memories are stored. For instance, the ability to decode the orientation of memorized gratings is hypothesized to depend on the recruitment of the same neural encoding machinery used for perceiving orientations. However, decoding evidence cannot be used to test the so-called sensory recruitment hypothesis without understanding the underlying nature of what is being decoded. Although unknown during WM, during perception decoding the orientation of gratings does not simply depend on the activities of orientation-tuned neurons. Rather, it depends on complex interactions between the orientation of the grating, the aperture edges, and the topographic structure of the visual map. Here, our goals are to (1) test how these aperture biases described during perception may affect WM decoding, and (2) leverage carefully manipulated visual stimulus properties of gratings to test how sensory-like WM codes are. For memoranda, we used gratings multiplied by radial and angular modulators to generate orthogonal aperture biases despite having identical orientations. Therefore, if WM representations are simply maintained sensory representations, they would have similar aperture biases. If they are abstractions of sensory features, they would be unbiased and the modulator would have no effect on orientation decoding. Results indicated that fMRI patterns of delay period activity while maintaining the orientation of a grating with one modulator (e.g., radial) were interchangeable with patterns while maintaining a grating with the other modulator (e.g., angular). We found significant cross-classification in visual and parietal cortex, suggesting that WM representations are insensitive to aperture biases during perception. Then, we visualized memory abstractions of stimuli using a population receptive field model of the visual field maps. Regardless of aperture biases, WM representations of both modulated gratings were recoded into a single oriented line. These results provide strong evidence that visual WM representations are abstractions of percepts, immune to perceptual aperture biases, and compel revisions of WM theory.
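The cross-classification logic described in this abstract (train a classifier on delay-period patterns for gratings with one aperture modulator, test it on the other) can be sketched as follows. This is a schematic under assumed variable names and placeholder data, not the authors' code.

```python
# Sketch of cross-classification across aperture modulators (illustrative).
# Assumed inputs: patterns, an (n_trials, n_voxels) array of delay-period
# activity; orientation, the trial labels; modulator, 'radial' or 'angular'.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
patterns = rng.standard_normal((160, 500))
orientation = rng.integers(0, 2, size=160)         # e.g., two orientation classes
modulator = np.repeat(['radial', 'angular'], 80)

scores = []
for train_mod, test_mod in [('radial', 'angular'), ('angular', 'radial')]:
    train, test = modulator == train_mod, modulator == test_mod
    clf = LinearSVC().fit(patterns[train], orientation[train])
    scores.append(clf.score(patterns[test], orientation[test]))

# If delay-period codes are insensitive to aperture bias, cross-decoding
# accuracy should match within-modulator decoding rather than fall to chance.
print(f"cross-decoding accuracy: {np.mean(scores):.2f}")
```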
... For instance, sensory recruitment models posit that working memory and perception recruit the same low-level sensory brain areas (Chota & Van der Stigchel, 2021; Christophel et al., 2017; Harrison & Tong, 2009; Serences, 2016; Serences et al., 2009) (but see Y. Xu, 2020, 2023). Neuroimaging and pupillometric studies have supported this notion by revealing enhanced responses to visual stimuli matching the content of VWM, regardless of their location (Gayet et al., 2017; Olmos-Solis et al., 2018; Wilschut & Mathôt, 2022) (also see Karabay et al., 2024). ...
... The sensory recruitment account proposes that the perception and maintenance of sensory information recruit the same low-level sensory areas (D'Esposito, 2007; Gayet et al., 2018; Scimeca et al., 2018). Indeed, early visual areas are involved in perception as well as VWM (Harrison & Tong, 2009; Serences et al., 2009), and neural and pupil responses to visual input matching VWM content are enhanced (Gayet et al., 2017; Olmos-Solis et al., 2018; Wilschut & Mathôt, 2022) (also see Karabay et al., 2024). Maintaining material in VWM may pre-activate neuronal populations involved in processing specific visual features, leading to an enhanced neural response to matching input (Gayet et al., 2017). ...
Preprint
Full-text available
Adaptive behavior necessitates the prioritization of the most relevant information in the environment (external) and in memory (internal). Internal prioritization is known to guide the selection of external sensory input, but the reverse may also be possible: Does the environment guide the prioritization of memorized material? Here we addressed whether reappearing sensory input matching visual working memory (VWM) facilitates the prioritization of other non-reappearing memorized items. Participants (total n = 72) memorized three orientations. Crucially, some, but not all, items maintained in VWM were made available again in the environment. These reappearing items never had to be reproduced later. Experiment 1 showed that the reappearance of all but one memory item benefited accuracy and speed to the same extent as a spatial retro cue. This shows that reappearing memory-matching items allow for the dynamic prioritization of another non-reappearing memorized item. What aspects of the reappearing sensory input drive this effect? Experiment 2 demonstrated that prioritization was facilitated most if reappearing items matched VWM content in terms of both location and orientation. Sensory input fully matching VWM is likely processed more efficiently, possibly leading to stronger prioritization of other memory content. We showed the robustness of our findings in Experiment 3. We propose that the link between sensory processing and VWM is bidirectional: internal representations guide the processing of sensory input, which in turn facilitates the prioritization of other VWM content to subserve adaptive behavior. Public significance statement: Visual attention allows for the selective processing of objects in our environment but also the prioritization of contents held in working memory. To date, most studies have investigated the prioritization of working memory content using static displays. However, in everyday life, objects appear, disappear, and reappear from our visual field within a matter of seconds. Here we investigated whether participants could leverage the dynamic nature of the world to guide working memory prioritization. In three experiments we demonstrate that humans capitalize on reappearing objects to prioritize other, non-reappearing maintained objects. Moreover, we found that whenever reappearing objects matched memory fully, prioritization of other material was most effective. Our results provide insights into how working memory prioritization may occur in more natural settings.
... It has long been known that subsequent visual stimuli can alter the perception of a preceding visual target to such an extent that the target cannot be experienced, or that it appears drastically distorted in comparison to seeing the target in isolation [1–8]. Such findings suggest a continuity between perception and (short-term) memory [9–16]: the visual perception of a target depends on influences playing out well after target offset (i.e., at least up to 400 ms), so that an initially labile, malleable, and preconscious visual target representation can be altered drastically by influences that are retrospective relative to the start of target perception and yet occur prior to the conclusion of the target's perception [17,18]. Critically, this long-lasting perceptual integration window also provides the opportunity for spatial attention elicited by a retrospective or post-target cue (in the following labelled a postcue) to influence the perception of a preceding target even hundreds of milliseconds after target offset [5,6,19]. ...
... Across participants, performance seems to depend on the presence of valid post-target cues in a manner inversely proportional to the visibility of Gabor patches (with higher performance in valid compared to invalid trials when the visibility of targets is comparatively low), a phenomenon indicative of contrast gain [37]. Additionally, we also included uncued trials in our analysis by first collapsing the data over the respective SOAs (as uncued trials had no SOA, contrary to valid and invalid trials), and then fitted a model per validity condition. We expected costs in invalid compared to uncued conditions, as well as benefits in valid compared to uncued conditions at low-to-medium orientation levels. ...
Preprint
Full-text available
Past research suggests a continuity between perception and memory, as reflected in influences of the orienting of spatial attention by cues presented after a visual target's offset (post-target cues) on target perception. Conducting two experiments, we tested and confirmed this claim. Our study revealed an elevated reliance on post-target cues for target detection with diminishing target visibility, leading to better performance in validly versus invalidly cued trials, indicative of contrast gain. We demonstrated this post-target cueing impact on target perception without a postcue response prompt, meaning that our results truly reflected a continuity between perception and memory rather than a task-specific impact of having to memorize the target due to a response prompt. We further showed an influence of attention by the post-target cues even when cues were presented away from a clearly visible target, meaning that visual interactions at the target location provided no better explanation of our results. Our results generalize prior research with liminal targets and confirm the view of a perception-memory continuum, such that visual target processing is not shielded against visuospatial orienting of attention elicited by events following offset of the visual target.
... It has long been known that subsequent visual stimuli can alter the perception of a preceding visual target to such an extent that the target cannot be experienced, or that it appears drastically distorted in comparison to seeing the target in isolation [1–8]. Such findings suggest a continuity between perception and (short-term) memory [9–16]: the visual perception of a target depends on influences playing out well after target offset (i.e., at least up to 400 ms), so that an initially labile, malleable, and preconscious visual target representation can be altered drastically by influences that are retrospective relative to the start of target perception and yet occur prior to the conclusion of the target's perception [17,18]. Critically, this long-lasting perceptual integration window also provides the opportunity for spatial attention elicited by a retrospective or post-target cue (in the following labelled a postcue) to influence the perception of a preceding target even hundreds of milliseconds after target offset [5,6,19]. ...
... This is exactly what we found (Figure 6B). Corresponding model parameters are reported in Table A4, Appendix A. Figure 6 caption: Logistic models fitted to mean accuracy rates across participants in Experiment 2, with varying stimulus orientation (from 0° to 90°); (A): one model for each validity condition (blue solid line: valid condition; red dashed line: invalid condition) and SOA (short SOA on the left, long SOA on the right side); (B): uncued condition (grey dot-dashed line) additionally included, aggregated over stimulus-onset asynchronies (SOAs). ...
Preprint
Full-text available
Past research suggests a continuity between perception and memory, as reflected in influences of the orienting of spatial attention by cues presented after a visual target's offset (post-target cues) on target perception. Conducting two experiments, we tested and confirmed this claim. Our study revealed an elevated reliance on post-target cues for target detection with diminishing target visibility, leading to better performance in validly versus invalidly cued trials, indicative of contrast gain. We demonstrated this post-target cueing impact on target perception without a postcue response prompt, meaning that our results truly reflected a continuity between perception and memory rather than a task-specific impact of having to memorize the target due to a response prompt. We further showed an influence of attention by the post-target cues even when cues were presented away from a clearly visible target, meaning that visual interactions at the target location provided no better explanation of our results. Our results generalize prior research with liminal targets and confirm the view of a perception-memory continuum, such that visual target processing is not shielded against visuospatial orienting of attention elicited by events following offset of the visual target.
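The logistic models mentioned in the excerpts above (accuracy as a function of stimulus orientation, fitted separately per validity condition) correspond to a standard psychometric-function fit. The sketch below illustrates one such fit; the parameterization and the accuracy values are made up for illustration and are not the authors' exact model.

```python
# Sketch: fit a logistic psychometric function to accuracy vs. orientation
# for one validity condition (illustrative; not the authors' exact model).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, lower, upper, x0, k):
    """Accuracy rises from a lower to an upper asymptote around x0."""
    return lower + (upper - lower) / (1.0 + np.exp(-k * (x - x0)))

orientation = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)      # degrees
accuracy_valid = np.array([0.52, 0.55, 0.68, 0.81, 0.90, 0.94, 0.96])  # made-up means

params, _ = curve_fit(logistic, orientation, accuracy_valid,
                      p0=[0.5, 0.95, 45.0, 0.1],
                      bounds=([0.0, 0.5, 0.0, 0.0], [1.0, 1.0, 90.0, 1.0]))
print(dict(zip(["lower", "upper", "x0", "k"], np.round(params, 3))))
# Repeating the fit per condition (valid, invalid, uncued) allows comparison
# of thresholds (x0) and asymptotes across cueing conditions.
```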
... In neuroscientific experiments examining the representation of visual WM information in the brain, often only a single stimulus feature needs to be reported after a delay (for example, the orientation of a visual grating) [7–10]. At one extreme, such tasks could be solved by sustaining a concrete visual memory of the stimulus and its visual details. ...
... At one extreme, such tasks could be solved by sustaining a concrete visual memory of the stimulus and its visual details. Various human neuroimaging studies have shown that WM contents can be decoded from early visual cortices [8,10,11], which seems consistent with storage in a sensory format (but see ref. 12). At the other extreme, many WM tasks can also be solved with a high-level abstraction of the task-relevant stimulus parameter only, such as its orientation, speed or colour [12–15]. ...
Article
Full-text available
Stimulus-dependent eye movements have been recognized as a potential confound in decoding visual working memory information from neural signals. Here we combined eye-tracking with representational geometry analyses to uncover the information in miniature gaze patterns while participants (n = 41) were cued to maintain visual object orientations. Although participants were discouraged from breaking fixation by means of real-time feedback, small gaze shifts (<1°) robustly encoded the to-be-maintained stimulus orientation, with evidence for encoding two sequentially presented orientations at the same time. The orientation encoding on stimulus presentation was object-specific, but it changed to a more object-independent format during cued maintenance, particularly when attention had been temporarily withdrawn from the memorandum. Finally, categorical reporting biases increased after unattended storage, with indications of biased gaze geometries already emerging during the maintenance periods before behavioural reporting. These findings disclose a wealth of information in gaze patterns during visuospatial working memory and indicate systematic changes in representational format when memory contents have been unattended.
... Space may further be used as a medium for subsequent attentional selection and prioritization of specific objects within working memory [7–12]. The use of space is further consistent with sensory-recruitment models of working memory [13–17], whereby retinotopically organized visual cortex is also utilized for visual retention in working memory, thus 'recycling' existing visual-spatial coding architecture in the human brain [18]. ...
... These results thus show that, even under such circumstances, space continues to serve as a scaffold for binding mnemonic content in visual working memory to serve later selection and prioritization. This is generally consistent with sensory-recruitment models of visual working memory, which posit that working memory recruits overlapping sensory areas that initially encode the information [13–16, 42], with non-spatial object features, such as color and temporal position, theorized to be bound together via their shared position [3–6, 18, 43–45]. ...
Article
Full-text available
Space and time can each act as scaffolds for the individuation and selection of visual objects in working memory. Here we ask whether there is a trade-off between the use of space and time for visual working memory: whether observers will rely less on space, when memoranda can additionally be individuated through time. We tracked the use of space through directional biases in microsaccades after attention was directed to memory contents that had been encoded simultaneously or sequentially to the left and right of fixation. We found that spatial gaze biases were preserved when participants could (Experiment 1) and even when they had to (Experiment 2) additionally rely on time for object individuation. Thus, space remains a profound organizing medium for working memory even when other organizing sources are available and utilized, with no evidence for an obligatory trade-off between the use of space and time.
... Such globalization has been suggested on the basis of studies showing that information on memoranda can be reconstructed from non-stimulated portions of early visual cortex (Serences et al., 2009). Second, peripheral representations may be fovealized, such that neural sensory tissue that is normally dedicated to the fovea is recruited to aid in, or even replace, the representation of peripherally presented memoranda. ...
... For example, a stimulus-related suppression of alpha power in contralateral electrodes may come with power enhancement in concomitant ipsilateral electrodes (Poch et al., 2017), thus mirroring the information across the midline. This is not to deny that there is substantial other evidence for globalization based on the fMRI technique (Harrison & Tong, 2009; Pratte & Tong, 2014; Serences et al., 2009). To our knowledge, ...
Preprint
Full-text available
Visual working memory is believed to rely on top-down attentional mechanisms that sustain active sensory representations in early visual cortex. However, both bottom-up sensory input and top-down attentional modulations thereof have been shown to be biased towards the fovea, at the expense of the periphery, and initially peripheral percepts may even be assimilated by foveal processing. This raises the question of whether and how visual working memory differs for central and peripheral input. To address this, we conducted a delayed orientation recall task in which an orientation stimulus was presented either at the center of the screen or at 15 d.v.a. eccentricity to the left or right. Response accuracy, electroencephalographic (EEG) activity and gaze position were recorded from 30 participants. Accuracy was slightly but significantly higher for foveal versus peripheral memories. Orientation decoding analyses of the EEG signals revealed a clear dissociation between early sensory and later maintenance signals. While sensory signals were clearly decodable for the central location, they were not for peripheral locations. In contrast, maintenance signals were decodable to an equal extent for both foveal and peripheral memories, suggesting comparable top-down components regardless of eccentricity. Moreover, while memory representations were initially spatially specific and reflected in voltage fluctuations, later in the maintenance period they generalized across locations, as evident in alpha oscillations, thus revealing a dynamic transformation within memory. These transformed representations remained accessible through sensory impulse-driven perturbations, unveiling the underlying memory state. The eccentricity-driven dissociation between disparate sensory and common maintenance representations indicates that storage activity patterns as measured by EEG reflect signals beyond primary visual cortex.
... The most surprising finding in working memory research, which has now been replicated numerous times, is that the multivariate patterns of neural activity in primary visual cortex, V1, can be used to decode the contents of visual working memory 3,15 . Moreover, patterns in V1 are often the best for decoding the contents of working memory. ...
Preprint
The main function of working memory is to extend the duration for which information remains available for use by a host of high-level cognitive systems 1. Decades of electrophysiological studies provide compelling evidence that persistent neural activity in prefrontal cortex is a key mechanism for working memory 2. Surprisingly, sophisticated human neuroimaging studies have convincingly demonstrated that the contents of working memory can be decoded from primary visual cortex (V1) 3,4. The necessity of this mnemonic information remains unknown and contentious. Here, we provide causal evidence that transcranial magnetic stimulation of human V1 disrupted the fidelity of visual working memory. Magnetic stimulation during the retention interval of a visuospatial working memory task caused a systematic increase in memory errors. Errors increased only for targets that were remembered in the portion of the visual field disrupted by stimulation, and were invariant to when in the delay stimulation was applied. Moreover, concurrently measured electroencephalography confirmed that stimulation disrupted not only memory behavior, but also a standard neurophysiological signature of working memory. These results change the question from whether V1 is necessary for working memory to what mechanisms V1 uses to support memory. Moreover, they point to models in which the mechanisms supporting working memory are distributed across brain regions, including sensory areas that we show here are critical for memory storage.
... Moreover, two studies applied pattern classification techniques to BOLD activity from the visual cortex during the delay period of delayed discrimination tasks. They were able to predict, on a trial-by-trial basis, which orientation (Harrison and Tong, 2009) and which color (Serences et al., 2009) were held in VWM. These results supported the view that sensory cortical areas contribute to VWM retention of fine-tuned feature information (Pasternak and Greenlee, 2005). ...
Article
Full-text available
Brain dynamics of memory formation were explored during the encoding and retention intervals of a visual working memory task. EEG data were acquired while subjects were exposed to grayscale images of widely known object categories (e.g., “luggage,” “chair,” and “car”). Following a short delay, two probes were shown to test memory accuracy. Oscillatory portraits of successful and erroneous memories were contrasted. Where significant differences were identified, oscillatory traits of false memories (i.e., when a novel probe item of the same category is recognized as familiar) were compared with those of successful and erroneous memories. Spectral analysis revealed that theta (6–8 Hz) power over occipital channels during the encoding of successful and false memories was smaller than for other types of memory errors. The reduced theta power indicates successful encoding and reflects efficient activation of the underlying neural assemblies. Prominent alpha-beta (10–26 Hz) activity over right parieto-occipital channels was identified during the retention interval. It was found to be larger for false memories and errors than for correctly answered trials. High levels of alpha-beta oscillatory activity for errors correspond to poor maintenance, leading to inefficient allocation of WM resources. In the case of false memories, this would imply the cognitive effort necessary to manage the extra semantic and perceptual load induced by the encoded stimuli.
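As a rough illustration of the kind of spectral analysis reported here (theta and alpha-beta power over posterior channels), the sketch below computes band power from a short EEG segment using Welch's method. The sampling rate, channel count, and band edges are assumptions, not the study's exact parameters.

```python
# Sketch: band power (theta 6-8 Hz, alpha-beta 10-26 Hz) from an EEG epoch.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
eeg = rng.standard_normal((4, int(2 * fs)))  # 4 posterior channels, 2-s epoch

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))   # PSD per channel

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD between lo and hi Hz (per channel)."""
    sel = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[:, sel], freqs[sel], axis=1)

theta = band_power(freqs, psd, 6, 8).mean()
alpha_beta = band_power(freqs, psd, 10, 26).mean()
print(f"theta: {theta:.3f}, alpha-beta: {alpha_beta:.3f}")
```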
... V1, V2 and V3) represent the content of VWM (e.g. [3,4]). However, we lack conclusive evidence supporting the causal role of sensory areas in VWM, not least because previous transcranial magnetic stimulation (TMS) studies provided incongruent results (e.g. ...
Article
Full-text available
The role of the early visual cortex in visual working memory (VWM) is a matter of current debate. Neuroimaging studies have consistently shown that visual areas encode the content of working memory, while transcranial magnetic stimulation (TMS) studies have presented incongruent results. Thus, we lack conclusive evidence supporting the causal role of early visual areas in VWM. In a recent registered report, Phylactou et al. (Phylactou P, Shimi A, Konstantinou N 2023 R. Soc. Open Sci. 10, 230321 (doi:10.1098/rsos.230321)) sought to tackle this controversy via two well-powered TMS experiments, designed to correct possible methodological issues of previous attempts identified in a preceding systematic review and meta-analysis (Phylactou P, Traikapi A, Papadatou-Pastou M, Konstantinou N 2022 Psychon. Bull. Rev. 29, 1594–1624 (doi:10.3758/s13423-022-02107-y)). However, a key part of their critique and experimental design was based on a misunderstanding of the visual system. They disregarded two important anatomical facts, namely that early visual areas of each hemisphere represent the contralateral visual hemifield, and that each hemisphere receives equally strong input from each eye—both leading to confounded conditions and artefactual effects in their studies. Here, we explain the correct anatomy, describe why their experiments failed to address current issues in the literature and perform a thorough reanalysis of their TMS data revealing important null results. We conclude that the causal role of the visual cortex in VWM remains uncertain.
... Because these abstract items have low meaningfulness, these tasks help us to estimate the capacity of working memory in the absence of support from long-term memory associations. Over the past several decades, careful work dissociating working and long-term memory has been important for our understanding of these memory systems and their neural correlates (Baddeley & Warrington, 1970; Christophel et al., 2017; Jeneson & Squire, 2011; Luck & Vogel, 1997; Milner & Penfield, 1955; Scoville & Milner, 1957; Serences et al., 2009). However, by focusing primarily on each memory system in isolation, we may miss important insights about how information is flexibly shuttled between working and long-term memory in everyday life. ...
Article
Full-text available
Working- and long-term memory are often studied in isolation. To better understand the specific limitations of working memory, effort is made to reduce the potential influence of long-term memory on performance in working memory tasks (e.g., asking participants to remember artificial, abstract items rather than familiar real-world objects). However, in everyday life we use working- and long-term memory in tandem. Here, our goal was to characterize how long-term memory can be recruited to circumvent capacity limits in a typical visual working memory task (i.e., remembering colored squares). Prior work has shown that incidental repetitions of working memory arrays often do not improve visual working memory performance – even after dozens of incidental repetitions, working memory performance often shows no improvement for repeated arrays. Here, we used a whole-report working memory task with explicit rather than incidental repetitions of arrays. In contrast to prior work with incidental repetitions, in two behavioral experiments we found that explicit repetitions of arrays yielded robust improvement to working memory performance, even after a single repetition. Participants performed above chance at recognizing repeated arrays in a later long-term memory test, consistent with the idea that long-term memory was used to rapidly improve performance across array repetitions. Finally, we analyzed inter-item response times and we found a response time signature of chunk formation that only emerged after the array was repeated (inter-response time slowing after two to three items); thus, inter-item response times may be useful for examining the coordinated interaction of visual working and long-term memory in future work.
... In other words, imagery does not appear to operate on the same representation as the perceptual memory of the visual cue, as the former leads to significantly lower accuracy. This finding suggests that perceptual memory and imagery may not fully share the same neuronal representation, challenging the current view of a close functional and neuroanatomical linkage between memory and imagery (Serences et al., 2009; Harrison & Tong, 2009; Albers et al., 2013). ...
Preprint
Recent research indicates there is overlap in the neural resources used during imagery and visual short-term memory. But do visual short-term memory and visual imagery operate on similar representations during recall? Here we investigated this question by asking participants to perform a delayed match-to-sample task using the contrast of visual gratings as cues. In the Imagery condition, participants were asked to form an accurate mental image of the visual cue and, at the end of the trial, to perform a matching task on the contrast of the mental image. In the Memory condition, participants were not required to perform visual imagery but were merely instructed to perform the delayed contrast-matching task. In Experiment 1, participants were told at the beginning of each block whether to engage in memory or imagery. The results showed that for the relevant contrast feature, matching judgments were more accurate in the Memory than in the Imagery condition. Thus imagery did not maintain an accurate representation of the encoded image, even when the visual features could still be maintained in visual short-term memory. In Experiment 2, participants were required to engage in memory and imagery simultaneously, and were told which to base their judgment on after each trial. The key finding was that the superior accuracy for memory over imagery remained, indicating that the contents of VSTM and imagery are based on distinct representations.
... They then applied this classifier to fMRI data from a WM task in which participants were shown two distinct orientation gratings on each trial and then cued to hold one of these gratings in memory across an 11-s delay period, after which they judged whether a probe grating matched the one held in memory. The fMRI analyses showed that even though BOLD signal levels in the visual cortex dropped dramatically after stimulus encoding, the classifier accurately decoded the orientation of the grating held in memory based on delay period activity patterns in visual areas V1 to V4. Serences et al. (2009) conducted a similar experiment, except that they used orientation gratings on colored backgrounds, and participants were cued to maintain either the orientation or the color. Their MVPA analysis showed that the maintenance period only contained diagnostic information about the relevant dimension, and most robustly in area V1. ...
Chapter
Full-text available
Since its introduction 25 years ago, functional magnetic resonance imaging (fMRI) has provided researchers with a powerful tool to characterize the brain mechanisms underlying many facets of human cognition. The goal of this chapter is to highlight the ways in which fMRI methods can be, and have been, harnessed to deepen the understanding of human memory. We acknowledge that fMRI, which measures local changes in blood oxygenation levels as induced by fluctuations in neural activity, is but one of many functional neuroimaging techniques available to cognitive neuroscientists. Complementary tools such as positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG) have all been proven valuable in the quest to elucidate the neural correlates of memory formation, maintenance, and retrieval processes. However, in an effort to provide sufficient depth of coverage, we have chosen to focus exclusively on fMRI, which is currently the most widely used functional neuroimaging method. Our intention is to help readers with limited neuroimaging experience appreciate the important experimental design elements that one must consider when developing an fMRI study of memory, as well as the range of data analysis approaches that one can employ to gain insights into the contributions of individual brain regions and the functional interactions between regions. Please note that this area of research is replete with acronyms; we therefore list these acronyms in Table 22.1. The first major consideration when designing an fMRI study is how to structure the timing of task events to facilitate the measurement of brain activity associated with different cognitive processes of interest. We therefore begin by reviewing three commonly used experimental strategies for stimulus presentation: blocked, event-related, and mixed designs (Figure 22.1). Blocked designs examine the sustained blood-oxygen-level-dependent (BOLD) response across many successive trials of a given task, enabling between-task comparisons (e.g., encoding versus retrieval), whereas event-related designs examine the transient BOLD responses evoked during each trial, enabling comparisons between trial types (e.g., trials associated with remembered versus forgotten stimuli). Mixed designs incorporate a combination of both design characteristics, blocking trials of a given condition together for the examination of temporally sustained effects, while also allowing for analysis of trial-specific effects within individual task blocks.
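The event-related design logic summarized in this chapter excerpt (estimating transient BOLD responses per trial type) is commonly implemented by convolving trial onsets with a canonical hemodynamic response function and fitting the result in a general linear model. The sketch below is a generic illustration with assumed timings, not code from the chapter.

```python
# Sketch: build an event-related fMRI design regressor by convolving
# trial onsets with a canonical double-gamma HRF (illustrative).
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200                         # assumed TR (s) and run length
onsets = np.array([10, 40, 75, 110, 150])      # assumed trial onsets (s)

t = np.arange(0, 32, tr)                       # HRF sampled at the TR
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0 # peak ~5 s, undershoot ~15 s
hrf /= hrf.max()

stick = np.zeros(n_scans)                      # stick function of trial onsets
stick[(onsets / tr).astype(int)] = 1.0

regressor = np.convolve(stick, hrf)[:n_scans]  # one column of the design matrix
# A GLM then fits this regressor (plus nuisance terms) to each voxel's time
# series; the fitted amplitude estimates the trial-evoked BOLD response.
```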
... Visual working memory (VWM) allows for a brief maintenance of visual stimuli that are no longer present within the environment [1–3]. Previous studies have revealed that the contents of VWM are present throughout multiple visual areas, starting from V1 [4–12]. These findings raised the question of how areas that are primarily involved in visual perception can also maintain VWM information without interference between the two contents. ...
Preprint
Full-text available
Recent studies have provided evidence for the concurrent encoding of sensory percepts and visual working memory contents (VWM) across visual areas; however, it has remained unclear how these two types of representations are concurrently present. Here, we reanalyzed an open-access fMRI dataset where participants memorized a sensory stimulus while simultaneously being presented with sensory distractors. First, we found that the VWM code in several visual regions did not generalize well between different time points, suggesting a dynamic code. A more detailed analysis revealed that this was due to shifts in coding spaces across time. Second, we collapsed neural signals across time to assess the degree of interference between VWM contents and sensory distractors, specifically by testing the alignment of their encoding spaces. We find that VWM and feature-matching sensory distractors are encoded in separable coding spaces. Together, these results indicate a role of dynamic coding and temporally stable coding spaces in helping multiplex perception and VWM within visual areas.
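The temporal-generalization analysis implied in this abstract (testing whether a working memory code learned at one time point transfers to another) can be sketched as follows; the array shapes, trial counts, and variable names are assumptions for illustration.

```python
# Sketch: cross-temporal generalization of a WM classifier (illustrative).
# Assumed input: data with shape (n_trials, n_timepoints, n_voxels) and a
# remembered-feature label per trial. A dynamic code shows high accuracy on
# the diagonal (train time == test time) but poor off-diagonal transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_time, n_vox = 80, 10, 200
data = rng.standard_normal((n_trials, n_time, n_vox))
labels = rng.integers(0, 2, size=n_trials)
train = np.arange(n_trials) < n_trials // 2     # simple split for illustration
test = ~train

gen_matrix = np.zeros((n_time, n_time))
for t_train in range(n_time):
    clf = LogisticRegression(max_iter=1000).fit(data[train, t_train], labels[train])
    for t_test in range(n_time):
        gen_matrix[t_train, t_test] = clf.score(data[test, t_test], labels[test])
# gen_matrix[i, j]: accuracy when training at time i and testing at time j.
```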
... While it is clearly established that occipital cortex contains visual maps (Inouye, 1909), recent evidence supports the provocative idea that these visual maps also play important roles in both working and long-term memory. Surprisingly, the contents of visual working memory (Harrison and Tong, 2009;Serences et al., 2009;Curtis and Sprague, 2021) and the contents retrieved from long-term memory (Bosch et al., 2014;Naselaris et al., 2015;Vo et al., 2022) can be decoded from the patterns of voxel activity in human primary visual cortex (V1). Such results provide strong support for influential theories of how the encoding mechanisms used for perception might also be used to store working memory representations (D'Esposito and Postle, 2015;Serences, 2016) and similarly be used to recall the visual properties of retrieved long-term memory (Tulving and Thomson, 1973;Schacter et al., 1998;Rugg et al., 2008). ...
Preprint
Full-text available
Perception, working memory, and long-term memory each evoke neural responses in visual cortex, suggesting that memory uses encoding mechanisms shared with perception. While previous research has largely focused on how perception and memory are similar, we hypothesized that responses in visual cortex would differ depending on the origins of the inputs. Using fMRI, we quantified spatial tuning in visual cortex while participants (both sexes) viewed, maintained in working memory, or retrieved from long-term memory a peripheral target. In each of these conditions, BOLD responses were spatially tuned and were aligned with the target's polar angle in all measured visual field maps including V1. As expected given the increasing sizes of receptive fields, polar angle tuning during perception increased in width systematically up the visual hierarchy from V1 to V2, V3, hV4, and beyond. In stark contrast, the widths of tuned responses were broad across the visual hierarchy during working memory and long-term memory, matched to the widths in perception in later visual field maps but much broader in V1. This pattern is consistent with the idea that mnemonic responses in V1 stem from top-down sources. Moreover, these tuned responses, when biased (clockwise or counterclockwise of the target), predicted matched biases in memory, suggesting that the readout of maintained and reinstated mnemonic responses influences memory-guided behavior. We conclude that feedback constrains spatial tuning during memory, where earlier visual maps inherit broader tuning from later maps, thereby impacting the precision of memory. Significance statement: We demonstrate that visual information that is seen, maintained in working memory, and retrieved from long-term memory evokes responses that differ in spatial extent within visual cortex. These differences depend on the origins of the visual inputs. Feedforward visual inputs during perception evoke tuned responses in early visual areas that increase in size up the visual hierarchy. Feedback inputs associated with memory originate from later visual areas with larger receptive fields, resulting in uniformly wide spatial tuning even in primary visual cortex. That trial-to-trial difficulty is reflected in the accuracy and precision of these representations suggests that visual cortex is flexibly used for processing visuospatial information, regardless of where that information originates.
... On the neurophysiological level, similar sensory cortices appear to be activated during an experience and during the retrieval of that experience (for reviews, see Buckner & Wheeler, 2001; Danker & Anderson, 2010). In the same way, mental imagery is functionally equivalent to sensory perception (Albers et al., 2013; Dijkstra et al., 2020; Harrison & Tong, 2009; Kosslyn et al., 1978; Kosslyn & Thompson, 2003; Pearson, 2019; Serences et al., 2009; Shepard & Metzler, 1971), and motor imagery (MI) to real-life action (Cummings & Williams, 2012; Lorey et al., 2009; Sirigu & Duhamel, 2001). Experiencing or simulating therefore seems to be part of the same neural process, activating neuronal patterns that are partly identical. ...
Article
Full-text available
Mounting evidence supports the efficacy of mental imagery for verbal information retention. Motor imagery, imagining oneself interacting physically with the object to be learned, emerges as an optimal form compared to less physically engaging imagery. Yet, when engaging in mental imagery, it occurs within a specific context that may affect imagined actions and consequently impact the mnemonic benefits of mental imagery. In a first study, participants were given instructions for incidental learning: mental rehearsal, visual imagery, motor imagery or situated motor imagery. The latter, which involved imagining physical interaction with an item within a coherent situation, produced the highest proportion of correct recalls. This highlights memory’s role in supporting situated actions and offers the possibility for further developing the mnemonic potential of embodied mental imagery. Furthermore, item-level analysis showed that individuals who engaged in situated motor imagery remembered words primarily due to the sensorimotor characteristics of the words’ referent. A second study investigating the role of inter-item distinctiveness in this effect failed to determine the extent to which the situational and motor elements need to be distinctive in order to be considered useful retrieval cues and produce an optimal memory performance.
... Visual object imagery is related to several brain functions, such as working memory [28–31], shape-specific processing in the visual cortex [32], imagined and perceptual scene-specific brain activity [33], mental imagery during dreaming [34], visual search [35], and the relationship between mental imagery and emotions [36]. Visual perception and mental imagery activate similar brain patterns [37–42]. ...
... There, given the proximity of the sources to the midline, we selected the source with the highest F-statistic bilaterally. Source coordinates are shown in Figure 1B (Li et al., 2022), while visual nodes fall in the occipital pole, from where previous studies decoded memory contents during the delay period (Serences et al., 2009). Timecourses of selected sources of interest were reconstructed using a linearly constrained minimum variance (LCMV) beamformer (the "lcmv" method in FieldTrip) (Van Veen et al., 1997) with a fixed orientation projected along the first PCA axis. ...
Article
Full-text available
Visual working memory (WM) engages several nodes of a large-scale network that includes frontal, parietal, and visual regions; however, little is understood about how these regions interact to support WM behavior. In particular, it is unclear whether network dynamics during WM maintenance primarily represent feedforward or feedback connections. This question has important implications for current debates about the relative roles of frontoparietal and visual regions in WM maintenance. In the current study, we investigated the network activity supporting WM using MEG data acquired while healthy subjects performed a multi-item delayed estimation WM task. We used computational modeling of behavior to discriminate correct responses (high accuracy trials) from two different types of incorrect responses (low accuracy and swap trials), and dynamic causal modeling of MEG data to measure effective connectivity. We observed behaviorally dependent changes in effective connectivity in a brain network comprising frontoparietal and early visual areas. In comparison with high accuracy trials, frontoparietal and frontooccipital networks showed disrupted signals depending on type of behavioral error. Low accuracy trials showed disrupted feedback signals during early portions of WM maintenance and disrupted feedforward signals during later portions of maintenance delay, while swap errors showed disrupted feedback signals during the whole delay period. These results support a distributed model of WM that emphasizes the role of visual regions in WM storage and where changes in large scale network configurations can have important consequences for memory-guided behavior.
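The source reconstruction mentioned in the excerpt above used an LCMV beamformer (the FieldTrip "lcmv" method). As a self-contained illustration of the underlying computation (Van Veen et al., 1997), the sketch below builds an LCMV spatial filter in NumPy for a single source; the leadfield and sensor data are synthetic placeholders, and the fixed-orientation-via-PCA step from the excerpt is simplified to a one-column leadfield.

```python
# Sketch: the core LCMV beamformer computation, written out in NumPy.
# The leadfield and sensor data are synthetic placeholders, not MEG data.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_times = 50, 1000
X = rng.standard_normal((n_sensors, n_times))     # sensor time series
L = rng.standard_normal((n_sensors, 1))           # leadfield for one source
                                                  # (fixed orientation)

C = np.cov(X)                                     # sensor covariance
# Diagonal (Tikhonov-style) regularization before inversion.
C_inv = np.linalg.pinv(C + 0.05 * np.trace(C) / n_sensors * np.eye(n_sensors))

# LCMV spatial filter: minimize output variance subject to unit gain at the
# source location, w = (L' C^-1 L)^-1 L' C^-1.
w = np.linalg.solve(L.T @ C_inv @ L, L.T @ C_inv)

source_tc = w @ X                                 # reconstructed source time course
print(source_tc.shape)                            # (1, n_times)
```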
... But this does not mean that the electrophysiological activity during the extraction stage is not related to working memory performance. According to the sensory recruitment model of working memory (Awh and Jonides 2001; D'Esposito and Postle 2015; Serences et al. 2009), information is stored in working memory via sustained activity in the same cortical areas that are responsible for the perceptual processing of sensory signals. On the other hand, the relative contributions of prefrontal and sensory areas to working memory have attracted great attention, suggesting the existence of prefrontal interactions with sensory areas during working memory (Christophel et al. 2017; Kuo et al. 2011; Merrikhi et al. 2017). ...
Article
Full-text available
Working memory is a complex cognitive system that temporarily maintains purpose-relevant information during human cognitive performance. Working memory performance has also been found to be sensitive to high-altitude exposure. This study used a multilevel change detection task combined with electroencephalogram (EEG) data to explore the mechanism of working memory change following high-altitude exposure. Performance on the change detection task with five memory load levels was measured in a Han population living in high-altitude areas and compared with a sea-level population, using event-related potential analysis and task-related connectivity network analysis. Topological analysis of the brain functional network showed that the normalized modularity of the high-altitude group was higher in the memory maintenance phase. Event-related potential analysis showed that the peak latencies of the P1 and N1 components in the high-altitude group were significantly shorter over the occipital region, which represents a greater attentional bias in early visual processing. Under high memory loads, the high-altitude group had a larger negative peak in N2 amplitude compared to the low-altitude group, which may imply more conscious processing in visual working memory. These results revealed that the change in visual working memory following high-altitude exposure might derive from attentional bias and more conscious processing in the early stages of visual input, accompanied by increased modularity of the brain functional network. This may imply that the attentional bias in the early processing stages has been influenced by the increased modularity of the functional brain networks induced by high-altitude exposure.
... There is also evidence for a recruitment of primary sensory areas like V1 in visual WM processes (7–15). For example, information held in visual WM can be decoded from V1 activity (16–26); the volume (27) and activity (22, 28, 29) of V1 are positively correlated with behavioral performance in WM tasks. Early evidence suggests that WM is mediated by persistent firing during the delay period (1–3, 30–33). ...
Article
Full-text available
In order to investigate the involvement of the primary visual cortex (V1) in working memory (WM), parallel, multisite recordings of multi-unit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of WM contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced which agrees with concepts of predictive coding.
... Perhaps the most striking aspect of the present results was the failure to find strong evidence for the active delay-period representation of stimulus orientation in early visual cortex: It was absent during Delay 1.1, and the significant IEM reconstructions observed during and immediately following the presentation of Cue 1 were likely due, at least in part, to the fact that Cue 1 visually re-presented these two orientations. Initially, this might seem to be at odds with one of the most replicated findings in visual WM research, one that dates back to some of the earliest applications of multivariate decoding methods to fMRI data (Harrison and Tong, 2009;Serences et al., 2009). But this result, together with the robust and priority-sensitive representation of stimulus location, was predicted by the functional model. ...
Article
Content-to-context binding is crucial for working memory performance. Using a dual-serial retrocueing (DSR) task on oriented gratings, Yu et al. (2020) found that the content (orientation) of both prioritized and unprioritized memory items (PMI; UMI) was represented simultaneously in visual cortex, while their context (location) was represented in the intraparietal sulcus (IPS), with a priority-based remapping of the representation of the content and context of the UMI in each region, respectively. This registered report acquired fMRI data from 24 healthy adults while they performed a DSR task with location as the to-be-reported content and orientation as the task-relevant context. We contrasted three accounts: domain-dependent, in which the engagement of visual and parietal regions depends on the feature domain (orientation vs. location); functional, in which the engagement of these regions depends on their function (content vs. context); and hybrid, a combination of the domain-dependent account with the additional stipulation that IPS encodes context regardless of domain. Delay-period activity in early visual cortex conformed most closely with functional predictions: robust priority-sensitive representation of stimulus location (content), but no evidence for the active representation of stimulus orientation (context). Delay-period activity in IPS, in contrast, conformed most closely to predictions of the hybrid account: active representation of content (location) and of prioritized context (orientation). Exploratory analyses further supported the hybrid account of IPS, revealing univariate sensitivity to variation in both content and context load, the latter in a manner that predicted individual differences in behavior. The representation of visual information in working memory is highly dependent on behavioral context.
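The IEM reconstructions referenced in the citing passage above come from inverted encoding models, in which delay-period voxel patterns are mapped onto a small set of feature-selective channels. The sketch below illustrates the general technique only; the array names, basis functions, and least-squares estimator are illustrative assumptions, not the pipeline used in this registered report.

```python
import numpy as np

n_chans = 8
centers = np.arange(0, 180, 180 / n_chans)             # channel centers in degrees

def channel_responses(orientations):
    """Idealized tuning of each channel to an array of stimulus orientations."""
    diff = np.deg2rad(orientations[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * diff), 0) ** 7         # half-rectified, sharpened cosine

def fit_iem(train_patterns, train_orients):
    C = channel_responses(train_orients)                # trials x channels
    W, *_ = np.linalg.lstsq(C, train_patterns, rcond=None)
    return W                                            # channels x voxels

def reconstruct(W, test_patterns):
    C_hat, *_ = np.linalg.lstsq(W.T, test_patterns.T, rcond=None)
    return C_hat.T                                      # trials x channels

# Example with simulated data (80 trials, 60 voxels); a peaked, stimulus-centered
# reconstruction is taken as evidence for an active feature representation.
rng = np.random.default_rng(0)
orients = rng.uniform(0, 180, 80)
true_W = rng.normal(size=(n_chans, 60))
patterns = channel_responses(orients) @ true_W + rng.normal(0, 0.5, (80, 60))
recon = reconstruct(fit_iem(patterns, orients), patterns)
```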
... The activity approach has been informative but is limited in two ways. First, univariate BOLD activity has been shown to be a limited description of the signals present in the brain (Serences et al., 2009). Second, although results are often examined at the network level, activation approaches cannot always speak to the functional configuration of networks. ...
Article
Although we must prioritize the processing of task-relevant information to navigate life, our ability to do so fluctuates across time. Previous work has identified fMRI functional connectivity (FC) networks that predict an individual's ability to sustain attention and vary with attentional state from 1 min to the next. However, traditional dynamic FC approaches typically lack the temporal precision to capture moment-to-moment network fluctuations. Recently, researchers have “unfurled” traditional FC matrices into “edge cofluctuation time series,” which measure timepoint-by-timepoint cofluctuations between regions. Here we apply event-based and parametric fMRI analyses to edge time series to capture moment-to-moment fluctuations in networks related to attention. In two independent fMRI datasets examining young adults of both sexes in which participants performed a sustained attention task, we identified a reliable set of edges that rapidly deflects in response to rare task events. Another set of edges varies with continuous fluctuations in attention and overlaps with a previously defined set of edges associated with individual differences in sustained attention. Demonstrating that edge-based analyses are not simply redundant with traditional regions-of-interest–based approaches, up to one-third of reliably deflected edges were not predicted from univariate activity patterns alone. These results reveal the large potential in combining traditional fMRI analyses with edge time series to identify rapid reconfigurations in networks across the brain.
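The computational core of the edge time series approach described above is simple: z-score each region's time series and take the element-wise product for every pair of regions, so that averaging an edge over time recovers the conventional Pearson FC value for that region pair. A minimal sketch, assuming a hypothetical parcellated BOLD array:

```python
import numpy as np
from scipy.stats import zscore

def edge_time_series(region_ts):
    """Unfurl a static FC matrix into timepoint-by-timepoint edge cofluctuations.

    region_ts : array of shape (timepoints, regions), hypothetical parcellated BOLD.
    Returns an array of shape (timepoints, n_edges); the temporal mean of each edge
    equals the Pearson correlation between that pair of regions.
    """
    z = zscore(region_ts, axis=0)              # z-score each region's time series
    i, j = np.triu_indices(z.shape[1], k=1)    # unique region pairs (edges)
    return z[:, i] * z[:, j]                   # element-wise product per timepoint
```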
... It has long been known that subsequent visual stimuli can alter the perception of a preceding visual target to such an extent that the target cannot be experienced or appears drastically distorted in comparison to seeing the target in isolation [1][2][3][4][5][6][7][8]. Such findings suggest a continuity between perception and (short-term) memory [9][10][11][12][13][14][15][16]: the visual perception of a target depends on influences playing out well after target offset (i.e., at least up to 400 ms), so that an initially labile, malleable, and preconscious target representation can be altered drastically by events that occur after the start of target perception but before that perception is concluded [17,18]. Critically, this long-lasting perceptual integration window also provides the opportunity for spatial attention elicited by a retrospective or post-target cue (in the following labeled a postcue) to influence the perception of a preceding target even hundreds of milliseconds after target offset [5,6,19]. ...
Article
Full-text available
Past research suggests a continuity between perception and memory, as reflected in influences on target perception of spatial attentional orienting elicited by cues presented after visual target offset (post-target cues). Conducting two experiments, we tested and confirmed this claim. Our study revealed an elevated reliance on post-target cues for target detection with diminishing target visibility, leading to better performance in validly versus invalidly cued trials, indicative of contrast gain. We demonstrated this post-target cueing impact on target perception without a postcue response prompt, meaning that our results truly reflected a continuity between perception and memory rather than a task-specific impact of having to memorize the target due to a response prompt. Whereas previous studies using liminal targets found that valid cues improved accuracy relative to invalid cues, in Experiment 1 we further showed that post-target cues presented away from a clearly visible target influenced participants' response times. This suggests that visual interactions at the target location provided no better explanation of post-target cueing effects. Our results generalize prior research with liminal targets and confirm the view of a perception-memory continuum, such that visual target processing is not shielded against visuospatial orienting of attention elicited by events following the offset of the visual target.
... For single visual items, studies have found correlations between decodable information in visual cortex and recall performance across participants and trials [11,12,13]. The neuronal selectivity in visual cortex might enable the maintenance of highly detailed, sensory-like, representations of memorized content [14,11,15]. ...
Preprint
Full-text available
While distributed cortical areas represent working memory contents, their necessity for memory maintenance has been questioned. To understand whether these regions serve separable functional roles when multiple items are maintained, we examined the effect of visual working memory load on neural information across cortical regions. We show that increasing visual load decreased behavioural recall performance and item-specific mnemonic information in visual cortex but not in anterior regions. Both items, irrespective of their serial position, were represented in visual cortex, whereas sPCS maintained only the most recent item. Our results provide evidence for distinct functional roles of visual cortices, where single items are stored with high fidelity, and anterior cortices, where multiple items are represented using different cortical patterns.
... There is a large corpus of literature focusing on where in the brain WM contents are stored during the maintenance phase (e.g., [52]; for reviews, see, [53,54]). Most of these studies demonstrated persistent and distributed representation of WM contents within sensory regions [55][56][57]. However, some of these also found the representation of the maintained information within the attentional frontoparietal network, such as colour patterns [58,59], stimulus positions [56,60], and motion features [61]. ...
Article
Full-text available
The frontoparietal attention network plays a pivotal role during working memory (WM) maintenance, especially under high-load conditions. Nevertheless, there is ongoing debate regarding whether this network relies on supramodal or modality-specific neural signatures. In this study, we used multi-voxel pattern analysis (MVPA) to evaluate the neural representation of visual versus auditory information during WM maintenance. During fMRI scanning, participants maintained small or large spatial configurations (low- or high-load trials) of either colour shades or sound pitches in WM for later retrieval. Participants were less accurate in retrieving high- vs. low-load trials, demonstrating an effective manipulation of WM load, irrespective of the sensory modality. The frontoparietal regions involved in maintaining high- vs. low-load spatial maps in either sensory modality were highlighted using a conjunction analysis. Widespread activity was found across the dorsal frontoparietal network, peaking on the frontal eye fields and the superior parietal lobule, bilaterally. Within these regions, MVPAs were performed to quantify the pattern of distinctness of visual vs. auditory neural codes during WM maintenance. These analyses failed to reveal distinguishable patterns in the dorsal frontoparietal regions, thus providing support for a common, supramodal neural code associated with the retention of either visual or auditory spatial configurations.
... Our results provide further evidence to the idea that visual memoranda are stored in a spatially organized format (van Ede et al., 2019;Schneegans & Bays, 2017). Furthermore, our findings are in line with the sensory recruitment hypothesis claiming that visual sensory areas, which are largely retinotopically organized, are involved in the maintenance of memoranda (Rademaker, Chunharas, & Serences, 2019;Ester, Rademaker, & Sprague, 2016;Harrison & Tong, 2009;Serences, Ester, Vogel, & Awh, 2009). The spatial organization of the visuo-attentional system is assumed to play a key role in the perceptual binding of multiple features into objects (Kahneman, Treisman, & Gibbs, 1992;Treisman, 1988). ...
Article
Visual working memory (VWM) allows us to store goal-relevant information to guide future behavior. Prior work suggests that VWM is spatially organized and relies on spatial attention directed toward locations at which memory items were encoded, even if location is task irrelevant. Importantly, attention often needs to be dynamically redistributed between locations, for example, in preparation for an upcoming probe. Very little is known about how attentional resources are distributed between multiple locations during a VWM task, and even less about the dynamic changes governing such attentional shifts over time. This is largely due to the inability to use behavioral outcomes to reveal fast dynamic changes within trials. Here we demonstrate that EEG steady-state visual evoked potentials (SSVEPs) can successfully track the dynamic allocation of spatial attention during a VWM task. Participants were presented with to-be-memorized gratings and distractors at two distinct locations, tagged with flickering discs. This allowed us to dynamically track attention allocated to memory and distractor items via their coupling with space by quantifying the amplitude and coherence of SSVEP responses in the EEG signal to flickering stimuli at the former memory and distractor locations. SSVEP responses did not differ between memory and distractor locations during early maintenance. However, shortly before probe comparison, we observed a decrease in SSVEP coherence over distractor locations, indicative of a reallocation of spatial attentional resources. RTs were shorter when preceded by stronger decreases in SSVEP coherence at distractor locations, likely reflecting attentional shifts from the distractor to the probe or memory location. Broader significance: We demonstrate that SSVEPs can inform about dynamic processes in VWM, even if location does not have to be reported by participants. This finding not only supports the notion of a spatially organized VWM but also reveals that SSVEPs betray a dynamic prioritization process of working memory items and locations over time that is directly predictive of memory performance.
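Frequency tagging of this kind is typically quantified from the spectral content of occipital EEG at the flicker frequencies. The sketch below shows one common way to compute SSVEP amplitude and inter-trial phase coherence at a tagged frequency; the array shapes and the coherence measure are assumptions for illustration, not necessarily the exact metrics used in this study.

```python
import numpy as np

def ssvep_amplitude_and_itc(epochs, sfreq, tag_freq):
    """Quantify SSVEP responses at a flicker ("tag") frequency.

    epochs : hypothetical array (trials, timepoints) of EEG from occipital sensors.
    Returns the mean spectral amplitude at tag_freq and the inter-trial phase
    coherence (one common operationalization of SSVEP "coherence").
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - tag_freq))              # FFT bin closest to the tag frequency
    spectrum = np.fft.rfft(epochs, axis=1)[:, k]          # one complex coefficient per trial
    amplitude = np.abs(spectrum).mean()                   # mean amplitude across trials
    itc = np.abs(np.mean(spectrum / np.abs(spectrum)))    # phase consistency across trials
    return amplitude, itc
```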
... In line with this idea, 'sensorimotor recruitment' models of WM have suggested that visual information is maintained in the same stimulus-selective regions that are responsible for perceiving that information (D'Esposito & Postle, 2015;Harrison & Tong, 2009;Pasternak & Greenlee, 2005;Scimeca et al., 2018;Serences et al., 2009). Our results provide supporting evidence for this sensorimotor recruitment account. ...
Preprint
Full-text available
Working memory (WM) is the ability to retain and manipulate information in mind, which allows mnemonic representations to flexibly guide behavior. Successful WM requires that objects' individual features are bound into cohesive representations; however, the mechanisms supporting feature binding remain unclear. Binding errors (or swaps) provide a window into the intrinsic capacity limits of WM. We tested the hypothesis that binding in WM is accomplished via neural phase synchrony and that swaps result from its perturbations. Using magnetoencephalography data collected from human subjects in a task designed to induce swaps, we showed that swaps are characterized by reduced phase-locked oscillatory activity during memory retention. We found that this reduction arises from increased phase-coding variability in the alpha band, over a distributed network of sensorimotor areas. Our findings support the notion that feature binding in WM is accomplished through phase-coding dynamics that emerge from the competition between different memories.
... Visual working memory is often assumed to be deeply perceptual in nature (e.g., Serences et al., 2009;Harrison & Tong., 2009;Scimeca et al., 2018; but also see Xu, 2017) and is thus often thought to be best assessed independently of higher-level factors such as prior knowledge (Cowan, 2001). Based on this, the majority of studies have used artificial and abstract visual stimuli (e.g., colored circles, oriented lines) and have concluded that visual working memory capacity is "fixed" either in terms of the number of objects (Awh et al., 2007;Luck & Vogel, 1997), the number of parts per object (Luck, 2008), or in terms of a fixed resource pool (Bays et al., 2009). ...
Preprint
Full-text available
Visual working memory is traditionally studied using abstract, meaningless stimuli. While studies using such simplified stimuli have been insightful in understanding the mechanisms of visual working memory, they also potentially undermine our ability to understand how people encode and store conceptually rich and meaningful stimuli in the real world. Recent studies demonstrate that meaningful and familiar visual stimuli that connect to existing knowledge are better remembered compared to abstract colors or shapes, indicating that meaning can unlock additional working memory capacity. These findings challenge current models of visual working memory and suggest that its capacity is not fixed but depends on the type of information that is being remembered, and in particular how that information connects to pre-existing knowledge.
... First, it is possible that more accurate saccades were more closely targeting the underlying cortical space that maintained the memory representation. This view is in line with the idea of sensory recruitment (Harrison & Tong, 2009;Serences et al., 2009; for reviews, see Pasternak & Greenlee, 2005;Serences, 2016), and suggests that early visual cortex could constitute an interface between visual working memory and the oculomotor system that accounts for the reported trial-by-trial variations. In consequence, it could explain the marked difference between our results and the findings of Greenwood et al. (2017): while we demonstrate that saccadic selection in memory and saccade metrics covary on a single-trial level, saccade parameters and crowding remained uncorrelated even after standardization in their study. ...
Article
Full-text available
Visual working memory and actions are closely intertwined. Memory can guide our actions, but actions also impact what we remember. Even during memory maintenance, actions such as saccadic eye movements select content in visual working memory, resulting in better memory at locations that are congruent with the action goal as compared to incongruent locations. Here, we further substantiate the claim that saccadic eye movements are fundamentally linked to visual working memory by analyzing a large data set (>100k trials) of nine experiments (eight of them previously published). Using Bayesian hierarchical models, we demonstrate robust saccadic selection across the full range of probed saccade directions, manifesting as better memory performance at the saccade goal irrespective of its location in the visual field. By inspecting individual differences in saccadic selection, we show that saccadic selection was highly prevalent in the population. Moreover, both saccade metrics and visual working memory performance varied considerably across the visual field. Crucially, however, both idiosyncratic and systematic visual field anisotropies were not correlated between visual working memory and the oculomotor system, suggesting that they resulted from different sources (e.g., rely on separate spatial maps). In stark contrast, trial-by-trial variations in saccade metrics were strongly associated with memory performance: At any given location, shorter saccade latencies and more accurate saccades were associated with better memory performance, undergirding a robust link between action selection and visual memory.
... There is also evidence for a recruitment of primary sensory areas like V1 in visual WM processes (D'Esposito, 2007;D'Esposito and Postle, 2015;Lara and Wallis, 2015;Pasternak and Greenlee, 2005;Scimeca et al., 2018;Serences, 2016;Sreenivasan et al., 2014;Supèr et al., 2001;van Kerkoerle et al., 2017). For example, information held in visual WM can be decoded from V1 activity (Christophel et al., 2018;Emrich et al., 2013;Ester et al., 2013;Ester et al., 2009;Harrison and Tong, 2009;Lawrence et al., 2018;Lorenc et al., 2018;Rademaker et al., 2019;Serences et al., 2009;Wolff et al., 2015;Wolff et al., 2017); the volume (Bergmann et al., 2016) and activity (Ester et al., 2013;Iamshchinina et al., 2021a, b) of V1 are positively correlated with behavioural performance in WM tasks. Early evidence suggests that WM is mediated by persistent firing during the delay period (Constantinidis et al., 2018;Funahashi et al., 1989;Fuster and Alexander, 1971;Haller et al., 2018;Kaminski et al., 2017;Kornblith et al., 2017;Kubota and Niki, 1971). ...
... The detrimental effects of perceptual distractors on WM (VWM; Bennett and Cortese, 1996;Magnussen et al., 1991;Magnussen and Greenlee, 1992;Wais et al., 2010) can be due to the competition among distractors and WM items to be represented in the early visual cortices (Silvanto and Soto, 2012;Sneve et al., 2011). This is in line with the sensory recruitment model of visual WM, which suggests that visual processing of perceptual stimuli shares similar networks that maintain the visual details of these perceptual stimuli (Emrich et al., 2013;Harrison and Tong, 2009;Serences et al., 2009). Thus, when information retrieved from LTM is reactivated in WM, perceptual inputs can disrupt these reactivated memories despite the initial retrieval being successful. ...
... Task-relevant information is thought to be encoded more efficiently during visual working memory (VWM) maintenance (Jackson et al., 2017;Serences et al., 2009) and in addition to decision processes, SD is thought to arise from post-perceptual memory processes. Bliss et al. (2017) found stronger SD effects at longer compared with shorter intertrial intervals and delays between stimulus and response (peaking at 3 and 6 s, respectively) for spatial position judgments, suggesting that the effect is due to VWM processes. ...
Article
Full-text available
Serial dependence (SD) refers to the effect in which a person's current perceptual judgment is attracted toward recent stimulus history. Perceptual and memory processes, as well as response and decisional biases, are thought to contribute to SD effects. The current study examined the processing stages of SD facial identity effects in the context of task-related decision processes and how such effects may differ from visual working memory (VWM) interactions. In two experiments, participants were shown a series of two sequentially presented face images. In Experiment 1, the two faces were separated by an interstimulus interval (ISI) of 1, 3, 6, or 10 s, and participants were instructed to reproduce the second face after a varying response delay of 0, 1, 3, 6, or 10 s. Results showed that SD effects occurred most consistently at an ISI of 1 s and response delays of 1 and 6 s, consistent with early and late stages of processing. In Experiment 2, the ISI was held constant at 1 s, and to separate SD from VWM interactions, participants were post-cued to reproduce either the first or the second face. When the second face was the target, SD effects again occurred at response delays of 1 and 6 s, but not when the first face was the target. Together, the results demonstrate that SD facial identity effects occur independently of task-related processes in a distinct temporal fashion and suggest that SD and VWM interactions may rely on separate underlying mechanisms.
Preprint
Full-text available
During visual imagination, a perceptual representation is activated in the absence of sensory input. This is sometimes described as seeing with the mind's eye. A number of physiological studies indicate that the brain uses more or less the same neural resources for real visual perception and visual imagination. The intensity of visual imagination is typically assessed with questionnaires, while more objective measures are missing. The aim of the present study was to test a new experimental paradigm that may allow imagination to be quantified objectively. For this we used priming and adaptation effects during observation of ambiguous figures. Our perception of an ambiguous stimulus is unstable and alternates spontaneously between two possible interpretations. If we first observe an unambiguous stimulus variant (the conditioning stimulus), the subsequently presented ambiguous test stimulus can be perceived either in the same way as the conditioning stimulus (priming effect) or in the opposite way (adaptation effect), as a function of the conditioning time. We tested for these classical conditioning effects (priming and adaptation) using an ambiguous Necker cube and letter/number stimuli as test stimuli and unambiguous variants thereof as conditioning stimuli. In a second experimental condition, we tested whether the previous imagination of an unambiguous conditioning stimulus variant, instead of its observation, can have similar conditioning effects on the subsequent test stimulus. We found no systematic classical conditioning effect at the group level, neither for the cube stimuli nor for the letter/number stimuli. However, highly significant correlations between the effects of the Real and Imaginary conditions were observed for both stimulus types. The absence of classical conditioning effects at the group level may be explained by our use of only one conditioning time, which may fit the individual priming and adaptation constants of some of our participants but not of others. Our strong correlations indicate that observers with clear classical conditioning effects show about the same type (priming or adaptation) and intensity of imaginary conditioning effects. As a consequence, not only past perceptual experiences but also past imaginations can influence our current percepts. This is further confirmation that the mechanisms underlying perception and imagination are similar. Our post hoc qualitative observations from three self-identified aphantasic observers make our paradigm a promising objective measure to identify aphantasia.
Article
Visual short-term memory (VSTM), the ability to store information that is no longer visible, is essential for human behavior. VSTM limits vary across the population and are correlated with overall cognitive ability. It has been proposed that low-memory individuals are unable to select only relevant items for storage and that these limitations are greatest when memory demands are high. However, it is unknown whether these effects simply reflect task difficulty and whether they impact the quality of memory representations. Here we varied the number of items presented, or set size, to investigate the effect of memory demands on visual short-term memory performance in low- and high-memory groups. Group differences emerged as set size exceeded memory limits, even when task difficulty was controlled. In a change-detection task, the low-memory group performed more poorly when set size exceeded their memory limits. We then predicted that encoding items beyond measured memory limits would result in degraded fidelity of memory representations in low-memory individuals. A continuous report task confirmed that low-, but not high-, memory individuals demonstrated decreased memory fidelity as set size exceeded measured memory limits. The current study demonstrates that items held in VSTM are stored differently across groups and task demands. These results link the ability to maintain high-quality representations with overall cognitive ability.
Article
In the present study, we used an impulse perturbation method to probe working memory maintenance of colors in neurally active and activity-quiescent states, focusing on a set of pre-registered analyses. We analyzed the electroencephalogram (EEG) data of 30 participants who completed a delayed match-to-sample working memory task, in which one of the two items that were presented was retro-cued as task relevant. The analyses revealed that both cued and uncued colors were decodable from impulse-evoked activity, the latter in contrast to previous reports of working memory for orientation gratings. Decoding of colors from oscillations in the alpha band showed that cued items could be decoded therein whereas uncued items could not. Overall, the outcomes suggest that subtle differences exist between the representation of colors and that of stimuli with spatial properties, but the present results also demonstrate that, regardless of their specific neural state, both are accessible through visual impulse perturbation.
Preprint
Full-text available
Working memory (WM) enables flexible behavior by forming a temporal bridge between recent sensory events and possible actions. Through re-analysis of published human EEG studies, we show that successful WM-guided behaviors are accompanied by bidirectional cortical traveling waves linking sensory cortical areas implicated in WM storage with frontal cortical areas implicated in response selection and production. We identified a feedforward (occipital-to-frontal) theta wave that emerged shortly after a response probe and whose latency predicted intra- and inter-individual differences in response times, and a feedback (frontal-to-occipital) beta wave that emerged after response termination. Importantly, both waveforms were present only around the time of an overt motor response. When participants could select task-relevant WM content and prepare, but not execute, a task-appropriate action, neither waveform was observed. Our observations suggest that cortical traveling waves play an important role in the generation and execution of WM-guided behaviors.
Article
When memorizing an integrated object such as a Kanizsa figure, the completion of parts into a coherent whole is attained by grouping processes which render a whole-object representation in visual working memory (VWM). The present study measured event-related potentials (ERPs) and oscillatory amplitudes to track these processes of encoding and representing multiple features of an object in VWM. To this end, a change detection task was performed, which required observers to memorize both the orientations and colors of six “pacman” items while inducing configurations of the pacmen that systematically varied in terms of their grouping strength. The results revealed an effect of object configuration in VWM despite physically constant visual input: change detection for both orientation and color features was more accurate with increased grouping strength. At the electrophysiological level, the lateralized ERPs and alpha activity mirrored this behavioral pattern. Perception of the orientation features gave rise to the encoding of a grouped object as reflected by the amplitudes of the Ppc. The grouped object structure, in turn, modulated attention to both orientation and color features as indicated by the enhanced N1pc and N2pc. Finally, during item retention, the representation of individual objects and the concurrent allocation of attention to these memorized objects were modulated by grouping, as reflected by variations in the CDA amplitude and a concurrent lateralized alpha suppression, respectively. These results indicate that memorizing multiple features of grouped, to-be-integrated objects involves multiple, sequential stages of processing, providing support for a hierarchical model of object representations in VWM.
Article
It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery.
Article
Items held in visual working memory can be quickly updated, replaced, removed, and even manipulated in accordance with current behavioral goals. Here, we use multivariate pattern analyses to identify the patterns of neuronal activity that realize the executive control processes supervising these flexible stores. We find that portions of the middle temporal gyrus and the intraparietal sulcus represent what item is cued for continued memorization independently of representations of the item itself. Importantly, this selection-specific activity could not be explained by sensory representations of the cue and is only present when control is exerted. Our results suggest that the selection of memorized items might be controlled in a distributed and decentralized fashion. This evidence provides an alternative perspective to the notion of “domain general” central executive control over memory function.
Preprint
During perception, decoding the orientation of gratings depends on complex interactions between the orientation of the grating, aperture edges, and topographic structure of the visual map. Here, we aimed to test how aperture biases described during perception affect working memory (WM) decoding. For memoranda, we used gratings multiplied by radial and angular modulators to generate orthogonal aperture biases for identical orientations. Therefore, if WM representations are simply maintained sensory representations, they would have similar aperture biases. If they are abstractions of sensory features, they would be unbiased and the modulator would have no effect on orientation decoding. Neural patterns of delay period activity while maintaining the orientation of gratings with one modulator (e.g. radial) were interchangeable with patterns while maintaining gratings with the other modulator (e.g. angular) in visual and parietal cortex, suggesting that WM representations are insensitive to aperture biases during perception. Then, we visualized memory abstractions of stimuli using models of visual field map properties. Regardless of aperture biases, WM representations of both modulated gratings were recoded into a single oriented line. These results provide strong evidence that visual WM representations are abstractions of percepts, immune to perceptual aperture biases, and compel revisions of WM theory.
Article
One of the most important human faculties is the ability to acquire not just new memories but the capacity to perform entirely new tasks. However, little is known about the brain mechanisms underlying the learning of novel tasks. Specifically, it is unclear to what extent learning of different tasks depends on domain-general and/or domain-specific brain mechanisms. Here human subjects (n = 45) learned to perform 6 new tasks while undergoing functional MRI. The different tasks required the engagement of perceptual, motor, and various cognitive processes related to attention, expectation, speed-accuracy tradeoff, and metacognition. We found that a bilateral frontoparietal network was more active during the initial compared with the later stages of task learning, and that this effect was stronger for task variants requiring more new learning. Critically, the same frontoparietal network was engaged by all 6 tasks, demonstrating its domain generality. Finally, although task learning decreased the overall activity in the frontoparietal network, it increased the connectivity strength between the different nodes of that network. These results demonstrate the existence of a domain-general brain network whose activity and connectivity reflect learning for a variety of new tasks, and thus may underlie the human capacity for acquiring new abilities.
Preprint
Full-text available
Visual working memory (WM) extensively interacts with visual perception. When information between the two processes is in conflict, cognitive control can be recruited to effectively mitigate the resultant interference. The current study investigated the neural bases of the control of conflict between visual WM and visual perception. We recorded the electroencephalogram (EEG) from 25 human subjects (13 male) performing a dual task combining visual WM and tilt discrimination, the latter occurring during the WM delay. The congruity in orientation between the memorandum and the discriminandum was manipulated. Behavioral data were fitted to a reinforcement-learning model of cognitive control to derive trial-wise estimates of demand for proactive and reactive control, which were then used for EEG analyses. The level of proactive control was associated with sustained frontal-midline theta activity preceding trial onset, as well as with the strength of the neural representation of the memorandum. Subsequently, discriminandum onset triggered a control prediction error signal that was reflected in a left frontal positivity. On trials when an incongruent discriminandum was not expected, reactive control that scaled with the prediction error acted to suppress the neural representation of the discriminandum, producing below-baseline decoding of the discriminandum that, in turn, exerted a repulsive serial bias on WM recall on the subsequent trial. These results illustrate the flexible recruitment of two modes of control and how their dynamic interplay acts to mitigate interference between simultaneously processed perceptual and mnemonic representations. Significance statement: One hallmark of human cognition is the context-dependent, flexible control of behavior. Here we studied the “mental juggling” required when, while holding information in mind, we have to respond to something that “pops up in front of us” before returning to the interrupted task. Using parameter estimates from a reinforcement-learning model, we analyzed EEG data to identify neural correlates of two discrete modes of cognitive control that act to minimize interference between perception and WM. Proactive control, indexed by frontal midline theta power, increased prior to trial onset when a high level of conflict was expected. Reactive control acted to suppress the representation of items likely to interfere with performance, a processing step with consequences for the subsequent trial.
Article
Full-text available
Introduction: Research on the neural mechanisms of perceptual decision-making has typically focused on simple categorical choices, say between two alternative motion directions. Studies on such discrete alternatives have often suggested that choices are encoded either in a motor-based or in an abstract, categorical format in regions beyond sensory cortex.
Methods: In this study, we used motion stimuli that could vary anywhere between 0° and 360° to assess how the brain encodes choices for features that span the full sensory continuum. We employed a combination of neuroimaging and encoding models based on Gaussian process regression to assess how either stimuli or choices were encoded in brain responses.
Results: We found that single-voxel tuning patterns could be used to reconstruct the trial-by-trial physical direction of motion as well as the participants' continuous choices. Importantly, these continuous choice signals were primarily observed in early visual areas. The tuning properties in this region generalized between choice encoding and stimulus encoding, even for reports that reflected pure guessing.
Discussion: We found only little information related to the decision outcome in regions beyond visual cortex, such as parietal cortex, possibly because our task did not involve differential motor preparation. This could suggest that decisions for continuous stimuli can take place already in sensory brain regions, potentially using mechanisms similar to the sensory recruitment in visual working memory.
Article
Full-text available
Used a "transsaccadic" partial report procedure to measure memory for position and identity information across saccades. Delaying the partial-report cue after the eye movement had little effect on report accuracy. Mask presentation hindered recall only at the shortest delay. Accuracy was much higher when the letter array contained 6 letters than when it contained 10 letters. Intra-array errors were much more frequent than extra-array errors. These results suggest that memory across eye movements decays slowly, has a limited capacity, is maskable for a brief time, and retains identity information better than position information. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A neural model is constructed based on the structure of a visual orientation hypercolumn in mammalian striate cortex. It is then assumed that the perceived orientation of visual contours is determined by the pattern of neuronal activity across orientation columns. Using statistical estimation theory, limits on the precision of orientation estimation and discrimination are calculated. These limits are functions of single unit response properties such as orientation tuning width, response amplitude and response variability, as well as the degree of organization in the neural network. It is shown that a network of modest size, consisting of broadly orientation selective units, can reliably discriminate orientation with a precision equivalent to human performance. Of the various network parameters, the discrimination threshold depends most critically on the number of cells in the hypercolumn. The form of the dependence on cell number correctly predicts the results of psychophysical studies of orientation discrimination. The model system's performance is also consistent with psychophysical data in two situations in which human performance is not optimal. First, interference with orientation discrimination occurs when multiple stimuli activate cells in the same hypercolumn. Second, systematic errors in the estimation of orientation can occur when a stimulus is composed of intersecting lines. The results demonstrate that it is possible to relate neural activity to visual performance by an examination of the pattern of activity across orientation columns. This provides support for the hypothesis that perceived orientation is determined by the distributed pattern of neural activity. The results also encourage the view that limits on visual discrimination are determined by the responses of many neurons rather than the sensitivity of individual cells.
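The core idea here, reading out orientation from the joint activity of many broadly tuned units rather than from single cells, can be illustrated with a maximum-likelihood decoder applied to a simulated hypercolumn. All parameters below (cell count, tuning width, Poisson noise) are illustrative assumptions rather than the cited model's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 64
prefs = np.linspace(0, 180, n_cells, endpoint=False)   # preferred orientations (deg)
tuning_width = 30.0                                     # tuning s.d. (deg)
peak_rate = 20.0                                        # spikes per trial at preference

def mean_rates(theta):
    # Circular orientation difference in [-90, 90] deg, Gaussian tuning plus baseline
    d = np.angle(np.exp(2j * np.deg2rad(theta - prefs))) / 2
    return peak_rate * np.exp(-np.rad2deg(d) ** 2 / (2 * tuning_width ** 2)) + 1.0

def ml_estimate(spike_counts, grid=np.arange(0, 180, 0.5)):
    # Poisson log-likelihood of the whole population response at each candidate orientation
    loglik = [np.sum(spike_counts * np.log(mean_rates(g)) - mean_rates(g)) for g in grid]
    return grid[int(np.argmax(loglik))]

true_theta = 47.0
counts = rng.poisson(mean_rates(true_theta))            # one simulated trial
print(ml_estimate(counts))                               # close to 47 for this population
```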
Article
Full-text available
The borders of human visual areas V1, V2, VP, V3, and V4 were precisely and noninvasively determined. Functional magnetic resonance images were recorded during phase-encoded retinal stimulation. This volume data set was then sampled with a cortical surface reconstruction, making it possible to calculate the local visual field sign (mirror image versus non-mirror image representation). This method automatically and objectively outlines area borders because adjacent areas often have the opposite field sign. Cortical magnification factor curves for striate and extrastriate cortical areas were determined, which showed that human visual areas have a greater emphasis on the center-of-gaze than their counterparts in monkeys. Retinotopically organized visual areas in humans extend anteriorly to overlap several areas previously shown to be activated by written words.
Article
Full-text available
The receptive field properties of cells in layers 2, 3, and 4 of area 17 (V1) of the monkey were studied quantitatively using colored and broad-band gratings, bars, and spots. Many cells in all regions studied responded selectively to stimulus orientation, direction, and color. Nearly all cells (95%) in layers 2 and 3 exhibited statistically significant orientation preferences (biases), most exhibited at least some color sensitivity, and many were direction sensitive. The degree of selectivity of cells in layers 2 and 3 varied continuously among cells; we did not find discrete regions containing cells sensitive to orientation and direction but not color, and vice versa. There was no relationship between the degree of orientation sensitivity of the cells studied and their degree of color sensitivity. There was also no obvious relationship between the receptive field properties studied and the cells' location relative to cytochrome oxidase-rich regions. Our findings are difficult to reconcile with the hypothesis that there is a strict segregation of cells sensitive to orientation, direction, and color in layers 2 and 3. In fact, the present results suggest the opposite since most cells in these layers are selective for a number of stimulus attributes.
Article
Full-text available
Inferior temporal (IT) cortex of primates is known to play an important role in visual memory. Previous studies of IT neurons during performance of working memory tasks have found modulation of responses when a current stimulus matched an item in memory; however, this effect was lost if other stimuli intervened in the retention interval. To examine how IT cortex retains memories while new stimuli are activating the cells, we recorded from IT neurons while monkeys performed a delayed matching-to-sample task, with multiple intervening items between the sample and matching test stimulus. About half of the cells responded differently to a test stimulus if it matched the sample, and this difference was maintained following intervening stimuli. For most of the affected cells, the responses to matching stimuli were suppressed; however, for a few cells the opposite effect was seen. Temporal contiguity alone could not explain the results, as there was no modulation of responses when a stimulus on one trial was repeated on the next trial. Thus, an active reset mechanism appears to restrict the memory comparison to just the stimuli presented within a trial. The suppressive effects appear to be generated within or before IT cortex since the suppression of response to matching stimuli began almost immediately with the onset of the visual response. The memory of the sample stimulus affected not only the responses to matching stimuli but also those to nonmatching stimuli. There was suggestive evidence that the more similar a nonmatching stimulus to the sample, the more the response was suppressed. About a quarter of the cells showed stimulus-selective activity in the delay interval following the sample. However, this activity appeared to be eliminated by intervening stimuli. Thus, it is unlikely that delay-interval activity in IT contributed to the performance of this particular version of delayed matching to sample. To determine how much information about the match-nonmatch status of the stimulus was conveyed by individual neurons, we analyzed the responses with discriminant analysis. The responses of an individual IT neuron could be used to classify a stimulus as matching or nonmatching on about 60% of the trials. To achieve the same performance as the animal would require averaging the responses of a minimum of 25 IT neurons. There was no evidence that mnemonic information was carried by temporal variations in the spike trains. By contrast, there was a modest amount of temporal variation in sensory responses to different visual stimuli.(ABSTRACT TRUNCATED AT 400 WORDS)
Article
Full-text available
We often search for a face in a crowd or for a particular object in a cluttered environment. In this type of visual search, memory interacts with attention: the mediating neural mechanisms should include a stored representation of the object and a means for selecting that object from among others in the scene. Here we test whether neurons in inferior temporal cortex, an area known to be important for high-level visual processing, might provide these components. Monkeys were presented with a complex picture (the cue) to hold in memory during a delay period. The cue initiated activity that persisted through the delay among the neurons that were tuned to its features. The monkeys were then given 2-5 choice pictures and were required to make an eye movement to the one (the target) that matched the cue. About 90-120 milliseconds before the onset of the eye movement to the target, responses to non-targets were suppressed and the neuronal response was dominated by the target. The results suggest that inferior temporal cortex is involved in selecting the objects to which we attend and foveate.
Article
Full-text available
The linear transform model of functional magnetic resonance imaging (fMRI) hypothesizes that fMRI responses are proportional to local average neural activity averaged over a period of time. This work reports results from three empirical tests that support this hypothesis. First, fMRI responses in human primary visual cortex (V1) depend separably on stimulus timing and stimulus contrast. Second, responses to long-duration stimuli can be predicted from responses to shorter duration stimuli. Third, the noise in the fMRI data is independent of stimulus contrast and temporal period. Although these tests can not prove the correctness of the linear transform model, they might have been used to reject the model. Because the linear transform model is consistent with our data, we proceeded to estimate the temporal fMRI impulse-response function and the underlying (presumably neural) contrast-response function of human V1.
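The model's key empirical test, that responses to long-duration stimuli can be predicted from responses to shorter ones, follows directly from treating the fMRI response as the convolution of local neural activity with a fixed impulse response. A minimal sketch under an assumed gamma-shaped impulse-response function and illustrative timings:

```python
import numpy as np

TR = 1.0                                              # seconds per sample
t = np.arange(0, 24, TR)
hrf = t ** 5 * np.exp(-t)                             # simple gamma-like impulse response
hrf /= hrf.sum()

def predicted_bold(stim_duration_s, total_s=60):
    neural = np.zeros(int(total_s / TR))
    neural[: int(stim_duration_s / TR)] = 1.0         # boxcar of local neural activity
    return np.convolve(neural, hrf)[: neural.size]    # linear system: convolve with HRF

# Under the linear model, a 12-s response equals four 3-s responses shifted by 0, 3, 6, 9 s
superposed = sum(np.roll(predicted_bold(3), k * 3) for k in range(4))
print(np.allclose(predicted_bold(12), superposed))    # True: responses superpose linearly
```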
Article
Full-text available
1. Electrophysiological recording data from multiple cells in motor cortex and elsewhere often are interpreted using the population vector method pioneered by Georgopoulos and coworkers. This paper proposes an alternative method for interpreting coding across populations of cells that may succeed under circumstances in which the population vector fails. 2. Population codes are analyzed using probability theory to find the complete conditional probability density of a movement parameter given the firing pattern of a set of cells. 3. The conditional probability density when a single cell fires is proportional to the shape of the cell's tuning curve of firing rate in response to different movement parameters. 4. The conditional density when multiple cells fire is proportional to the product of their tuning curves. 5. Movement parameters can be estimated from the conditional density using statistical maximum likelihood or minimum mean-squared error methods. 6. Simulations show that density estimation correctly finds movement directions for nonuniform distributions of preferred directions and noncosine cell tuning curves, whereas the population vector method fails for these cases. 7. Probability methods thus provide a statistically based alternative to the population vector for interpreting electrophysiological recording data from multiple cells.
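Points 3 to 6 of this abstract describe a concrete recipe: the conditional density given a single spike is proportional to that cell's tuning curve, the density given several spikes is proportional to the product of the relevant tuning curves, and the estimate is the maximum of that density. A small sketch with assumed von Mises (non-cosine) tuning curves and illustrative preferred directions:

```python
import numpy as np

directions = np.deg2rad(np.arange(0, 360, 1.0))        # candidate movement directions
preferred = np.deg2rad(np.array([10.0, 80.0, 95.0]))   # preferred directions of cells that fired
kappa = 2.0                                            # tuning concentration (assumed)

def tuning_curve(pref):
    f = np.exp(kappa * np.cos(directions - pref))      # von Mises tuning over direction
    return f / f.max()

# Conditional density given that each of these cells fired: product of their tuning curves
density = np.prod([tuning_curve(p) for p in preferred], axis=0)
density /= density.sum()
ml_direction = np.rad2deg(directions[np.argmax(density)])   # maximum-likelihood estimate
print(round(ml_direction, 1))
```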
Article
Full-text available
Working memory involves the short-term maintenance of an active representation of information so that it is available for further processing. Visual working memory tasks, in which subjects retain the memory of a stimulus over brief delays, require both the perceptual encoding of the stimulus and the subsequent maintenance of its representation after the stimulus is removed from view. Such tasks activate multiple areas in visual and prefrontal cortices. To delineate the roles these areas play in perception and working memory maintenance, we used functional magnetic resonance imaging (fMRI) to obtain dynamic measures of neural activity related to different components of a face working memory task-non-selective transient responses to visual stimuli, selective transient responses to faces, and sustained responses over memory delays. Three occipitotemporal areas in the ventral object vision pathway had mostly transient responses to stimuli, indicating their predominant role in perceptual processing, whereas three prefrontal areas demonstrated sustained activity over memory delays, indicating their predominant role in working memory. This distinction, however, was not absolute. Additionally, the visual areas demonstrated different degrees of selectivity, and the prefrontal areas demonstrated different strengths of sustained activity, revealing a continuum of functional specialization, from occipital through multiple prefrontal areas, regarding each area's relative contribution to perceptual and mnemonic processing.
Article
Full-text available
Perceptually, color is used to discriminate objects by hue and to identify color boundaries. The primate retina and the lateral geniculate nucleus (LGN) have cell populations sensitive to color modulation, but the role of the primary visual cortex (V1) in color signal processing is uncertain. We re-evaluated color processing in V1 by studying single-neuron responses to luminance and to equiluminant color patterns equated for cone contrast. Many neurons respond robustly to both equiluminant color and luminance modulation (color-luminance cells). Also, there are neurons that prefer luminance (luminance cells), and a few neurons that prefer color (color cells). Surprisingly, most color-luminance cells are spatial-frequency tuned, with approximately equal selectivity for chromatic and achromatic patterns. Therefore, V1 retains the color sensitivity provided by the LGN, and adds spatial selectivity for color boundaries.
Article
Full-text available
Functional magnetic resonance imaging (fMRI) is widely used to study the operational organization of the human brain, but the exact relationship between the measured fMRI signal and the underlying neural activity is unclear. Here we present simultaneous intracortical recordings of neural signals and fMRI responses. We compared local field potentials (LFPs), single- and multi-unit spiking activity with highly spatio-temporally resolved blood-oxygen-level-dependent (BOLD) fMRI responses from the visual cortex of monkeys. The largest magnitude changes were observed in LFPs, which at recording sites characterized by transient responses were the only signal that significantly correlated with the haemodynamic response. Linear systems analysis on a trial-by-trial basis showed that the impulse response of the neurovascular system is both animal- and site-specific, and that LFPs yield a better estimate of BOLD responses than the multi-unit responses. These findings suggest that the BOLD contrast mechanism reflects the input and intracortical processing of a given area rather than its spiking output.
Article
Full-text available
The functional architecture of the object vision pathway in the human brain was investigated using functional magnetic resonance imaging to measure patterns of response in ventral temporal cortex while subjects viewed faces, cats, five categories of man-made objects, and nonsense pictures. A distinct pattern of response was found for each stimulus category. The distinctiveness of the response to a given category was not due simply to the regions that responded maximally to that category, because the category being viewed also could be identified on the basis of the pattern of response when those regions were excluded from the analysis. Patterns of response that discriminated among all categories were found even within cortical regions that responded maximally to only one category. These results indicate that the representations of faces and objects in ventral temporal cortex are widely distributed and overlapping.
Article
Full-text available
Higher order cognition depends on the ability to recall information from memory and hold it in mind to guide future behavior. To specify the neural mechanisms underlying these processes, we used event-related functional magnetic resonance imaging to compare brain activity during the performance of a visual associative memory task and a visual working memory task. Activity within category-selective subregions of inferior temporal cortex reflected the type of information that was actively maintained during both the associative memory and working memory tasks. In addition, activity in the anterior prefrontal cortex and hippocampus was specifically enhanced during associative memory retrieval. These data are consistent with the view that the active maintenance of visual information is supported by activation of object representations in inferior temporal cortex, but that goal-directed associative memory retrieval additionally depends on top-down signals from the anterior prefrontal cortex and medial temporal lobes.
Article
Full-text available
The potential for human neuroimaging to read out the detailed contents of a person's mental state has yet to be fully explored. We investigated whether the perception of edge orientation, a fundamental visual feature, can be decoded from human brain activity measured with functional magnetic resonance imaging (fMRI). Using statistical algorithms to classify brain states, we found that ensemble fMRI signals in early visual areas could reliably predict on individual trials which of eight stimulus orientations the subject was seeing. Moreover, when subjects had to attend to one of two overlapping orthogonal gratings, feature-based attention strongly biased ensemble activity toward the attended orientation. These results demonstrate that fMRI activity patterns in early visual areas, including primary visual cortex (V1), contain detailed orientation information that can reliably predict subjective perception. Our approach provides a framework for the readout of fine-tuned representations in the human brain and their subjective contents.
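The decoding approach summarized above can be illustrated with a minimal sketch: a linear classifier is trained on multivoxel activity patterns and evaluated with cross-validation. The synthetic data, the voxel and trial counts, and the use of scikit-learn's LogisticRegression are illustrative assumptions and do not reproduce the authors' actual pipeline.

# Minimal MVPA sketch: decode stimulus orientation from (synthetic) multivoxel
# fMRI patterns with a cross-validated linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_orientation, n_voxels, n_orientations = 20, 100, 8

# Assume each orientation evokes a weak but reproducible voxel-bias pattern
# superimposed on trial-by-trial noise.
bias_patterns = rng.normal(0, 0.5, size=(n_orientations, n_voxels))
X = np.vstack([
    bias_patterns[k] + rng.normal(0, 1.0, size=(n_trials_per_orientation, n_voxels))
    for k in range(n_orientations)
])
y = np.repeat(np.arange(n_orientations), n_trials_per_orientation)

# Cross-validated accuracy of a linear decoder over the eight orientations.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = {1/n_orientations:.2f})")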
Article
Full-text available
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations.
Article
Full-text available
Working memory refers to the temporary retention of information that was just experienced or just retrieved from long-term memory but no longer exists in the external environment. These internal representations are short-lived, but can be stored for longer periods of time through active maintenance or rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goal-directed behaviour. Empirical studies of working memory using neuroscientific techniques, such as neuronal recordings in monkeys or functional neuroimaging in humans, have advanced our knowledge of the underlying neural mechanisms of working memory. This rich dataset can be reconciled with behavioural findings derived from investigating the cognitive mechanisms underlying working memory. In this paper, I review the progress that has been made towards this effort by illustrating how investigations of the neural mechanisms underlying working memory can be influenced by cognitive models and, in turn, how cognitive models can be shaped and modified by neuroscientific data. One conclusion that arises from this research is that working memory can be viewed as neither a unitary nor a dedicated system. A network of brain regions, including the prefrontal cortex (PFC), is critical for the active maintenance of internal representations that are necessary for goal-directed behaviour. Thus, working memory is not localized to a single brain region but probably is an emergent property of the functional interactions between the PFC and the rest of the brain.
Article
Full-text available
Single-unit recording studies have demonstrated a close link between neural activity in the middle temporal (MT) area and motion perception. In contrast, researchers using functional magnetic resonance imaging and multivoxel pattern analysis methods have recently documented direction-specific responses within many regions of the visual system (e.g., visual cortical areas V1-V4v) not normally associated with motion processing. Our goal was to determine how these direction-selective response patterns directly relate to the conscious perception of motion. We dissociated neuronal responses associated with the perceptual experience of motion from the physical presence of motion in the display by asking observers to report the perceived direction of an ambiguous stimulus. Activation patterns in the human MT complex closely matched the reported perceptual state of the observer, whereas patterns in other visual areas did not. These results suggest that, even when selective responses to a given feature are distributed relatively broadly across the visual system, the conscious experience of that feature may be primarily based on activity within specialized cortical areas.
Article
Full-text available
Limits on the storage capacity of working memory significantly affect cognitive abilities in a wide range of domains, but the nature of these capacity limits has been elusive. Some researchers have proposed that working memory stores a limited set of discrete, fixed-resolution representations, whereas others have proposed that working memory consists of a pool of resources that can be allocated flexibly to provide either a small number of high-resolution representations or a large number of low-resolution representations. Here we resolve this controversy by providing independent measures of capacity and resolution. We show that, when presented with more than a few simple objects, human observers store a high-resolution representation of a subset of the objects and retain no information about the others. Memory resolution varied over a narrow range that cannot be explained in terms of a general resource pool but can be well explained by a small set of discrete, fixed-resolution representations.
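Independent estimates of capacity and resolution of this kind are typically obtained by fitting a mixture model to the distribution of recall errors, in which each response is either a noisy readout of the stored target (a von Mises distribution centered on zero error) or a random guess (a uniform distribution over the response circle). The sketch below shows such a likelihood and a maximum-likelihood fit; the parameter names, the fitting routine, and the simulated data are illustrative assumptions rather than the authors' exact analysis.

# Illustrative mixture-model likelihood for continuous-report recall errors:
# with probability (1 - g) the response reflects a noisy memory of the target
# (von Mises with concentration kappa); with probability g it is a uniform guess.
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def neg_log_likelihood(params, errors):
    g, kappa = params                        # guess rate, memory precision
    p_memory = vonmises.pdf(errors, kappa)   # centered on zero error
    p_guess = 1.0 / (2 * np.pi)              # uniform over the circle
    p = (1 - g) * p_memory + g * p_guess
    return -np.sum(np.log(p))

# Synthetic recall errors (radians) from a simulated observer with g = 0.3, kappa = 8.
rng = np.random.default_rng(1)
n = 300
errors = np.where(rng.random(n) < 0.3,
                  rng.uniform(-np.pi, np.pi, n),
                  vonmises.rvs(8, size=n, random_state=1))

fit = minimize(neg_log_likelihood, x0=[0.2, 5.0], args=(errors,),
               bounds=[(0.001, 0.999), (0.1, 100.0)])
print("Estimated guess rate and precision:", fit.x)

In slot-style accounts, the guess rate rises once the number of items exceeds capacity while precision stays within a narrow range; in pure resource accounts, precision should instead degrade continuously with load.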
Article
Full-text available
It has been shown that we have a highly capacity-limited representational space with which to store objects in visual working memory. However, most objects are composed of multiple feature attributes, and it is unknown whether observers can voluntarily store a single attribute of an object without necessarily storing all of its remaining features. In this study, we used a masking paradigm to measure the efficiency of encoding, and neurophysiological recordings to directly measure visual working memory maintenance while subjects viewed multifeature objects and were required to remember only a single feature or all of the features of the objects. We found that measures of both encoding and maintenance varied systematically as a function of which object features were task relevant. These experiments show that individuals can control which features of an object are selectively stored in working memory.
Article
Full-text available
The aim of this article is to understand how the fMRI response relates to neural activity. The vascular source of the fMRI signal places important limits on the technique. Because the hemodynamic response is sluggish, perhaps the fMRI response is proportional to the local average neural activity, averaged over a small region of the brain and averaged over a period of time. We will refer to this as the "linear transform model" of fMRI response. The linear transform model, specialized for a visual area of the brain, is depicted in Figure 1. According to this model, neural activity is a nonlinear function of the contrast of a visual stimulus, but fMRI response is a linear transform (averaged over time) of the neural activity in V1. Noise might be introduced at each stage of the process, but the effects of these individual noises can be summarized by a single noise source that is added to the output.
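A minimal numerical rendering of this linear transform model: the predicted fMRI time course is the convolution of a neural activity time course with a hemodynamic impulse response, plus additive noise. The double-gamma impulse response, the compressive contrast nonlinearity, and the sampling parameters below are common modeling choices assumed here for illustration, not values taken from the article.

# Sketch of the linear transform model: fMRI response ~ HRF convolved with
# local average neural activity, plus noise. All parameter values are assumptions.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                   # time step (s)
t = np.arange(0, 30, dt)

# Double-gamma hemodynamic impulse response (a common modeling choice).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Neural activity: a nonlinear (compressive) function of stimulus contrast,
# modeled here as a brief boxcar scaled by contrast.
def neural_activity(contrast, duration_s=2.0, total_s=60.0):
    resp = np.zeros(int(total_s / dt))
    resp[: int(duration_s / dt)] = contrast ** 0.5
    return resp

activity = neural_activity(contrast=0.8)
bold = np.convolve(activity, hrf)[: activity.size]            # linear transform
bold += np.random.default_rng(2).normal(0, 0.01, bold.size)   # measurement noise
print("Peak predicted BOLD response:", bold.max())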
Article
Previous fMRI results suggest that extrastriate visual areas have a predominant role in perceptual processing while the prefrontal cortex (PFC) has a predominant role in working memory. In contrast, single-unit recording studies in monkeys have demonstrated a relationship between extrastriate visual areas and visual working memory tasks. In this study we tested whether activity in both the PFC and fusiform face area (FFA) changed with increasing demands of an n-back task for gray-scale faces. Since stimulus presentation was identical across conditions, the n-back task allowed us to parametrically vary working memory demands across conditions while holding perceptual and motor demands constant. This study replicated the result of PFC areas of activation that increased directly with load n of the task. The novel finding in all subjects was FFA activation that also increased directly with load n of the task. Since perceptual demands were equivalent across the three task conditions, these findings suggest that activity in both the PFC and the FFA vary with face working memory demands.
Article
Working memory is often conceptualized as storage buffers that retain information briefly, rehearsal processes that refresh the buffers, and executive processes that manipulate the contents of the buffers. We review evidence about the brain mechanisms that may underlie storage and rehearsal in working memory. We hypothesize that storage is mediated by the same brain structures that process perceptual information and that rehearsal engages a network of brain areas that also controls attention to external stimuli. Keywords: working memory; neuroimaging; storage; rehearsal. Multiply 68 × 7 in your head. How did you perform this task? You might have first multiplied 8 × 7, noted that the answer is 56, carried the 5, multiplied 6 × 7 to yield 42, added the carried 5 to get 47, and then answered 476. Working memory is engaged by this task and others that require the brief storage and manipulation of information in the service of some goal. Working memory is a system that can store a small amount of information briefly, keeping that information quickly accessible and available for transformation by rules and strategies, while updating it frequently. Without working memory, people would not be able to reason, solve problems, speak and understand language, and engage in other activities associated with intelligent life.
Article
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Article
Short-term memory storage can be divided into separate subsystems for verbal information and visual information, and recent studies have begun to delineate the neural substrates of these working-memory systems. Although the verbal storage system has been well characterized, the storage capacity of visual working memory has not yet been established for simple, suprathreshold features or for conjunctions of features. Here we demonstrate that it is possible to retain information about only four colours or orientations in visual working memory at one time. However, it is also possible to retain both the colour and the orientation of four objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing sixteen individual features to be retained when distributed across four objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features, which places significant constraints on cognitive and neurobiological models of the temporary storage of visual information.
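Capacity in change-detection tasks of this kind is commonly summarized with a single estimate derived from hit and false-alarm rates. One widely used estimator, Cowan's K (shown here only as an illustration; individual studies, including the one above, may use related variants such as Pashler's formula), assumes that a change is detected whenever the changed item happens to be among the K stored items:

K = S × (H − F),

where S is the set size, H the hit rate, and F the false-alarm rate. For example, with S = 8, H = 0.65, and F = 0.15, the estimate is K = 8 × 0.50 = 4 items, in line with the roughly four-object capacity reported above.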
Article
Spatial selective attention and spatial working memory have largely been studied in isolation. Studies of spatial attention have provided clear evidence that observers can bias visual processing towards specific locations, enabling faster and better processing of information at those locations than at unattended locations. We present evidence supporting the view that this process of visual selection is a key component of rehearsal in spatial working memory. Thus, although working memory has sometimes been depicted as a storage system that emerges 'downstream' of early sensory processing, current evidence suggests that spatial rehearsal recruits top-down processes that modulate the earliest stages of visual analysis.
Article
In the vertebrate nervous system, sensory stimuli are typically encoded through the concerted activity of large populations of neurons. Classically, these patterns of activity have been treated as encoding the value of the stimulus (e.g., the orientation of a contour), and computation has been formalized in terms of function approximation. More recently, there have been several suggestions that neural computation is akin to a Bayesian inference process, with population activity patterns representing uncertainty about stimuli in the form of probability distributions (e.g., the probability density function over the orientation of a contour). This paper reviews both approaches, with a particular emphasis on the latter, which we see as a very promising framework for future modeling and experimental work.
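As a deliberately simplified illustration of this probabilistic view, the sketch below computes a posterior distribution over stimulus orientation from a population of independent Poisson neurons with bell-shaped tuning curves; under a flat prior, the posterior is obtained by normalizing the population log-likelihood. The tuning model and all parameters are assumptions chosen for clarity, not taken from the review.

# Sketch of probabilistic population decoding. With independent Poisson neurons,
#   log P(r | theta) = sum_i [ r_i * log f_i(theta) - f_i(theta) ] + const,
# which normalizes to a posterior over theta under a flat prior.
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 64
preferred = np.linspace(0, np.pi, n_neurons, endpoint=False)
orientations = np.linspace(0, np.pi, 180, endpoint=False)

def tuning(theta, pref, gain=20.0, kappa=4.0):
    # Circular (von Mises-like) tuning over orientation (period pi).
    return gain * np.exp(kappa * (np.cos(2 * (theta - pref)) - 1))

# Simulate one trial: spike counts evoked by a true orientation of 60 degrees.
theta_true = np.deg2rad(60)
counts = rng.poisson(tuning(theta_true, preferred))

# Log-likelihood over candidate orientations, normalized to a posterior.
f = tuning(orientations[:, None], preferred[None, :])      # (180, 64) mean rates
log_like = (counts * np.log(f) - f).sum(axis=1)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

print("Decoded orientation (deg):", np.rad2deg(orientations[posterior.argmax()]))

The width of the resulting posterior, not just its peak, carries information: noisier or weaker population responses yield broader posteriors, which is how population activity can represent uncertainty about the stimulus.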
Article
Previous research has suggested that visual short-term memory has a fixed capacity of about four objects. However, we found that capacity varied substantially across the five stimulus classes we examined, ranging from 1.6 for shaded cubes to 4.4 for colors (estimated using a change detection task). We also estimated the information load per item in each class, using visual search rate. The changes we measured in memory capacity across classes were almost exactly mirrored by changes in the opposite direction in visual search rate (r² = .992 between search rate and the reciprocal of memory capacity). The greater the information load of each item in a stimulus class (as indicated by a slower search rate), the fewer items from that class one can hold in memory. Extrapolating this linear relationship reveals that there is also an upper bound on capacity of approximately four or five objects. Thus, both the visual information load and number of objects impose capacity limits on visual short-term memory.
Article
Attending to the spatial location or to nonspatial features of visual stimuli can modulate neuronal responses in primate visual cortex. The modulation by spatial attention changes the gain of sensory neurons and strengthens the representation of attended locations without changing neuronal selectivities such as directionality, i.e., the ratio of responses to preferred and anti-preferred directions of motion. Whether feature-based attention acts in a similar manner is unknown. To clarify this issue, we recorded the responses of 135 direction-selective neurons in the middle temporal area (MT) of two macaques to an unattended moving random dot pattern (the distractor) positioned inside a neuron's receptive field while the animals attended to a second moving pattern positioned in the opposite hemifield. Responses to different directions of the distractor were modulated by the same factor (approximately 12%) as long as the attended direction remained unchanged. On the other hand, systematically changing the attended direction from a neuron's preferred to its anti-preferred direction caused a systematic change of the attentional modulation from an enhancement to a suppression, increasing directionality by about 20%. The results show that (1) feature-based attention exerts a multiplicative modulation upon neuronal responses and that the strength of this modulation depends on the similarity between the attended feature and the cell's preferred feature, in line with the feature-similarity gain model, and (2) at the level of the neuronal population, feature-based attention increases the selectivity for attended features by increasing the responses of neurons preferring this feature value while decreasing responses of neurons tuned to the opposite feature value.
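The feature-similarity gain account described above can be written compactly: attention multiplies a neuron's response by a gain factor that grows with the similarity between the attended feature and the neuron's preferred feature. The sketch below uses a cosine similarity term and a 12% modulation depth chosen only to echo the magnitudes quoted in the abstract; the functional form is an illustrative assumption, not the model's formal specification.

# Illustrative feature-similarity gain model: the attentional gain applied to a
# direction-selective neuron depends on how similar the attended direction is
# to the neuron's preferred direction. Form and parameters are assumptions.
import numpy as np

def attended_response(baseline, pref_dir_deg, attended_dir_deg, modulation=0.12):
    similarity = np.cos(np.deg2rad(pref_dir_deg - attended_dir_deg))  # +1 same, -1 opposite
    return baseline * (1.0 + modulation * similarity)                 # multiplicative gain

baseline = 30.0  # spikes/s to an unattended distractor in the receptive field
print(attended_response(baseline, pref_dir_deg=0, attended_dir_deg=0))    # enhanced
print(attended_response(baseline, pref_dir_deg=0, attended_dir_deg=180))  # suppressed

Because neurons preferring the attended direction are enhanced while those preferring the opposite direction are suppressed, the population's selectivity for the attended feature increases even though each neuron's own tuning is only rescaled.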
Article
Humans can experience aftereffects from oriented stimuli that are not consciously perceived, suggesting that such stimuli receive cortical processing. Determining the physiological substrate of such effects has proven elusive owing to the low spatial resolution of conventional human neuroimaging techniques compared to the size of orientation columns in visual cortex. Here we show that even at conventional resolutions it is possible to use fMRI to obtain a direct measure of orientation-selective processing in V1. We found that many parts of V1 show subtle but reproducible biases to oriented stimuli, and that we could accumulate this information across the whole of V1 using multivariate pattern recognition. Using this information, we could then successfully predict which one of two oriented stimuli a participant was viewing, even when masking rendered that stimulus invisible. Our findings show that conventional fMRI can be used to reveal feature-selective processing in human cortex, even for invisible stimuli.
Article
Primary and secondary visual cortex (V1 and V2) form the foundation of the cortical visual system. V1 transforms information received from the lateral geniculate nucleus (LGN) and distributes it to separate domains in V2 for transmission to higher visual areas. During the past 20 years, schemes for the functional organization of V1 and V2 have been based on a tripartite framework developed by Livingstone & Hubel (1988). Since then, new anatomical data have accumulated concerning V1's input, its internal circuitry, and its output to V2. These new data, along with physiological and imaging studies, now make it likely that the visual attributes of color, form, and motion are not neatly segregated by V1 into different stripe compartments in V2. Instead, there are just two main streams, originating from cytochrome oxidase patches and interpatches, that project to V2. Each stream is composed of a mixture of magno, parvo, and konio geniculate signals. Further studies are required to elucidate how the patches and interpatches differ in the output they convey to extrastriate cortex.
Article
Cognitive neuroscience research on working memory has been largely motivated by a standard model that arose from the melding of psychological theory with neuroscience data. Among the tenets of this standard model are that working memory functions arise from the operation of specialized systems that act as buffers for the storage and manipulation of information, and that frontal cortex (particularly prefrontal cortex) is a critical neural substrate for these specialized systems. However, the standard model has been a victim of its own success, and can no longer accommodate many of the empirical findings of studies that it has motivated. An alternative is proposed: Working memory functions arise through the coordinated recruitment, via attention, of brain systems that have evolved to accomplish sensory-, representation-, and action-related functions. Evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies, from monkeys and humans, is considered, as is the question of how to interpret delay-period activity in the prefrontal cortex.
Article
Using visual information to guide behaviour requires storage in a temporary buffer, known as visual short-term memory (VSTM), that sustains attended information across saccades and other visual interruptions. There is growing debate on whether VSTM capacity is limited to a fixed number of objects or whether it is variable. Here we report four experiments using functional magnetic resonance imaging that resolve this controversy by dissociating the representation capacities of the parietal and occipital cortices. Whereas representations in the inferior intra-parietal sulcus (IPS) are fixed to about four objects at different spatial locations regardless of object complexity, those in the superior IPS and the lateral occipital complex are variable, tracking the number of objects held in VSTM, and representing fewer than four objects as their complexity increases. These neural response patterns were observed during both VSTM encoding and maintenance. Thus, multiple systems act together to support VSTM: whereas the inferior IPS maintains spatial attention over a fixed number of objects at different spatial locations, the superior IPS and the lateral occipital complex encode and maintain a variable subset of the attended objects, depending on their complexity. VSTM capacity is therefore determined both by a fixed number of objects and by object complexity.
Article
Functional neuroimaging has successfully identified brain areas that show greater responses to visual motion and adapted responses to repeated motion directions. However, such methods have been thought to lack the sensitivity and spatial resolution to isolate direction-selective responses to individual motion stimuli. Here, we used functional magnetic resonance imaging (fMRI) and pattern classification methods to show that ensemble activity patterns in human visual cortex contain robust direction-selective information, from which it is possible to decode seen and attended motion directions. Ensemble activity in areas V1-V4 and MT+/V5 allowed us to decode which of eight possible motion directions the subject was viewing on individual stimulus blocks. Moreover, ensemble activity evoked by single motion directions could effectively predict which of two overlapping motion directions was the focus of the subject's attention and presumably dominant in perception. Our results indicate that feature-based attention can bias direction-selective population activity in multiple visual areas, including MT+/V5 and early visual areas (V1-V4), consistent with gain-modulation models of feature-based attention and theories of early attentional selection. Our approach for measuring ensemble direction selectivity may provide new opportunities to investigate relationships between attentional selection, conscious perception, and direction-selective responses in the human brain.
Article
A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
Article
We investigated the role of object-based attention in modulating the maintenance of faces and scenes held online in working memory (WM). Participants had to remember a face and a scene, while cues presented during the delay instructed them to orient their attention to one or the other item. Event-related functional magnetic resonance imaging revealed that orienting attention in WM modulated the activity in fusiform and parahippocampal gyri, involved in maintaining representations of faces and scenes respectively. Measures from complementary behavioral studies indicated that this increase in activity corresponded to improved WM performance. The results show that directed attention can modulate maintenance of specific representations in WM, and help define the interplay between the domains of attention and WM.
Article
Multivoxel pattern analysis (MVPA) of fMRI data can detect cognitive states with greater sensitivity than the more typically used univariate approach. Patterns of fMRI activation can be used to discriminate cognitive states (sometimes called 'mind reading'), to relate brain activity to behaviour and to clarify the structure of neural representations. Here, we point out an additional use of MVPA: its ability to interpret overlapping functional activations. A general issue that arises in fMRI studies concerns the interpretation of overlapping activity from independent contrasts. When a set of voxels is commonly activated by two (or more) contrasts of experimental conditions, pattern analysis can help establish whether the overlap reflects a shared representation or distinct but co-localized responses.
Article
Some fundamental principles of colour vision, deduced from perceptual studies, have been understood for a long time. Physiological studies have confirmed the existence of three classes of cone photoreceptors, and of colour-opponent neurons that compare the signals from cones, but modern work has drawn attention to unexpected complexities of early organization: the proportions of cones of different types vary widely among individuals, without great effect on colour vision; the arrangement of different types of cones in the mosaic seems to be random, making it hard to optimize the connections to colour-opponent mechanisms; and new forms of colour-opponent mechanisms have recently been discovered. At a higher level, in the primary visual cortex, recent studies have revealed a simpler organization than had earlier been supposed, and in some respects have made it easier to reconcile physiological and perceptual findings.
Article
Does visual working memory represent a fixed number of objects, or is capacity reduced as object complexity increases? We measured accuracy in detecting changes between sample and test displays and found that capacity estimates dropped as complexity increased. However, these apparent capacity reductions were strongly correlated with increases in sample-test similarity (r = .97), raising the possibility that change detection was limited by errors in comparing the sample and test, rather than by the number of items that were maintained in working memory. Accordingly, when sample-test similarity was low, capacity estimates for even the most complex objects were equivalent to the estimate for the simplest objects (r = .88), suggesting that visual working memory represents a fixed number of items regardless of complexity. Finally, a correlational analysis suggested a two-factor model of working memory ability, in which the number and resolution of representations in working memory correspond to distinct dimensions of memory ability.
Article
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field-even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
Kamitani Y, Tong F. Decoding the visual and subjective contents of the human brain. Nature Neuroscience. 2005; 8:679–685.
Courtney SM, Ungerleider LG, Keil K, Haxby JV. Transient and sustained activity in a distributed neural system for human working memory. Nature. 1997; 386:608–611.
Note. Discrimination threshold is the sample-test disparity for which observers achieved 75% accuracy (see Staircase Procedure in the Method section). Standard errors of the means are given in parentheses. For remember-color trials, the values refer to saturation of the color.
V2v: .431 (.050), .543 (.079), .515 (.043), .422 (.053)