Article
Literature Review

The Neural Basis of Perceptual Learning

Authors: Charles D. Gilbert, Mariano Sigman and Roy E. Crist

Abstract

Perceptual learning is a lifelong process. We begin by encoding information about the basic structure of the natural world and continue to assimilate information about specific patterns with which we become familiar. The specificity of the learning suggests that all areas of the cerebral cortex are plastic and can represent various aspects of learned information. The neural substrate of perceptual learning relates to the nature of the neural code itself, including changes in cortical maps, in the temporal characteristics of neuronal responses, and in modulation of contextual influences. Top-down control of these representations suggests that learning involves an interaction between multiple cortical areas.


... Firstly, there is functional reorganization, whereby training is associated with an increase in the number of task-related neurons (8,9). Secondly, for neurons already involved in the task, there is a sharpening of response selectivity (10). These changes in representations are thought to be mediated by a combination of synaptic plasticity (11), a reorganization of the structure of local excitation and inhibition (12), and signals feeding back from other brain areas (10,13). ...
... Learning signals that can be communicated and processed independently of sensory-motor signals have been hypothesized to be a vital element for the coordination of synaptic plasticity (14,15). The second is a sharpening signal, which is required if the process of transiently enhancing representations arises from top-down attention (10,16). Both learning and sharpening have been linked to the phenomenon of bursting. ...
Preprint
Full-text available
Theories of attention and learning have hypothesized a central role for high-frequency bursting in cognitive functions, but experimental reports of burst-mediated representations in vivo have been limited. Here we used a novel demultiplexing approach to separate independent streams of information by considering neurons as having three possible states: silent, singlet-firing and burst-firing. We studied this ternary neural code in vivo while animals learned to behaviorally report direct electrical stimulation of the somatosensory cortex and found two acquired yet independent representations. One code, the event rate, represented the stimulus in a small fraction of cells and showed a small modulation upon detection errors. The other code, the burst fraction, correlated more globally with stimulation and responded more promptly to detection errors. Bursting modulation was potent and its time course evolved, even in cells that were considered unresponsive based on the firing rate. During the later stages of training, this modulation in bursting happened earlier, gradually aligning temporally with the representation in event rate. The alignment of bursting and event-rate modulation sharpened firing-rate-coded representations and was strongly associated with behavioral accuracy. Thus, a fine-grained separation of spike-timing patterns reveals two signals that accompany stimulus representations: an error signal that can be essential to guide learning and a sharpening signal that could enact top-down attention.
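As a rough illustration of how such a ternary demultiplexing could be computed from recorded spike times, the sketch below groups spikes into events using an inter-spike-interval criterion and derives an event rate and a burst fraction. The 16 ms threshold and the helper name demultiplex are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demultiplex(spike_times, burst_isi=0.016):
    """Split a spike train into 'events' (singlets or bursts) and return the
    event rate and burst fraction.  burst_isi is an assumed inter-spike-interval
    threshold (s) below which spikes are grouped into the same burst; the
    paper's actual criterion may differ."""
    spike_times = np.sort(np.asarray(spike_times, dtype=float))
    if spike_times.size == 0:
        return 0.0, 0.0
    # A new event starts whenever the gap to the previous spike exceeds burst_isi.
    isis = np.diff(spike_times)
    event_starts = np.concatenate(([True], isis > burst_isi))
    n_events = int(event_starts.sum())
    # Count spikes per event; events with >= 2 spikes are bursts.
    event_ids = np.cumsum(event_starts) - 1
    spikes_per_event = np.bincount(event_ids)
    n_bursts = int((spikes_per_event >= 2).sum())
    duration = spike_times[-1] - spike_times[0] + 1e-9
    event_rate = n_events / duration        # events per second
    burst_fraction = n_bursts / n_events    # fraction of events that are bursts
    return event_rate, burst_fraction

# Example: one singlet at 0.10 s and a three-spike burst around 0.50 s.
print(demultiplex([0.10, 0.50, 0.505, 0.512]))
```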
... Ongoing debate surrounds the effectiveness of vision restoration methods, particularly in optimizing the VPL approach, including whether to prioritize detection or discrimination tasks, select types of stimuli, and target normal or blind areas (Lu & Dosher, 2022;Sagi, 2011;Saionz et al., 2022). It has been suggested that VPL with basic visual features (e.g., orientation and rotation) and location specificity involves early visual processing (e.g., primary visual cortex), whereas complex motion and depth discrimination tasks require visual decision-making in higher-order regions (e.g., MT, middle temporal area; LIP, lateral intraparietal area) (Gilbert et al., 2001;Sagi, 2011;Sasaki et al., 2010). Orientation-motion discrimination tasks, sharpening tuning specificity of the primary visual cortex (V1), have mostly been applied in cortical blindness (Cavanaugh & Huxlin, 2017;Das et al., 2014;Huxlin et al., 2009). ...
... First, the orientation discrimination task of Gabor with spatial frequency may reshape early visual processing, including the tuning properties of the V1 retinotopically corresponding to the trained stimulus location (Sasaki et al., 2010;Schoups et al., 2001). Neural representations of visual stimuli may be enhanced through synaptic strengthening and dendritic remodeling (Gilbert et al., 2001;Karmarkar & Dan, 2006). Second, VPL affects connectivity between the visual cortex and higher regions involved in decision-making, including MT and LIP, through top-down cognitive modulation (Dosher & Lu, 1998;Law & Gold, 2008). ...
Article
Full-text available
Introduction Visual field defects (VFDs) represent a debilitating poststroke complication, characterized by unseen parts of the visual field. Visual perceptual learning (VPL), involving repetitive visual training in blind visual fields, may effectively restore visual field sensitivity in cortical blindness. This current multicenter, double‐blind, randomized, controlled clinical trial investigated the efficacy and safety of VPL‐based digital therapeutics (Nunap Vision [NV]) for treating poststroke VFDs. Methods Stroke outpatients with VFDs (>6 months after stroke onset) were randomized into NV (defective field training) or Nunap Vision‐Control (NV‐C, central field training) groups. Both interventions provided visual perceptual training, consisting of orientation, rotation, and depth discrimination, through a virtual reality head‐mounted display device 5 days a week for 12 weeks. The two groups received VFD assessments using Humphrey visual field (HVF) tests at baseline and 12‐week follow‐up. The final analysis included those who completed the study (NV, n = 40; NV‐C, n = 35). Efficacy measures included improved visual area (sensitivity ≥6 dB) and changes in the HVF scores during the 12‐week period. Results With a high compliance rate, NV and NV‐C training improved the visual areas in the defective hemifield (>72 degrees²) and the whole field (>108 degrees²), which are clinically meaningful improvements despite no significant between‐group differences. According to within‐group analyses, mean total deviation scores in the defective hemifield improved after NV training (p = .03) but not after NV‐C training (p = .12). Conclusions The current trial suggests that VPL‐based digital therapeutics may induce clinically meaningful visual improvements in patients with poststroke VFDs. Yet, between‐group differences in therapeutic efficacy were not found as NV‐C training exhibited unexpected improvement comparable to NV training, possibly due to learning transfer effects.
... Sensory systems that are responsible for the continuous adjustment to the changing of input statistics maintain a high degree of plasticity throughout the lifespan (Baylor, 1987;Dunn, Lankheet, & Rieke, 2007;Stiles, 2000;Wade & Wandell, 2002;Wandell & Smirnakis, 2009). Visual perceptual learning refers to repeated practice on the visual task that induces long-term improvement in visual performance (Gilbert, Sigman, & Crist, 2001;Sasaki, Nanez, & Watanabe, 2010). Evidence for adult brain plasticity has been demonstrated by behavioral and neural changes associated with visual perceptual learning (Beyeler, Rokem, Boynton, & Fine, 2017;Dosher & Lu, 2017;Fahle & Poggio, 2002;Fiorentini & Berardi, 1980;Watanabe & Sasaki, 2015). ...
... Consequently, it becomes less likely for the target and flanker features to interfere with each other, thereby weakening the crowding effect. Gilbert et al. (2001) reviewed the neural mechanisms of perceptual learning; they proposed that increased precision and discrimination result from the refinement of tuning curves and a reduction in the size of neuronal ensembles representing trained attributes. Gilbert and colleagues suggest that, with learning, neurons acquire greater selectivity and optimally distance themselves from each other, thereby enhancing their coverage of the stimulus domain. ...
Article
This study aimed to investigate the impact of eccentric-vision training on population receptive field (pRF) estimates to provide insights into brain plasticity processes driven by practice. Fifteen participants underwent functional magnetic resonance imaging (fMRI) measurements before and after behavioral training on a visual crowding task, where the relative orientation of the opening (gap position: up/down, left/right) in a Landolt C optotype had to be discriminated in the presence of flanking ring stimuli. Drifting checkerboard bar stimuli were used for pRF size estimation in multiple regions of interest (ROIs): dorsal-V1 (dV1), dorsal-V2 (dV2), ventral-V1 (vV1), and ventral-V2 (vV2), including the visual cortex region corresponding to the trained retinal location. pRF estimates in V1 and V2 were obtained along eccentricities from 0.5° to 9°. Statistical analyses revealed a significant decrease of the crowding anisotropy index (p = 0.009) after training, indicating improvement on crowding task performance following training. Notably, pRF sizes at and near the trained location decreased significantly (p = 0.005). Dorsal and ventral V2 exhibited significant pRF size reductions, especially at eccentricities where the training stimuli were presented (p < 0.001). In contrast, no significant changes in pRF estimates were found in either vV1 (p = 0.181) or dV1 (p = 0.055) voxels. These findings suggest that practice on a crowding task can lead to a reduction of pRF sizes in trained visual cortex, particularly in V2, highlighting the plasticity and adaptability of the adult visual system induced by prolonged training.
... the touch signals educate (or calibrate) the visual signals; in particular, the ability of children to optimally integrate vision and touch gradually develops up to 8-10 years of age [14,15]. It is worth noting that cross-calibration is not limited to development but is a lifelong process; however, the relevant neural basis has been poorly explored [16]. In this review, we summarize recent work about cross-modal recalibration (mainly based on self-motion perception) and hope to gain some insights about the underlying mechanism and offer some suggestions for future research. ...
... In other words, the touch signals educate (or calibrate) the visual signals; in particular, the ability of children to optimally integrate vision and touch gradually develops up to 8-10 years of age [14,15]. It is worth noting that cross-calibration is not limited to development but is a lifelong process; however, the relevant neural basis has been poorly explored [16]. ...
Article
Full-text available
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how the vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: one long-term cross-modal (visual–vestibular) recalibration concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and one rapid-term cross-modal (visual–vestibular) recalibration concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual–vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual–vestibular recalibration and help develop a more generalized theory for cross-modal plasticity.
... On the other hand, the hallmark of PL is its specificity, which is defined as a performance or sensitivity increase in a sensory feature as a result of repetitive training or exposure to the feature, and is regarded as reflecting cortical plasticity [10][11][12][13][14]. However, this hallmark has been challenged in the last decade [15]. ...
... 12, and 18 cycles per degree, c/deg). The results of CS were converted to logarithms before the analysis. Quality-of-life assessment: Vision-related QoL was assessed using the 25-Item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25). ...
Preprint
Full-text available
Purpose We aimed to observe changes in the visual function of patients with primary open-angle glaucoma (POAG) after undergoing binocular asynchronous visual training. Methods Seven patients with POAG with binocular visual field defects underwent binocular asynchronous dichoptic virtual reality (VR)-based visual perceptual training for 20 days (45 min/session/day, 5 sessions/week, for 4 weeks). Perimetry, contrast sensitivity (CS), and vision-related quality of life assessments were performed for all patients. Results Six months after completing training, nine of 14 eyes showed better performance in perimetry (the mean deviation [MD] of perimetry was improved as compared to baseline), including six severe eyes and three mild POAG eyes. Moreover, the MD values of four of the nine eyes showed significant improvement (more than 1-dB increase as compared to baseline), including three severe eyes and one mild POAG eye. However, the MD values did not differ significantly between baseline and post-training. Contrast sensitivity tests, performed at three spatial frequencies (3, 6, and 18 cycles/degree), showed significant enhancement after asynchronous dichoptic training (p = 0.021, 0.026, and 0.020, respectively). Conclusion Patients with POAG, particularly those with severe POAG, performed significantly better in perimetry after training. All patients showed significantly improved performance on the CS tests. Improvements in visual function were sustained for at least 6 months. These results suggest that visual rehabilitation in patients with POAG can be achieved through asynchronous VR-based dichoptic visual perceptual training. A larger randomized clinical trial is required to confirm these effects. Trial registration The trial registration number is #ChiCTR2100054625.
... One popular account is the "unitization" or "new feature" hypothesis: Through learning, different features of the target might be successfully unitized into a new functional unit (or a new feature) at early stages of visual processing (Czerwinski et al., 1992;Goldstone, 1998;Humphreys, 2016), and such a newly learned feature could pop out from the search array preattentively. For example, PL is thought to allow a familiar shape (e.g., a "5" shape in a field of "2" shapes) or a familiar object with specific color and shape (e.g., yellow corn) to develop into a new feature or functional unit (Gilbert et al., 2001;Rappaport et al., 2013;Rappaport et al., 2016). Efficient searches for categories like "vehicle" or "animal" have also been taken as evidence for learned features (Li et al., 2002;Thorpe et al., 1996;Treisman, 2006). ...
... One reasonable speculation is that features from the same dimension could be unitized together at an early stage of visual processing after extensive training, while those from different dimensions could not. This speculation is consistent with previous studies (Gilbert et al., 2001;Li et al., 2002;Treisman, 2006), which claimed that new features (such as "5" shapes, or vehicles, which can be considered feature conjunctions within the shape dimension) could be learned through training, but differs from the view that learning may induce new color-shape conjunctions at preattentive stages of visual processing (Humphreys, 2016). Further studies are needed to examine this speculation, especially under conditions with similar paradigms and measurements. ...
Article
Full-text available
It is well known that feature search is efficient, whereas conjunction search is usually inefficient. However, prior studies have shown that some conjunction searches can become very efficient through perceptual learning, behaving like a traditional feature search. An unanswered question is whether a new feature is learned when an inefficient conjunction search becomes efficient after extensive training. A popular view is that the trained conjunction has been successfully unitized into a new feature and thus can pop out from neighboring distractors. Here, by using stimulus specificity and transfer of perceptual learning as an approach, we investigate whether a new feature is learned when an initially inefficient conjunction search becomes highly efficient after extensive training. In two experiments, we consistently found that long-term perceptual learning over days could induce an inefficient-to-efficient pattern change in a color-orientation conjunction search. Moreover, the learning effect for the conjunction target could partly transfer to a new target that shared the same color or the same orientation as the trained target. Remarkably, the total amount of the learning effect was approximately equal to the sum of the transfer effects of the individual features. Such an additive learning pattern could last for at least several months, although the learning of separate features showed different patterns of persistence. These results do not support the idea that the trained conjunction is unitized into a new and inseparable feature after learning. Instead, our findings point to a feature-based attention enhancement mechanism underlying long-term perceptual learning and its persistence in color-orientation conjunction search.
... VPL has been extensively studied particularly in the past two decades mainly because of its close links to cortical plasticity (Gilbert et al., 2001;Yang & Maunsell, 2004;Schoups et al., 2001;Law and Gold, 2008;Byers & Serences, 2014;Shibata et al., 2017). VPL is also regarded as a promising tool with which to improve degraded or declined perceptual abilities due to visual diseases (Levi & Polat, 1996;Levi, 2009) or aging (Andersen et al., 2010;Yotsumoto et al., 2014;Lemon & DeLoss, 2016). ...
Preprint
Visual perceptual learning (VPL) is defined as long-term improvement on a visual task as a result of visual experience. In many cases, the improvement is highly specific to the location where the target is presented, a phenomenon referred to as location specificity. In the current study, we investigated the effect of the geometrical relationship between the trained location and an untrained location on transfer of VPL. We found that significant transfer occurs either diagonally or along a line passing through the fixation point. This indicates that whether location specificity or location transfer occurs depends, at least partially, on the geometrical relationship between the trained location and an untrained location.
... 60% cat and 40% dog) with a defined category decision boundary and could readily adapt to the relearning of category membership. Other studies have highlighted plasticity changes in the brain following perceptual learning of objects, particularly changes in 'neural tuning' in sensory [111][112][113] as well as prefrontal regions (see [114]), often in a task-dependent manner [115,116]. Similarly, results from several neuroimaging studies have elucidated the neural processes involved in CP in the human brain [117,118] and suggest that interactions between the temporal and prefrontal cortices as well as other brain regions may act as a network in the formation of object categories [119,120]. ...
Article
Full-text available
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the ‘moo’ sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue ‘Decision and control processes in multisensory perception’.
... Perceptual learning is a powerful mechanism to enhance perception and acquire novel memory representations throughout one's lifetime. It is defined as the experiencedependent gain in perceptual capacity through growing experience with a certain, usually previously unfamiliar, type of stimulus material (Gibson, 1969;Gilbert et al., 2001;Irvine et al., 2000). In the auditory modality, repeated exposure to a novel class of sounds rapidly improves listeners' abilities to perceptually parse them and, for instance, efficiently discriminate different exemplars (Wright & Zhang, 2009). ...
Article
Full-text available
Perceptual learning is a powerful mechanism to enhance perceptual abilities and to form robust memory representations of previously unfamiliar sounds. Memory formation through repeated exposure takes place even for random and complex acoustic patterns devoid of semantic content. The current study sought to scrutinise how perceptual learning of random acoustic patterns is shaped by two potential modulators: temporal regularity of pattern repetition and listeners' attention. To this end, we adapted an established implicit learning paradigm and presented short acoustic sequences that could contain embedded repetitions of a certain sound segment (i.e., pattern) or not. During each experimental block, one repeating pattern recurred across multiple trials, while the other patterns were presented in only one trial. During the presentation of sound sequences that contained either temporally regular or jittered within-trial pattern repetitions, participants' attention was directed either towards or away from the auditory stimulation. Overall, we found a memory-related modulation of the event-related potential (ERP) and an increase in inter-trial phase coherence for patterns that recurred across multiple trials (compared to non-recurring patterns), accompanied by a performance increase in a (within-trial) repetition detection task when listeners attended the sounds. Remarkably, we show a memory-related ERP effect even for the first pattern occurrence per sequence when participants attended the sounds, but not when they were engaged in a visual distractor task. These findings suggest that learning of unfamiliar sound patterns is robust against temporal irregularity and inattention, but attention facilitates access to established memory representations upon first occurrence within a sequence.
... Since vision is a fundamental part of human experience, many neurocognitive studies have examined the relationship between visual perception and brain activity. Visual stimuli cause different brain activity patterns [2][3][4]. Visual stimuli may be decoded to study human visual information processing [5]. ...
Article
Full-text available
To ensure that the FC-GDN is properly calibrated for the EEG-ImageNet dataset, we subject it to extensive training and gather all of the relevant weights for its parameters, making use of the FC-GDN pseudo-code. The dataset is split into "train" and "test" sections using K-fold cross-validation. Ten-fold cross-validation uses ten folds, with one fold selected as the test split at each iteration; this divides the dataset into 90% training data and 10% test data. In order to train all 10 folds without overfitting, this procedure is applied repeatedly over the whole dataset, and each training fold is arrived at after several iterations. After training all ten folds, the results are analyzed. At each iteration, the FC-GDN weights are optimized by the SGD and ADAM optimizers. The ideal network design parameters are chosen based on the convergence of training and the accuracy on the test splits. This study offers a novel geometric deep learning-based network architecture for classifying visual stimulation categories using electroencephalogram (EEG) data recorded from human participants while they watched various sorts of images. The primary goals of this study are to (1) eliminate feature extraction from GDL-based approaches and (2) extract brain states via functional connectivity. Tests with the EEG-ImageNet database validate the suggested method's efficacy. FC-GDN is more efficient than other cutting-edge approaches for boosting classification accuracy, requiring fewer iterations. In computational neuroscience, neural decoding addresses the problem of mind-reading. Because of its ease of use and temporal precision, electroencephalography (EEG) is commonly employed to monitor brain activity. Deep neural networks provide a variety of ways of detecting brain activity. Using a Functional Connectivity (FC) Geometric Deep Network (GDN) and EEG channel functional connectivity, this work directly recovers hidden states from high-resolution temporal data. The time samples taken from each channel are used to represent graph signals on a topological connection network based on EEG channel functional connectivity. A novel graph neural network architecture evaluates users' visual perception state utilizing extracted EEG patterns associated with various picture categories, using graphically rendered EEG recordings as training data. The efficient graph representation of EEG signals serves as the foundation for this design. FC-GDN is evaluated on the EEG-ImageNet dataset, in which each category has a maximum of 50 samples; nine separate EEG recorders were used to obtain these recordings. The FC-GDN approach yields 99.4% accuracy, which is 0.1% higher than the most sophisticated method presently available.
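The ten-fold procedure described above can be made concrete with scikit-learn's KFold. The sketch below uses synthetic features and a plain logistic-regression classifier purely as stand-ins; the authors' FC-GDN training code is not reproduced here, and the array shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Synthetic stand-in for EEG-ImageNet features and labels (illustrative shapes only).
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 128))   # 2000 trials x 128 features
y = rng.integers(0, 40, size=2000)     # 40 image categories

kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_acc = []
for train_idx, test_idx in kf.split(X):
    # Each fold: 90% of trials for training, the held-out 10% for testing.
    clf = LogisticRegression(max_iter=200).fit(X[train_idx], y[train_idx])
    fold_acc.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean 10-fold accuracy: {np.mean(fold_acc):.3f}")
```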
... Learning-dependent reorganization of cortical representations has been described in a number of experimental models based on different kinds of learning rules (Recanzone et al., 1992;Gilbert et al., 2001;Crist et al., 2001;Rutkowski and Weinberger, 2005;Blake et al., 2006;Polley et al., 2006;Bieszczad and Weinberger, 2010;Conner et al., 2010;Reed et al., 2011;Rosselet et al., 2011). Several reports showed that the magnitude of cortical map expansion was correlated to the amount of learning (Recanzone et al., 1993;Rutkowski and Weinberger, 2005;Polley et al., 2006;Bieszczad and Weinberger 2010). ...
Article
Full-text available
Intrinsic signal optical imaging (ISOI) has been used previously for the detection of changes in sensory processing in the somatosensory cortex in response to environment alteration or after deprivation of sensory information. To date, there have been no reports of ISOI being used in learning-induced changes in the somatosensory cortex. In the present study, ISOI was performed twice in the same mouse: before and after conditional fear learning. The conditioning paradigm consisted of pairing sensory stimulation of vibrissae with electric tail shock. In order to map the cortical representation of the vibrissa B1 with ISOI, we deflected the vibrissa with an intensive stimulation (frequency of 10 Hz for 6 s). After conditioning, we found that the cortical representation of vibrissa B1 had expanded by an average of 44%, compared with pre-learning, by using images obtained with ISOI. Previously, we demonstrated an enlargement of the cortical representation of the vibrissae stimulated by the same behavioral training paradigm but using [14C]2-deoxyglucose. This current investigation provides the first ISOI-based evidence of learning-induced changes in plasticity in the barrel cortex. The results indicate that irrespective of physiological mechanisms used for visualization of the vibrissae representation or subject's testing state (aware or anesthetized animal), the conditioning induced changes in each case in the cortical processing of intensive stimuli. This suggests specific functional reorganization of the neuronal circuits. Moreover, ISOI as a noninvasive method of mapping cortical activation in the same animal before and after behavioral training could serve as a very useful tool for precise manipulation within the cortex and for assessing the resulting effects on experience-dependent cortical plasticity.
... It is not easy to design a behavioral experiment which directly tests the contribution of the input and the output of the high-level vision system. Although Wang and colleagues (2019) tried to use perceptual learning as a tool to study the similarities and differences among basic emotions, it is still worth noting that, given the neural basis of perceptual learning (e.g., Gilbert, Sigman, & Crist, 2001;Seitz, 2017) and the complex physical structure of expressions (e.g., Xu et al., 2008), the perceptual learning method may not be the best way to unveil the macro-level system structure of high-level vision. On the other hand, one recent study intermixed the spatial frequency and orientation judgment tasks using different stimuli (gabor and dots; Ceylan, Herzog, & Pascucci, 2021). ...
Preprint
Full-text available
The visual system can be viewed and studied as an information processing system. If so, then the visual system should follow specific fundamental properties: it is either a memory or a memoryless system. Previous studies of serial dependence in vision found that the perception of the current stimulus is positively determined by the previous one. However, we are not entirely sure whether this phenomenon is a Markov process. In this study, participants were asked to rate the social characteristics (attractiveness, trustworthiness, and dominance) of a face, either followed by the same characteristic (the one-trait condition) or another one (the two-trait condition) in randomized orders. By doing so, we can directly test the contribution of the previous input and output to the current output and thus study the properties of the system. Using derivative-of-Gaussian, Markov-chain and linear mixed-effects modeling, convergent results suggested that serial dependence was absent and that the memoryless and Markovian properties were violated in the two-trait condition when testing both attractiveness and dominance, but not in the other conditions. Thus, different facets of (presumably) the same computational task may follow asymmetrical system properties. The study also establishes serial dependence as an effective technique to reveal the relationships between different computational tasks.
... The compound stimulus representation hypothesis postulates that practicing with these stimuli induces a representation that incorporates a specific color and a specific orientation, just like a task-dependent object file (Czerwinski & Shiffrin, 1992;Gilbert, Sigman, & Crist, 2001;Goldstone, 1998). Orientation and color would both have to be identified to activate the compound stimulus and with practice this stimulus representation becomes associated with the response given during practice. ...
Preprint
Full-text available
Three experiments are reported testing the hypothesis that response selection skill involves task-dependent associations between a stimulus feature and a response. In the experiments, participants first practiced responding to either the orientation or the color of a line stimulus, after which they responded to the other stimulus feature. The question was whether a consistency effect would occur, that is, whether response time would be affected by the consistency of the then-irrelevant stimulus feature. RTs and errors supported this prediction for stimulus orientation, which confirms the development of associations between that feature and the response. There was only limited evidence for color-response associations, which could be attributed to the slow identification of the color feature. It appeared that during practice participants could ignore the irrelevant feature but that after practice identification of that feature was mandatory. These results indicate that the typical improvement with practice in selection tasks is caused in part by an association between the most rapidly identified stimulus feature and the following response, without the need to wait to identify other stimulus features.
... Despite the creditable effort to move away from the traditional reductionist approach [35], this new line of research has only partially included the social aspect in the study of human learning. Namely, at best social context has been included in the study design and data collection, while the focus of data analysis has been almost exclusively either on the learner [36] or (less often) on the teacher [37], and only rarely on the interaction [38]. This is also the case in many developmental studies that see the teacher (carer) as providing an input to the learner (the child; e.g. ...
Article
Full-text available
Learning in humans is highly embedded in social interaction: since the very early stages of our lives, we form memories and acquire knowledge about the world from and with others. Yet, within cognitive science and neuroscience, human learning is mainly studied in isolation. The focus of past research in learning has been either exclusively on the learner or (less often) on the teacher, with the primary aim of determining developmental trajectories and/or effective teaching techniques. In fact, social interaction has rarely been explicitly taken as a variable of interest, despite being the medium through which learning occurs, especially in development, but also in adulthood. Here, we review behavioural and neuroimaging research on social human learning, specifically focusing on cognitive models of how we acquire semantic knowledge from and with others, and include both developmental as well as adult work. We then identify potential cognitive mechanisms that support social learning, and their neural correlates. The aim is to outline key new directions for experiments investigating how knowledge is acquired in its ecological niche, i.e. socially, within the framework of the two-person neuroscience approach. This article is part of the theme issue ‘Concepts in interaction: social engagement and inner experiences’.
... Since vision is one of the most essential components in the human perception system, several neuro-cognitive studies have been dedicated to the relation between visual perception and brain activity. These studies have discerned that human brain activities contain distinguishable patterns associated with various categories of visual stimulation [2][3][4]. To study the mechanism ...
Article
Neural decoding is of great importance in computational neuroscience to automatically interpret brain activities in order to address the challenging problem of mind-reading. Analyzing vision-related EEG records is of great importance to discern the relation between visual perception and brain activity. Considering the recent advances and achievements in the field of deep neural networks, several architectures have been implemented to decode brain activities. In this paper, a functional connectivity-based geometric deep network (FC-GDN) is proposed to leverage the spatio-temporal distributed information in EEG recordings evoked by images and to directly extract hidden states of high-resolution time samples, considering the functional connectivity between EEG channels. To this end, a topological connectivity graph is constructed based on the functional connectivity between EEG channels, and the time samples of each EEG channel are treated as a graph signal on the corresponding graph node. Furthermore, a novel graph neural network architecture based on this efficient graph representation of EEG signals is proposed, in which visually provoked EEG recordings are used as training data in order to decode the visual perception state of the participants in terms of extracted EEG patterns related to different image categories. The performance of the proposed FC-GDN is evaluated on the EEG-ImageNet dataset, consisting of 40 image categories with 50 sample images each, shown to 6 participants while their EEG signals were recorded. An average accuracy of 98.4% is obtained for FC-GDN, showing an average improvement of 1.1% compared to the best state-of-the-art method.
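As a rough sketch of the graph construction described above, one plausible approach is to threshold channel-by-channel Pearson correlations into an adjacency matrix and attach each channel's time series as its node signal. The correlation measure and the 0.3 threshold are assumptions for illustration, not the paper's exact connectivity metric.

```python
import numpy as np

def functional_connectivity_graph(eeg, threshold=0.3):
    """Build a graph from EEG of shape (n_channels, n_samples).
    Nodes are channels; edges connect channel pairs whose absolute Pearson
    correlation exceeds `threshold`; each node carries its own time series
    as the graph signal.  Metric and threshold are illustrative assumptions."""
    corr = np.corrcoef(eeg)                        # channel-by-channel correlation
    adjacency = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)               # no self-loops
    node_signals = eeg                             # one time series per graph node
    return adjacency, node_signals

# Example with random data standing in for a 14-channel recording.
eeg = np.random.randn(14, 512)
A, signals = functional_connectivity_graph(eeg)
print(A.shape, signals.shape)   # (14, 14) (14, 512)
```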
... Top-down influences can affect sensory processing at all cortical and thalamic levels [89]. Common top-down modulators of sensory processing can include stress, attention, expectation, emotion, motivation and learned experience [89][90][91][92]. ...
Article
Full-text available
Multisensory integration refers to sensory inputs from different sensory modalities being processed simultaneously to produce a unitary output. Surrounded by stimuli from multiple modalities, animals utilize multisensory integration to form a coherent and robust representation of the complex environment. Even though multisensory integration is fundamentally essential for animal life, our understanding of the underlying mechanisms, especially at the molecular, synaptic and circuit levels, remains poorly understood. The study of sensory perception in Caenorhabditis elegans has begun to fill this gap. We have gained a considerable amount of insight into the general principles of sensory neurobiology owing to C. elegans’ highly sensitive perceptions, relatively simple nervous system, ample genetic tools and completely mapped neural connectome. Many interesting paradigms of multisensory integration have been characterized in C. elegans, for which input convergence occurs at the sensory neuron or the interneuron level. In this narrative review, we describe some representative cases of multisensory integration in C. elegans, summarize the underlying mechanisms and compare them with those in mammalian systems. Despite the differences, we believe C. elegans is able to provide unique insights into how processing and integrating multisensory inputs can generate flexible and adaptive behaviors. With the emergence of whole brain imaging, the ability of C. elegans to monitor nearly the entire nervous system may be crucial for understanding the function of the brain as a whole.
... For instance, retuning representations in the early visual cortex to enhance performance in one task would impact performance in many other tasks that use the same representations. By contrast, plasticity based on selective reweighting of task-relevant connections in a multiplexing cortical organization 146 , in which different tasks involve independent connections from sensory areas to decisions, could help maintain stability during visual perceptual learning of multiple tasks over time. Deep convolutional neural networks might provide a promising way to investigate the trade-off between plasticity and stability (Box 2). ...
Article
The visual expertise of adult humans is jointly determined by evolution, visual development and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioural aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss the specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions. Perceptual learning, or performance improvements after training on perceptual tasks, is a widespread phenomenon in visual perception. In this Review, Lu and Dosher describe findings regarding the specificity and transfer of perceptual learning, mechanisms of learning and key applications in visual rehabilitation.
... Increased activation in the posteromedial regions, including the posterior cingulate gyrus and precuneus, has consistently been reported in the literature in relation to both ToM (Schurz et al., 2014) and schizotypy (Modinos et al., 2010;Wang et al., 2015), as well as the expression of schizophrenia-related genes (Romero-Garcia et al., 2020). The brain activation associated with the total SPQ score led to the identification of left posterior brain regions, namely the MTG, fusiform, lingual and parahippocampal gyrus, which have previously been shown to be relevant to mental imagery (Spagna et al., 2021), face processing (Lobmaier et al., 2008) and visuospatial cognition and memory storage (Gilbert et al., 2001). Our results are in line with a number of previous studies, suggesting that abnormal left MTG functioning could serve as a marker of vulnerability to schizophrenia (Ehrlich et al., 2010;Seidman et al., 2014;Zhao et al., 2018). ...
Article
Full-text available
Schizophrenia, a severe psychiatric disorder, is associated with abnormal brain activation during theory of mind (ToM) processing. Researchers recently suggested that there is a continuum running from subclinical schizotypal personality traits to fully expressed schizophrenia symptoms. Nevertheless, it remains unclear whether schizotypal personality traits in a nonclinical population are associated with atypical brain activation during ToM tasks. Our aim was to investigate correlations between fMRI brain activation during affective and cognitive ToM tasks and scores on the Schizotypal Personality Questionnaire (SPQ) and Basic Empathy Scale (BES) in 39 healthy individuals. The total SPQ score positively correlated with brain activation during affective ToM processing in clusters extending from the left medial temporal gyrus (MTG), lingual gyrus and fusiform gyrus to the parahippocampal gyrus (BA 19). During affective ToM processing, the right inferior occipital gyrus, the right MTG, precuneus and posterior cingulate cortex negatively correlated with the emotional disconnection subscore and the total score of self-reported empathy. These posterior brain regions are known to be involved in memory and language, as well as in creative reasoning, in nonclinical individuals. Our findings highlight changes in brain processing associated to trait-schizotypy in nonclinical individuals during affective but not cognitive ToM processing.
... Although the recognition memory is generally deemed to be inferior in the auditory compared with the visual domain (Cohen et al., 2009), there is compelling evidence that the human brain is exceptionally capable of rapidly forming robust short-and longer-term memories for various types of random auditory patterns, such as tone pip sequences (Bianco et al., 2020), temporal patterns of clicks (Kang et al., 2017), and white noise (Agus et al., 2010). It has been argued that listeners build up these representations during perceptual learning, which refers to experience-dependent changes in the perceptual ability to effectively extract and use information from sensory input through repeated exposure (Gibson, 1969;Gilbert et al., 2001). ...
Article
Full-text available
It is remarkable that human listeners can perceive periodicity in noise, as the isochronous repetition of a particular noise segment is not accompanied by salient physical cues in the acoustic signal. Previous research suggested that listeners rely on short temporally local and idiosyncratic features to perceptually segment periodic noise sequences. The present study sought to test this assumption by disentangling consistency of perceptual segmentation within and between listeners. Presented periodic noise sequences either consisted of seamless repetitions of a 500-ms segment or of repetitions of a 200-ms segment that were interleaved with 300-ms portions of random noise. Both within- and between-subject consistency was stronger for interleaved (compared with seamless) periodic sequences. The increased consistency likely resulted from reduced temporal jitter of potential features used for perceptual segmentation when the recurring segment was shorter and occurred interleaved with random noise. These results support the notion that perceptual segmentation of periodic noise relies on subtle temporally local features. However, the finding that some specific noise sequences were segmented more consistently across listeners than others challenges the assumption that the features are necessarily idiosyncratic. Instead, in some specific noise samples, a preference for certain spectral features is shared between individuals.
... This result set the basis for studying the electrophysiological properties of these newly generated neurons, which is currently in progress. Preclinical and clinical studies, however, demonstrated that the performance of sensory systems in the cerebral cortex can be substantially improved through intensive learning and practice and that these improvements are mediated by plastic changes in key neural networks (Buonomano & Merzenich, 1998;Gilbert et al., 2001). Our data using CRW would indicate that hypoxia may act as the driving force of neuronal adaptation to increased demand. ...
Thesis
Full-text available
Article
Reading is both a visual and a linguistic task, and as such it relies on both general-purpose visual mechanisms and more abstract, meaning-oriented processes. Disentangling the roles of these resources is of paramount importance in reading research. The present study capitalizes on the coupling of Fast Periodic Visual Stimulation (FPVS; Rossion, 2014) and MEG recordings to address this issue and investigate the role of different kinds of visual and linguistic units in the visual word identification system. We compared strings of pseudo-characters (BACS; C. Vidal & Chetail, 2017); strings of consonants (e.g., sfcl); readable, but unattested strings (e.g., amsi); frequent, but non-meaningful chunks (e.g., idge); suffixes (e.g., ment); and words (e.g., vibe); and looked for discrimination responses with a particular focus on the ventral, occipito-temporal regions. The results revealed sensitivity to alphabetic, readable, familiar and lexical stimuli. Interestingly, there was no discrimination between suffixes and equally frequent, but meaningless endings, thus highlighting a lack of sensitivity to semantics. Taken together, the data suggest that the visual word identification system, at least in its early processing stages, is particularly tuned to form-based regularities, most likely reflecting its reliance on general-purpose, statistical learning mechanisms that are a core feature of the visual system as implemented in the ventral stream.
Thesis
Full-text available
Learning and memory of recurring sound patterns play a crucial role for efficient perception of acoustic signals that dynamically unfold in time. Human listeners are remarkably sensitive to patterns, i.e., short sound segments that repeat within continuous auditory input, and can form robust memory representations for distinct patterns through repeated exposure. While there is compelling evidence for an exceptional perceptual learning capacity even for novel and meaningless acoustic patterns, less is known about whether and how the acquisition of memory representations at multiple time scales is modulated by the listening context. The present thesis comprises one behavioural and two EEG studies, which aimed to explore pattern repetition detection in continuous sounds as well as implicit memory formation for specific patterns that recur over a longer time scale (unbeknownst to the participants) under different listening conditions. More specifically, three aspects of the listening context were experimentally manipulated: presentation format of the repeating pattern (Study 1), listeners’ attentional focus, and temporal regularity of pattern repetition within a longer continuous sound sequence (Study 2 & 3). Combined results suggest that learning of acoustic patterns through repetition builds on a flexible mechanism that is robust against varying contextual demands, such as they often occur during naturalistic listening. Despite reliable pattern repetition detection and longer-term memory formation across all listening contexts, certain contextual features enhanced short-term perceptual representations, which in turn improved longer term memory formation. Together, these findings advance the understanding of (shorter- and longer-term) memory acquisition for novel acoustic patterns and suggest that auditory perceptual learning can be facilitated through targeted design of listening contexts.
Article
Full-text available
Visual perceptual learning (VPL), experience-induced gains in discriminating visual features, has been studied extensively and intensively for many years; its profile in feature space, however, remains unclear. Here, human subjects were trained to perform either a simple low-level feature (grating orientation) or a complex high-level object (face view) discrimination task over a long time course. During, immediately after and one month after training, all results showed that, in feature space, VPL in grating orientation discrimination had a center-surround profile, whereas VPL in face view discrimination had a monotonic gradient profile. Importantly, these two profiles could be reproduced by a deep convolutional neural network, a modified AlexNet consisting of seven and twelve layers, respectively. Altogether, our study reveals for the first time a feature hierarchy-dependent profile of VPL in feature space, placing a necessary constraint on our understanding of the neural computation of VPL.
Article
Full-text available
Interpretation of neural activity in response to stimulations received from the surrounding environment is necessary to realize automatic brain decoding. Analyzing the brain recordings corresponding to visual stimulation helps to infer the effects of perception occurring by vision on brain activity. In this paper, the impact of arithmetic concepts on vision-related brain records has been considered and an efficient convolutional neural network-based generative adversarial network (CNN-GAN) is proposed to map the electroencephalogram (EEG) to salient parts of the image stimuli. The first part of the proposed network consists of depth-wise one-dimensional convolution layers to classify the brain signals into 10 different categories according to Modified National Institute of Standards and Technology (MNIST) image digits. The output of the CNN part is fed forward to a fine-tuned GAN in the proposed model. The performance of the proposed CNN part is evaluated via the visually provoked 14-channel MindBigData recorded by David Vivancos, corresponding to images of 10 digits. An average accuracy of 95.4% is obtained for the CNN part for classification. The performance of the proposed CNN-GAN is evaluated based on saliency metrics of SSIM and CC equal to 92.9% and 97.28%, respectively. Furthermore, the EEG-based reconstruction of MNIST digits is accomplished by transferring and tuning the improved CNN-GAN’s trained weights.
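A minimal PyTorch sketch of a depth-wise 1-D convolutional classifier of the kind described above (one temporal filter per EEG channel followed by a linear read-out to 10 digit classes) is given below. The layer sizes, pooling and read-out are illustrative assumptions, not the published CNN-GAN architecture.

```python
import torch
import torch.nn as nn

class DepthwiseEEGClassifier(nn.Module):
    """Depth-wise 1-D convolution over each EEG channel separately, followed by
    pooling and a linear read-out to 10 digit classes.  All layer sizes are
    illustrative, not the published CNN-GAN settings."""
    def __init__(self, n_channels=14, n_classes=10, kernel_size=7):
        super().__init__()
        # groups=n_channels makes the convolution depth-wise: one filter per channel.
        self.depthwise = nn.Conv1d(n_channels, n_channels, kernel_size,
                                   groups=n_channels, padding=kernel_size // 2)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.readout = nn.Linear(n_channels, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = torch.relu(self.depthwise(x))
        h = self.pool(h).squeeze(-1)       # (batch, channels)
        return self.readout(h)

# Example: a batch of 8 trials, 14 channels, 256 time samples.
logits = DepthwiseEEGClassifier()(torch.randn(8, 14, 256))
print(logits.shape)  # torch.Size([8, 10])
```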
Chapter
The ability to process haptic, proprioceptive, and tactile stimuli can be trained due to the neuronal plasticity of the human central nervous system. Plasticity, in the neuroscientific sense, describes the ability of the central nervous system to adapt to changing environmental conditions and requirements. Accordingly, the performance of the haptic system can be improved by suitable training, similarly to motor training processes. This chapter contains sensory training of healthy adults, proprioception and balance training in older age, sensory rehabilitation after stroke, neuropsychological tests and trainings for education and clinical use, and propaedeutic considerations for education of clinical examinations and hands-on therapy.
Preprint
Learning to discriminate overlapping gustatory stimuli that predict distinct outcomes – a feat known as discrimination learning – can mean the difference between ingesting a poison or a nutritive meal. Despite the obvious importance of this process, very little is known about the neural basis of taste discrimination learning. In other sensory modalities, this form of learning can be mediated by either sharpening of sensory representations or an enhanced ability of "decision-making" circuits to interpret sensory information. Given the dual role of the gustatory insular cortex (GC) in encoding both sensory and decision-related variables, this region represents an ideal site for investigating how neural activity changes as animals learn a novel taste discrimination. Here we present results from experiments relying on two-photon calcium imaging of GC neural activity in mice performing a taste-guided mixture discrimination task. The task allows for recording of neural activity before and after learning induced by training mice to discriminate increasingly similar pairs of taste mixtures. Single-neuron and population analyses show a time-varying pattern of activity, with early sensory responses emerging after taste delivery and binary, choice-encoding responses emerging later in the delay before a decision is made. Our results demonstrate that while both sensory and decision-related information is encoded by GC in the context of a taste mixture discrimination task, learning and improved performance are associated with a specific enhancement of decision-related responses.
Article
Full-text available
Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction - representing features that predict future sensory input from past input (Singer et al., 2018). Here we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
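A toy, linear rendition of the temporal-prediction idea sketched above (purely illustrative: the surrogate data and window length are assumptions, and the published model is a trained neural network rather than a least-squares fit):

```python
# Toy temporal prediction: learn a linear map from the past k frames of a
# signal to the next frame; the learned weights play the role of features
# that represent what in the past predicts the future.
import numpy as np

rng = np.random.default_rng(0)
T, D, k = 5000, 16, 4                           # time steps, "pixels" per frame, past-window length
x = rng.standard_normal((T, D)).cumsum(axis=0)  # smooth surrogate stimulus (assumption)

past = np.stack([x[t - k:t].ravel() for t in range(k, T)])   # (T-k, k*D) past windows
future = x[k:]                                               # (T-k, D) next frames
W, *_ = np.linalg.lstsq(past, future, rcond=None)            # temporal-prediction weights
r2 = 1 - ((future - past @ W) ** 2).sum() / ((future - future.mean(0)) ** 2).sum()
print(f"variance of the next frame explained from the past: {r2:.2f}")
```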
Chapter
This study identifies and adopts the key principles for sustainable lean interior design in the construction industry. The concept of lean management has been of interest to researchers and practitioners in the construction industry and is implemented in most countries. This paper is part of a study that critically investigates the factors affecting the process of selecting lean tools and techniques in the public-sector construction industry in the four governorates of the Kingdom of Bahrain. A quantitative research methodology was adopted in this study. The investigation will determine the level of lean construction management implementation and consequently its effect on sustainability in the interior design of housing projects. It is hoped that this study will benefit both academics and practitioners.
Article
Learning to discriminate between environmental visual stimuli is essential for making correct decisions and guiding appropriate behaviors. Moreover, impairments in visual discrimination learning are observed in several neuropsychiatric disorders. Visual discrimination learning requires perception and memory processing, in which the hippocampus is critically involved. To understand the molecular mechanisms underpinning hippocampus function in visual discrimination learning, we examined the hippocampal gene expression profiles of Sprague-Dawley rats with different cognitive performance (high cognition group vs. low cognition group) in a modified visual discrimination learning task, using high-throughput RNA sequencing technology. Bioinformatics analysis indicated that, compared with the low cognition group, 319 genes were differentially expressed with statistical significance in the high cognition group, of which 253 genes were down-regulated and 66 genes were up-regulated. The functional enrichment analysis showed that protein translation and energy metabolism were up-regulated pathways, while the transforming growth factor beta receptor signaling pathway, bone morphogenetic protein signaling pathway, apoptosis, inflammatory response, transport, and glycosaminoglycan metabolism were down-regulated pathways; these were related to good cognitive performance in the visual discrimination learning task. Taken together, our findings reveal the differentially expressed genes and enriched biological pathways related to differences in cognitive performance in visual discrimination learning of rats, which provides direct insight into the molecular mechanisms of hippocampus function in visual discrimination learning and may contribute to developing potential treatment strategies for neuropsychiatric disorders accompanied by cognitive impairments.
Article
Statistical regularities and predictions can influence the earliest stages of visual processing. Studies examining their effects on detection, however, have yielded inconsistent results. In continuous flash suppression (CFS), where a static image projected to one eye is suppressed by a dynamic image presented to the other, the predictability of the suppressed signal may facilitate or delay detection. To identify the factors that differentiate these outcomes and dissociate the effects of expectation from those of behavioral relevance, we conducted three CFS experiments that addressed confounds related to the use of reaction time measures and complex images. In Experiment 1, orientation recognition performance and visibility rates increased when a suppressed line segment completed a partial shape surrounding the CFS patch, demonstrating that valid configuration cues facilitate detection. In Experiment 2, however, predictive cues marginally affected visibility and did not modulate localization performance, challenging existing findings. In Experiment 3, a relevance manipulation was introduced; participants pressed a key upon detecting lines of a particular orientation, ignoring the other possible orientation. Visibility and localization were enhanced for relevant orientations. Predictive cues modulated visibility, orientation recognition sensitivity, and response latencies, but not localization (an objective measure sensitive to partial breakthrough). Thus, while a consistent surround can strongly enhance detection during passive observation, predictive cueing primarily affects post-detection factors such as response readiness and recognition confidence. Relevance and predictability did not interact, suggesting that the contributions of these two processes to detection are mostly orthogonal.
Article
Full-text available
Perception provides us with access to the external world, but that access is shaped by our own experiential histories. Through perceptual learning, we can enhance our capacities for perceptual discrimination, categorization, and attention to salient properties. We can also encode harmful biases and stereotypes. This article reviews interdisciplinary research on perceptual learning, with an emphasis on the implications for our rational and normative theorizing. Perceptual learning raises the possibility that our inquiries into topics such as epistemic justification, aesthetic criticism, and moral knowledge should include not only an examination of cognition but also of perception.
Article
Rapid extraction of temporal and spatial patterns from repeated experience is known as statistical learning (SL). Studies on SL show that after a few minutes of exposure, observers exhibit knowledge of regularities hidden in a sequence or array of objects. Previous findings suggest that visuo-spatial statistical learning might relate to numerical processing mechanisms. Hence, the current study examines for the first time visuo-spatial SL in a population with a deficiency in the numerical system: individuals with mathematical learning difficulties (MLD). Thirty-two female participants (16 with MLD and 16 matched controls) were tested on a visuo-spatial statistical learning task. The results revealed that visuo-spatial SL was significantly worse in the MLD group than in the control group, although the MLD group performed as well as controls in a visual discrimination task. In addition, whereas the control group showed reliable visuo-spatial SL above chance, the MLD group did not. Because learned regularities can broadly facilitate cognitive processing, individuals with MLD may thus suffer from additional behavioural challenges beyond their numerical difficulties.
Article
Full-text available
Numerous studies have found that repetitive transcranial magnetic stimulation (rTMS) modulates plasticity. rTMS has often been used to change neural networks underlying learning, often under the assumption that the mechanism of rTMS-induced plasticity should be highly similar to that associated with learning. The presence of visual perceptual learning (VPL) reveals the plasticity of early visual systems, which is formed through multiple phases. Hence, we tested how high-frequency (HF) rTMS and VPL modulate visual plasticity by investigating neurometabolic changes in early visual areas. We employed an excitatory-to-inhibitory (E/I) ratio, defined as glutamate concentration divided by GABA+ concentration, as an index of the degree of plasticity. We compared neurotransmitter concentration changes after applying HF rTMS to the visual cortex with those after training in a visual task, in otherwise identical procedures. Both the time courses of the E/I ratios and the neurotransmitter contributions to the E/I ratio differed significantly between the HF rTMS and training conditions. The peak E/I ratio occurred 3.5 h after HF rTMS with decreased GABA+, whereas the peak E/I ratio occurred 0.5 h after visual training with increased glutamate. Furthermore, HF rTMS temporarily decreased the thresholds for detecting phosphenes and perceiving low-contrast stimuli, indicating increased visual plasticity. These results suggest that the plasticity induced in early visual areas by HF rTMS is not involved in the early phase of VPL development, which occurs during and immediately after training.
Article
Background: Converging lines of evidence point to hippocampal dysfunction in psychosis spectrum disorders, including altered functional connectivity. Evidence also suggests that antipsychotic medications can modulate hippocampal dysfunction. The goal of this project was to identify patterns of hippocampal connectivity predictive of response to antipsychotic treatment in 2 cohorts of patients with a psychosis spectrum disorder, one medication-naïve and the other one unmedicated. Hypothesis: We hypothesized that we would identify reliable patterns of hippocampal connectivity in the 2 cohorts that were predictive of treatment response and that medications would modulate abnormal hippocampal connectivity after 6 weeks of treatment. Study design: We used a prospective design to collect resting-state fMRI scans prior to antipsychotic treatment and after 6 weeks of treatment with risperidone, a commonly used antipsychotic medication, in both cohorts. We enrolled 44 medication-naïve first-episode psychosis patients (FEP) and 39 unmedicated patients with schizophrenia (SZ). Study results: In both patient cohorts, we observed a similar pattern where greater hippocampal connectivity to regions of the occipital cortex was predictive of treatment response. Lower hippocampal connectivity of the frontal pole, orbitofrontal cortex, subcallosal area, and medial prefrontal cortex was predictive of treatment response in unmedicated SZ, but not in the medication-naïve cohort. Furthermore, greater reduction in hippocampal connectivity to the visual cortex with treatment was associated with better clinical response. Conclusions: Our results suggest that greater connectivity between the hippocampus and occipital cortex is not only predictive of better treatment response, but that antipsychotic medications have a modulatory effect by reducing hyperconnectivity.
Chapter
First- and second-order systems have been proposed to explain visual information processing. Regarding communication between the two systems, mixed results have been reported. The transfer of perceptual learning between first- and second-order systems was examined in fine orientation discrimination tasks. Observers were either trained with luminance-modulated (LM) orientation and tested with contrast-modulated (CM) orientation (Experiment 1) or trained with CM orientation and tested with LM orientation (Experiment 2). The difficulty of the discrimination of the two types of orientations was equalized. Learning curves were tracked and compared between observers who had training and those who had no training. Results showed that the performance of observers trained with LM orientation improved rapidly in the CM task and vice versa, while the performance of untrained observers tended to stay low. This two-way transfer suggests that there are bidirectional communications between first- and second-order systems, wherein higher-level cortical areas might be involved and the recruitment of a common population of neurons might play an important role. Keywords: Perceptual learning; First-order; Second-order; Transfer; Orientation discrimination; Contrast-modulated
Chapter
Centered on three themes, this book explores the latest research in plasticity in sensory systems, focusing on visual and auditory systems. It covers a breadth of recent scientific study within the field including research on healthy systems and diseased models of sensory processing. Topics include visual and visuomotor learning, models of how the brain codes visual information, sensory adaptations in vision and hearing as a result of partial or complete visual loss in childhood, plasticity in the adult visual system, and plasticity across the senses, as well as new techniques in vision recovery, rehabilitation, and sensory substitution of other senses when one sense is lost. This unique edited volume, the fruit of an International Conference on Plastic Vision held at York University, Toronto, will provide students and scientists with an overview of the ongoing research related to sensory plasticity and perspectives on the direction of future work in the field.
Article
Full-text available
How do deaf and deafblind individuals process touch? This question offers a unique model to understand the prospects and constraints of neural plasticity. Our brain constantly receives and processes signals from the environment and combines them into the most reliable information content. The nervous system adapts its functional and structural organization according to the input, and perceptual processing develops as a function of individual experience. However, there are still many unresolved questions regarding the deciding factors for these changes in deaf and deafblind individuals, and so far, findings are not consistent. To date, most studies have not taken the sensory and linguistic experiences of the included participants into account. As a result, the impact of sensory deprivation vs. language experience on somatosensory processing remains inconclusive. Even less is known about the impact of deafblindness on brain development. The resulting neural adaptations could be even more substantial, but no clear patterns have yet been identified. How do deafblind individuals process sensory input? Studies on deafblindness have mostly focused on single cases or groups of late-blind individuals. Importantly, the language backgrounds of deafblind communities are highly variable and include the usage of tactile languages. So far, this kind of linguistic experience and its consequences have not been considered in studies on basic perceptual functions. Here, we will provide a critical review of the literature, aiming at identifying determinants for neuroplasticity and gaps in our current knowledge of somatosensory processing in deaf and deafblind individuals.
Article
Full-text available
Threat and extinction memories are crucial for organisms’ survival in changing environments. These memories are believed to be encoded by separate ensembles of neurons in the brain, but their whereabouts remain elusive. Using an auditory fear-conditioning and extinction paradigm in male mice, here we discovered two distinct projection neuron subpopulations in physical proximity within the insular cortex (IC), targeting the central amygdala (CeA) and the nucleus accumbens (NAc), respectively, that encode fear and extinction memories. Reciprocal intracortical inhibition of these two IC subpopulations gates the emergence of either fear or extinction memory. Using rabies-virus-assisted tracing, we found IC-NAc projection neurons to be preferentially innervated by intercortical inputs from the orbitofrontal cortex (OFC), specifically enhancing extinction to override fear memory. These results demonstrate that the IC serves as an operational node harboring distinct projection neurons that decipher fear or extinction memory under top-down executive control from the OFC.
Article
A new study provides insight into the neuronal mechanisms that underlie visual learning in the tree shrew, revealing how improved coding for trained stimuli in visual cortex can negatively affect the perception of other stimuli.
Article
The concept of lean management has been of interest to researchers and practitioners in the construction industry and is implemented in most countries. While lean has achieved waste mitigation in developed countries, it is still in its elementary stages in the Middle East. Moreover, implementation in the Middle East poses more challenges than benefits when choosing the right lean tools and techniques for projects. This study will critically investigate the factors affecting the process of adopting lean tools and techniques, directed towards technology, artificial intelligence, and IoT, in the public-sector construction industry in the four governorates of the Kingdom of Bahrain. Thereafter, the investigation will determine the level of lean construction management implementation and consequently its effect on sustainability, using a quantitative method. It is hoped that the questionnaire results will shed light on this poorly researched topic in Bahrain and benefit both academics and practitioners.
Article
Full-text available
Neural assemblies in a number of animal species display self-organized, synchronized oscillations in response to sensory stimuli in a variety of brain areas. In the olfactory system of insects, odour-evoked oscillatory synchronization of antennal lobe projection neurons (PNs) is superimposed on slower and stimulus-specific temporal activity patterns. Hence, each odour activates a specific and dynamic projection neuron assembly whose evolution during a stimulus is locked to the oscillation clock. Here we examine, using locusts, the changes in population dynamics of projection-neuron assemblies over repeated odour stimulations, as would occur when an animal first encounters and then repeatedly samples an odour for identification or localization. We find that the responses of these assemblies rapidly decrease in intensity, while they show a marked increase in spike time precision and inter-neuronal oscillatory coherence. Once established, this enhanced precision in the representation endures for several minutes. This change is stimulus-specific, and depends on events within the antennal lobe circuits, independent of olfactory receptor adaptation: it may thus constitute a form of sensory memory. Our results suggest that this progressive change in olfactory network dynamics serves to converge, over repeated odour samplings, on a more precise and readily classifiable odour representation, using relational information contained across neural assemblies.
Article
Full-text available
A prominent and stereotypical feature of cortical circuitry in the striate cortex is a plexus of long-range horizontal connections, running for 6-8 mm parallel to the cortical surface, which has a clustered distribution. This is seen for both intrinsic cortical connections within a particular cortical area and the convergent and divergent connections running between area 17 and other cortical areas. To determine if these connections are related to the columnar functional architecture of cortex, we combined labeling of the horizontal connections by retrograde transport of rhodamine-filled latex microspheres (beads) and labeling of the orientation columns by 2-deoxyglucose autoradiography. We first mapped the distribution of orientation columns in a small region of area 17 or 18, then made a small injection of beads into the center of an orientation column of defined specificity, and after allowing for retrograde transport, labeled vertical orientation columns with the 2-deoxyglucose technique. The retrogradely labeled cells were confined to regions of orientation specificity similar to that of the injection site, indicating that the horizontal connections run between columns of similar orientation specificity. This relationship was demonstrated for both the intrinsic horizontal and corticocortical connections. The extent of the horizontal connections, which allows single cells to integrate information over larger parts of the visual field than that covered by their receptive fields, and the functional specificity of the connections, suggests possible roles for these connections in visual processing.
Article
Full-text available
What happens to visual experience in the absence of visual attention? Does lack of attention render us effectively blind, or is there a significant residual experience? Here I show that the surprising results of a recent study were due not to the novel way in which attention was controlled, but simply to the use of novice rather than expert observers. So the evidence remains strong that some aspects of visual experience are essentially independent of attention.
Article
Full-text available
Tested the 2-process theory of detection, search, and attention presented by the current authors (1977) in a series of experiments. The studies (a) demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; (b) trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and (c) show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention.
Article
Full-text available
A 2-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically--without S control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the S. A series of studies, with approximately 8 Ss, using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search was utilized in varied-mapping paradigms, and in the present studies, it took the form of serial, terminating search.
Article
Full-text available
In this paper, we report that when the low-level features of targets and distractors are held constant, visual search performance can be strongly influenced by familiarity. In the first condition, a was the target amid as distractors, and vice versa. The response time increased steeply as a function of number of distractors (82 msec/item). When the same stimuli were rotated by 90° (the second condition), however, they became familiar patterns— and—and gave rise to much shallower search functions (31 msec/item). In the third condition, when the search was for a familiar target, (or), among unfamiliar distractors, (or), the slope was about 46 msec/item. In the last condition, when the search was for an unfamiliar target, (or), among familiar distractors, s (or s), parallel search functions were found with a slope of about 1.5 msec/item. These results show that familiarity speeds visual search and that it does so principally when the distractors, not the targets, are familiar.
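For concreteness, a search slope such as the 82 or 31 msec/item figures above is simply the least-squares slope of mean response time against display size; the sketch below illustrates the computation with invented numbers, not the study's data.

```python
# Estimate a visual-search slope (ms per item) from mean RTs at each set size.
# The RT values below are invented for illustration, not the study's data.
import numpy as np

set_size = np.array([4, 8, 12, 16])
mean_rt = np.array([620.0, 950.0, 1270.0, 1600.0])    # hypothetical mean RTs (ms)
slope, intercept = np.polyfit(set_size, mean_rt, 1)   # least-squares line
print(f"slope ~ {slope:.1f} ms/item, intercept ~ {intercept:.0f} ms")
```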
Article
Full-text available
The ability to detect small differences in the positions of two lines (vernier acuity) showed some improvement with practice in all eight subjects, even for subjects given no error feedback. The average decline in threshold with training (2,000–2,500 responses) was about 40%. We used three target orientations: vertical, horizontal, and right oblique. Orientational differences remained stable in only one subject. In five subjects, orientational differences present at the beginning of training diminished or disappeared with increased experience; in two, they increased.
Article
Full-text available
We investigated the relationship between focal attention and a feature-gradient detection that is performed in a parallel manner. We found that a feature gradient can be detected without measurable impairment of performance even while a concurrent form-recognition task is carried out, in spite of the fact that the form-recognition task engages focal attention and thus removes attentive resources from the vicinity of the feature gradient. This outcome suggests strongly that certain perceptions concerning salient boundaries and singularities in a visual scene can be accomplished without the aid of resource-limited processes, such as focal attention, and, by implication, that there may exist two distinct perceptual faculties (one attentive, the other not) that are able to bring complementary kinds of visual information simultaneously to our awareness.
Article
Full-text available
Properties of the receptive fields of simple cells in macaque cortex were compared with properties of independent component filters generated by independent component analysis (ICA) on a large set of natural images. Histograms of spatial frequency bandwidth, orientation tuning bandwidth, aspect ratio and length of the receptive fields match well. This indicates that simple cells are well tuned to the expected statistics of natural stimuli. There is no match, however, in calculated and measured distributions for the peak of the spatial frequency response: the filters produced by ICA do not vary their spatial scale as much as simple cells do, but are fixed to scales close to the finest ones allowed by the sampling lattice. Possible ways to resolve this discrepancy are discussed.
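A minimal sketch of this style of analysis, using scikit-learn's FastICA on image patches; random data stands in for real natural-image patches here, so the resulting filters will not be Gabor-like, but the pipeline is the same.

```python
# Sketch: learn independent-component filters from image patches and reshape
# them for comparison with receptive fields. Replace the random array with
# patches cut from natural images to obtain localized, oriented filters.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.standard_normal((20000, 12 * 12))   # stand-in for 12x12 natural-image patches
patches -= patches.mean(axis=0)                   # remove the mean before ICA

ica = FastICA(n_components=64, max_iter=500, random_state=0)
ica.fit(patches)
filters = ica.components_.reshape(-1, 12, 12)     # one candidate "receptive field" per row
print(filters.shape)                              # (64, 12, 12)
```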
Article
Full-text available
The intrinsic connections of the cortex have long been known to run vertically, across the cortical layers. In the present study we have found that individual neurons in the cat primary visual cortex can communicate over surprisingly long distances horizontally (up to 4 mm), in directions parallel to the cortical surface. For all of the cells having widespread projections, the collaterals within their axonal fields were distributed in repeating clusters, with an average periodicity of 1 mm. This pattern of extensive clustered projections has been revealed by combining the techniques of intracellular recording and injection of horseradish peroxidase with three-dimensional computer graphic reconstructions. The clustering pattern was most apparent when the cells were rotated to present a view parallel to the cortical surface. The pattern was observed in more than half of the pyramidal and spiny stellate cells in the cortex and was seen in all cortical layers. In our sample, cells made distant connections within their own layer and/or within another layer. The axon of one cell had clusters covering the same area in two layers, and the clusters in the deeper layer were located under those in the upper layer, suggesting a relationship between the clustering phenomenon and columnar cortical architecture. Some pyramidal cells did not project into the white matter, forming intrinsic connections exclusively. Finally, the axonal fields of all our injected cells were asymmetric, extending for greater distances along one cortical axis than along the orthogonal axis. The axons appeared to cover areas of cortex representing a larger part of the visual field than that covered by the excitatory portion of the cell's own receptive field. These connections may be used to generate larger receptive fields or to produce the inhibitory flanks in other cells' receptive fields.
Article
Full-text available
Recording brain activity in vivo during learning is fundamental to understanding how memories are formed. We used functional calcium imaging to track odor representations in the primary chemosensory center of the honeybee, the antennal lobe, while training animals to discriminate a rewarded odor from an unrewarded one. Our results show that associative learning transforms odor representations and decorrelates activity patterns for the rewarded versus the unrewarded odor, making them less similar. Additionally, activity for the rewarded but not for the unrewarded odor is increased. These results indicate that neural representations of the environment may be modified through associative learning.
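The "decorrelation" reported above is typically quantified as the pairwise correlation between population response vectors for the two odors, which decreases with training; the sketch below illustrates that measure on simulated responses (the numbers are invented, not the study's data).

```python
# Similarity of two odor representations as the Pearson correlation between
# population response vectors; simulated numbers, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
base = rng.poisson(5.0, size=100).astype(float)   # 100 units' responses to odor A
pre = base + rng.normal(0.0, 1.0, 100)            # odor B before training: similar to A
post = base + rng.normal(0.0, 4.0, 100)           # odor B after training: less similar
print("similarity before training:", round(np.corrcoef(base, pre)[0, 1], 2))
print("similarity after training: ", round(np.corrcoef(base, post)[0, 1], 2))
```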
Article
Full-text available
The neuronal structure and connectivity underlying receptive field organisation of cells in the cat visual cortex have been investigated. Intracellular recordings were made using a micropipette filled with a histochemical marker, which was injected into the cells after their receptive fields had been characterised. This allowed visualisation of the dendritic and axonal arborisations of functionally identified neurones.
Article
Full-text available
Immediately after focal retinal lesions, receptive fields (RFs) in primary visual cortex expand considerably, even when the retinal damage is limited to the photoreceptor layer. The time course of these changes suggests that mere lack of stimulation in the vicinity of the RF accompanied by stimulation in the surrounding region causes the RF expansion. While recording from single cells in cat area 17, we simulated this pattern of stimulation with a pattern of moving lines in the visual field, masking out an area covering the RF of the recorded cell, thereby producing an "artificial scotoma." Over approximately 10 min this masking resulted in a 5-fold average expansion in RF area. Stimulating the RF center caused the field to collapse in size, returning to near its original extent; reconditioning with the masked stimulus led to RF reexpansion. Stimulation in the surrounding region was required for the RF expansion to occur--little expansion was seen during exposure to a blank screen. We propose that the expansion may account for visual illusions, such as perceptual fill-in of stabilized images and illusory contours and may constitute the prodrome of altered cortical topography after retinal lesions. These findings support the idea that even in adult animals RFs are dynamic, capable of being altered by the sensory context.
Article
Full-text available
A differential pairing procedure was applied in vivo to individual neurons in the primary visual cortex of anesthetized paralyzed cats, in order to produce changes in their relative orientation preference. While we recorded from a single cell, its visual response to a light bar was driven iontophoretically to a "high" level when stimulating with an initially nonpreferred orientation (S+), and alternately reduced to a "low" level when stimulating with the preferred orientation (S-). This associative procedure was devised to test the possible role of neuronal coactivity in controlling the plasticity of orientation selectivity. Among 87 cells tested, 35 (40%) showed significant long-lasting changes, either in the relative orientation preference for the two "paired" stimuli S+ and S-, in the global orientation tuning profile, or in both. Measurements of relative orientation preference demonstrated significant effects in 27 cells (31%), all in favor of the positively reinforced orientation (S+). Modifications of orientation selectivity (studied over the entire orientation spectrum in 45 of the conditioned cells) usually consisted (21 out of 25 modified cells) of a competitive reorganization of the orientation tuning curve: the preferred orientation shifted toward S+, and a loss of relative visual responsiveness was observed for orientations close to the negatively reinforced orientation (S-). The largest changes were found in deprived kittens at the peak of the critical period, although the probability of inducing a significant change studied during the first year of postnatal life was independent of age. These functional modifications demonstrated at the cellular level are analogous to those induced by a global manipulation of the visual environment, when only a restricted spectrum of orientations is experienced during the critical period. Our results support the hypothesis that covariance levels between pre- and postsynaptic activity determine the sign and the amplitude of the modification of efficacy of cortical synapses.
Article
Full-text available
In many different spatial discrimination tasks, such as in determining the sign of the offset in a vernier stimulus, the human visual system exhibits hyperacuity by evaluating spatial relations with the precision of a fraction of a photoreceptor's diameter. It is proposed that this impressive performance depends in part on a fast learning process that uses relatively few examples and that occurs at an early processing stage in the visual pathway. This hypothesis is given support by the demonstration that it is possible to synthesize, from a small number of examples of a given task, a simple network that attains the required performance level. Psychophysical experiments agree with some of the key predictions of the model. In particular, fast stimulus-specific learning is found to take place in the human visual system, and this learning does not transfer between two slightly different hyperacuity tasks.
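As a toy version of "synthesize a simple network from a few examples", the sketch below generates a handful of one-dimensional vernier-like stimuli with a sub-pixel offset and trains a small feedforward network to report the offset's sign; the stimulus construction and network size are illustrative assumptions, not the authors' model.

```python
# Toy fast, example-based learning of a hyperacuity-like task: a few synthetic
# vernier stimuli (two blurred line profiles with a sub-pixel offset) and a
# small network trained to report the sign of the offset.
import numpy as np
from sklearn.neural_network import MLPClassifier

def vernier(offset, n=32, sigma=1.5, rng=None):
    """1-D caricature of a vernier stimulus: two Gaussian line profiles whose
    centres differ by `offset` pixels (the offset can be a fraction of a pixel)."""
    x = np.arange(n)
    upper = np.exp(-((x - n / 2) ** 2) / (2 * sigma ** 2))
    lower = np.exp(-((x - n / 2 - offset) ** 2) / (2 * sigma ** 2))
    noise = rng.normal(0, 0.02, 2 * n) if rng is not None else 0.0
    return np.concatenate([upper, lower]) + noise

rng = np.random.default_rng(0)
offsets = rng.choice([-0.3, 0.3], size=60)                 # a small set of sub-pixel offsets
X = np.array([vernier(o, rng=rng) for o in offsets])
y = (offsets > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", net.score(X, y))
```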
Article
The presence of "maps" in sensory cortex is a hallmark of the mammalian nervous system, but the functional significance of topographic organization has been called into question by physiological studies claiming that patterns of neural behavioral activity transcend topographic boundaries. This paper discusses recent behavioral and physiological studies suggesting that, when animals or human subjects learn perceptual tasks, the neural modifications associated with the learning are distributed according to the spatial arrangement of the primary sensory cortical map. Topographical cortical representations of sensory events, therefore, appear to constitute a true structural framework for information processing and plasticity. (C) 1999 John Wiley & Sons, Inc.
Article
Intracortical injections of horseradish peroxidase (HRP) reveal a system of periodically organized intrinsic connections in primate striate cortex. In layers 2 and 3 these connections form a reticular or latticelike pattern, extending for about 1.5–2.0 mm around an injection. This connectional lattice is composed of HRP-labeled walls (350–450 μm apart in Saimiri and about 500–600 μm in macaque) surrounding unlabeled central lacunae. Within the lattice walls there are regularly arranged punctate loci of particularly dense HRP label, appearing as isolated patches as the lattice wall labeling thins further from the injection site. A periodic organization has also been demonstrated for the intrinsic connections in layer 4B, which are apparently in register with the supragranular periodicities, although separated from these by a thin unlabeled region. The 4B lattice is particularly prominent in squirrel monkey, extending for 2–3 mm from an injection. In both layers, these intrinsic connections are demonstrated by orthogradely and retrogradely transported HRP and seem to reflect a system of neurons with long horizontal axon collaterals, presumably with arborizations at regularly spaced intervals. The intrinsic connectional lattice in layers 2 and 3 resembles the repetitive array of cytochrome oxidase activity in these layers; but despite similarities of dimension and pattern, the two systems do not appear identical. In primate, as previously described in tree shrews (Rockland et al., 1982), the HRP-labeled anatomical connections resemble the pattern of 2-deoxy-glucose accumulation resulting from stimulation with oriented lines, although the functional importance of these connections remains obscure.
Article
Line orientation discrimination improves with selective practice for oblique orientations and not for principal orientations. This training effect was observed with an identification task as well as with two alternative forced choice tasks. Despite the improvement for oblique orientations, just noticeable differences in orientation are still larger for the practised oblique orientation than for the principal orientations after 5000 practice trials. These findings suggest that the oblique effect in line orientation has at least two sensorial components, one of which is attributed to the meridional variations in the preferred orientation of area 17 S-cells.
Article
How do neurons of the visual cortex acquire their acute sensitivity to the orientation of a visual stimulus? The question has preoccupied those who study the cortex since Hubel and Wiesel first described orientation selectivity over twenty-five years ago. At the time, they proposed an elegant and enduring model for the origin of orientation selectivity. Fig. 1A, which is adapted from their original paper and which contains the essence of their model, is by now familiar to most students of the visual system and to many others besides. Yet the model, and the central question that it addresses, is still the subject of intense debate. Competing models have arisen in the intervening years, along with diverse experiments that bear on them.
Article
Performance on a wide range of perceptual tasks improves with practice. Most accounts of perceptual learning are concerned with changes in neuronal sensitivity or changes in the way a stimulus is represented. Another possibility is that different areas of the brain are involved in performing a task while learning it and after learning it. Here we demonstrate that the right parietal cortex is involved in novel but not learned visual conjunction search. We observed that single pulse transcranial magnetic stimulation (TMS) to the right parietal cortex impairs visual conjunction search when the stimuli are novel and require a serial search strategy, but not once the particular search task has been learned. The effect of TMS returns when a different, novel, serial search task is presented.
Article
By examining the experimental data on the statistical properties of natural scenes together with (retinal) contrast sensitivity data, we arrive at a first-principles theoretical hypothesis for the purpose of retinal processing and its relationship to an animal's environment. We argue that the retinal goal is to transform the visual input as much as possible into a statistically independent basis as the first step in creating a redundancy-reduced representation in the cortex, as suggested by Barlow. The extent of this whitening of the input is limited, however, by the need to suppress input noise. Our explicit theoretical solutions for the retinal filters also show a simple dependence on mean stimulus luminance: they predict an approximate Weber law at low spatial frequencies and a De Vries-Rose law at high frequencies. Assuming that the dominant source of noise is quantum, we generate a family of contrast sensitivity curves as a function of mean luminance. This family is compared to psychophysical data.
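An illustrative (not the paper's exact) version of this argument combines a whitening filter 1/sqrt(S(f)) with a Wiener-style noise suppressor S(f)/(S(f)+N). With an assumed 1/f^2 scene spectrum whose power scales with mean luminance, the product rises with spatial frequency and then rolls off where noise dominates, and its peak shifts with luminance in the qualitative way contrast-sensitivity curves do.

```python
# Illustrative noise-limited whitening filter (assumptions throughout, not the
# paper's derived solution): W(f) = [1/sqrt(S(f))] * [S(f)/(S(f)+N)].
import numpy as np

f = np.linspace(0.1, 30.0, 200)           # spatial frequency (cycles/deg), illustrative range
for luminance in (1.0, 10.0, 100.0):      # mean luminance scales signal power (assumption)
    S = luminance / f**2                  # assumed 1/f^2 natural-scene power spectrum
    N = 1.0                               # flat input-noise power (assumption)
    W = (1.0 / np.sqrt(S)) * (S / (S + N))
    peak_f = f[np.argmax(W)]
    print(f"luminance {luminance:6.1f}: filter peaks near {peak_f:.1f} c/deg")
```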
Article
Adult owl and squirrel monkeys were trained to master a small-object retrieval sensorimotor skill. Behavioral observations along with positive changes in the cortical area 3b representations of specific skin surfaces implicated specific glabrous finger inputs as important contributors to skill acquisition. The area 3b zones over which behaviorally important surfaces were represented were destroyed by microlesions, which resulted in a degradation of movements that had been developed in the earlier skill acquisition. Monkeys were then retrained at the same behavioral task. They could initially perform it reasonably well using the stereotyped movements that they had learned in prelesion training, although they acted as if key finger surfaces were insensate. However, monkeys soon initiated alternative strategies for small object retrieval that resulted in a performance drop. Over several- to many-week-long period, monkeys again used the fingers for object retrieval that had been used successfully before the lesion, and reacquired the sensorimotor skill. Detailed maps of the representations of the hands in SI somatosensory cortical fields 3b, 3a, and 1 were derived after postlesion functional recovery. Control maps were derived in the same hemispheres before lesions, and in opposite hemispheres. Among other findings, these studies revealed the following 1) there was a postlesion reemergence of the representation of the fingertips engaged in the behavior in novel locations in area 3b in two of five monkeys and a less substantial change in the representation of the hand in the intact parts of area 3b in three of five monkeys. 2) There was a striking emergence of a new representation of the cutaneous fingertips in area 3a in four of five monkeys, predominantly within zones that had formerly been excited only by proprioceptive inputs. This new cutaneous fingertip representation disproportionately represented behaviorally crucial fingertips. 3) There was an approximately two times enlargement of the representation of the fingers recorded in cortical area 1 in postlesion monkeys. The specific finger surfaces employed in small-object retrieval were differentially enlarged in representation. 4) Multiple-digit receptive fields were recorded at a majority of emergent, cutaneous area 3a sites in all monkeys and at a substantial number of area 1 sites in three of five postlesion monkeys. Such fields were uncommon in area 1 in control maps. 5) Single receptive fields and the component fields of multiple-digit fields in postlesion representations were within normal receptive field size ranges. 6) No significant changes were recorded in the SI hand representations in the opposite (untrained, intact) control hemisphere. These findings are consistent with "substitution" and "vicariation" (adaptive plasticity) models of recovery from brain damage and stroke.
Article
Ocular dominance columns were examined by a variety of techniques in juvenile macaque monkeys in which one eye had been removed or sutured closed soon after birth. In two monkeys the removal was done at 2 weeks and the cortex studied at 11/2 years. Physiological recordings showed continuous responses as an electrode advanced along layer IVC in a direction parallel to the surface. Examination of the cortex with the Fink-Heimer modification of the Nauta method after lesions confined to single lateral-geniculate layers showed a marked increase, in layer IVC, in the widths of columns belonging to the surviving eye, and a corresponding shrinkage of those belonging to the removed eye. Monocular lid closures were made in one monkey at 2 weeks of age, for a period of 18 months, in another at 3 weeks for 7 months, and in a third at 2 days for 7 weeks. Recordings from the lateral geniculate body showed brisk activity from the deprived layers and the usual abrupt eye transitions at the boundaries between layers. Cell shrinkage in the deprived layers was moderate - far less severe than that following eye removal, more marked ipsilaterally than contralaterally, and more marked the earlier the onset of the deprivation. In autoradiographs following eye injection with a mixture of tritiated proline and tritiated fucose the labelling of terminals was confined to geniculate layers corresponding to the injected eye. Animals in which the open eye was injected showed no hint of invasion of terminals into the deprived layers. Similarly in the tectum there was no indication of any change in the distribution of terminals from the two eyes. The autoradiographs of the lateral geniculates provide evidence for several previously undescribed zones of optic nerve terminals, in addition to the six classical subdivisions. In the cortex four independent methods, physiological recording, transneuronal autoradiography, Nauta degeneration, and a reduced-silver stain for normal fibres, all agreed in showing a marked shrinkage of deprived-eye columns and expansion of those of the normal eye, with preservation of the normal repeat distance (left-eye column plus right-eye column). There was a suggestion that changes in the columns were more severe when closure was done at 2 weeks as opposed to 3, and more severe on the side ipsilateral to the closure. The temporal crescent representation in layer IVC of the hemisphere opposite the closure showed no obvious adverse effects. Cell size and packing density in the shrunken IVth layer columns seemed normal. In one normal monkey in which an eye was injected the day after birth, autoradiographs of the cortex at 1 week indicated only a very mild degree of segregation of input from the two eyes; this had the form of parallel bands. Tangential recordings in layer IVC at 8 days likewise showed considerable overlap of inputs, though some segregation was clearly present; at 30 days the segregation was much more advanced. These preliminary experiments thus suggest that the layer IVC columns are not fully developed until some weeks after birth. Two alternate possibilities are considered to account for the changes in the ocular dominance columns in layer IVC following deprivation. If one ignores the above evidence in the newborn and assumes that the columns are fully formed at birth, then after eye closure the afferents from the normal eye must extend their territory, invading the deprived-eye columns perhaps by a process of sprouting of terminals. 
On the other hand, if at birth the fibres from each eye indeed occupy all of layer IVC, retracting to form the columns only during the first 6 weeks or so, perhaps by a process of competition, then closure of one eye may result in a competitive disadvantage of the terminals from that eye, so that they retract more than they would normally. This second possibility has the advantage that it explains the critical period for deprivation effects in the layer IV columns, this being the time after birth during which retraction is completed. It would also explain the greater severity of the changes in the earlier closures, and would provide an interpretation of both cortical and geniculate effects in terms of competition of terminals in layer IVC for territory on postsynaptic cells.
Article
Of the many possible functions of the macaque monkey primary visual cortex (striate cortex, area 17) two are now fairly well understood. First, the incoming information from the lateral geniculate bodies is rearranged so that most cells in the striate cortex respond to specifically oriented line segments, and, second, information originating from the two eyes converges upon single cells. The rearrangement and convergence do not take place immediately, however: in layer IVc, where the bulk of the afferents terminate, virtually all cells have fields with circular symmetry and are strictly monocular, driven from the left eye or from the right, but not both; at subsequent stages, in layers above and below IVc, most cells show orientation specificity, and about half are binocular. In a binocular cell the receptive fields in the two eyes are on corresponding regions in the two retinas and are identical in structure, but one eye is usually more effective than the other in influencing the cell; all shades of ocular dominance are seen. These two functions are strongly reflected in the architecture of the cortex, in that cells with common physiological properties are grouped together in vertically organized systems of columns. In an ocular dominance column all cells respond preferentially to the same eye. By four independent anatomical methods it has been shown that these columns have the form of vertically disposed alternating left-eye and right-eye slabs, which in horizontal section form alternating stripes about 400 μm thick, with occasional bifurcations and blind endings. Cells of like orientation specificity are known from physiological recordings to be similarly grouped in much narrower vertical sheet-like aggregations, stacked in orderly sequences so that on traversing the cortex tangentially one normally encounters a succession of small shifts in orientation, clockwise or counterclockwise; a 1 mm traverse is usually accompanied by one or several full rotations through 180 degrees, broken at times by reversals in direction of rotation and occasionally by large abrupt shifts. A full complement of columns, of either type, left-plus-right eye or a complete 180 degrees sequence, is termed a hypercolumn. Columns (and hence hypercolumns) have roughly the same width throughout the binocular part of the cortex. The two independent systems of hypercolumns are engrafted upon the well known topographic representation of the visual field. The receptive fields mapped in a vertical penetration through cortex show a scatter in position roughly equal to the average size of the fields themselves, and the area thus covered, the aggregate receptive field, increases with distance from the fovea. A parallel increase is seen in reciprocal magnification (the number of degrees of visual field corresponding to 1 mm of cortex). Over most or all of the striate cortex a movement of 1-2 mm, traversing several hypercolumns, is accompanied by a movement through the visual field about equal in size to the local aggregate receptive field. Thus any 1-2 mm block of cortex contains roughly the machinery needed to subserve an aggregate receptive field.
In the cortex the fall-off in detail with which the visual field is analysed, as one moves out from the foveal area, is accompanied not by a reduction in thickness of layers, as is found in the retina, but by a reduction in the area of cortex (and hence the number of columnar units) devoted to a given amount of visual field: unlike the retina, the striate cortex is virtually uniform morphologically but varies in magnification. In most respects the above description fits the newborn monkey just as well as the adult, suggesting that area 17 is largely genetically programmed. The ocular dominance columns, however, are not fully developed at birth, since the geniculate terminals belonging to one eye occupy layer IVc throughout its length, segregating out into separate columns only after about the first 6 weeks, whether or not the animal has visual experience. If one eye is sutured closed during this early period the columns belonging to that eye become shrunken and their companions correspondingly expanded. This would seem to be at least in part the result of interference with normal maturation, though sprouting and retraction of axon terminals are not excluded.
Article
It has generally been assumed that the connections of spinal cord cells are laid down during development and then remain stable throughout adult life. However, it has been shown that dorsal horn cells can be excited in a novel fashion by afferents arriving over distant intact dorsal roots if section of nearby dorsal roots results in degeneration of the afferent sensory fibres that normally activate the cells [1]. Here we show that this plasticity of afferent connections can also be provoked by section of peripheral nerves, in which there is no gross anatomical change of the central terminals of the sectioned fibres, and yet the cells on which the cut nerves end begin to respond to the nearest intact nerves.
Article
The ability of human observers to discriminate the orientation of a pair of straight lines differing by 3 degrees improved with practice. The improvement did not transfer across hemifields or across quadrants within the same hemifield. The practice effect occurred whether or not observers were given feedback. However, orientation discrimination did not improve when observers attended to the brightness rather than the orientation of the lines. This suggests that cognitive set affects tuning in retinally local orientation channels (perhaps by guiding some form of unsupervised learning mechanism) and that retinotopic feature extraction may not be wholly preattentive.
Article
Peripheral axotomy initiates changes in the central primary afferent receiving areas of the dorsal horn of the spinal cord. Most of the presently known changes are degenerative in nature, consisting of cell and axon death or declines in peptides or enzymes. Other changes are regenerative, and because most of these occur in the superficial dorsal horn, where fine primary afferents terminate, we asked whether peripheral axotomy results in a change in the distribution of these fine afferents. Using recently available markers for fine primary afferent axons and small dorsal root ganglion cells, we demonstrate that peripheral axotomy results in a considerable increase in the area immunolabeled for these compounds. Our interpretation is that fine primary afferent fibres may extend into lamina III, and possibly lamina IV, following peripheral axotomy. If further work bears out this conclusion, it would provide a possible explanation for the chronic pain states that sometimes follow peripheral nerve damage.
Article
The adult brain has a remarkable ability to adjust to changes in sensory input. Removal of afferent input to the somatosensory, auditory, motor or visual cortex results in a marked change of cortical topography. Changes in sensory activity can, over a period of months, alter receptive field size and cortical topography. Here we remove visual input with focal binocular retinal lesions, record from the same cortical sites before and within minutes after making the lesion, and find immediate, striking increases in receptive field size for cortical cells with receptive fields near the edge of the retinal scotoma. After a few months, even the cortical areas that were initially silenced by the lesion recover visual activity, representing retinotopic loci surrounding the lesion. At the level of the lateral geniculate nucleus, which provides the visual input to the striate cortex, a large silent region remains. Furthermore, anatomical studies show that the spread of geniculocortical afferents is insufficient to account for the cortical recovery. The results indicate that the topographic reorganization within the cortex was largely due to synaptic changes intrinsic to the cortex, perhaps mediated by the plexus of long-range horizontal connections.
Article
1. Temporal response characteristics of neurons were sampled in fine spatial grain throughout the hand representations in cortical areas 3a and 3b of adult owl monkeys. These monkeys had been trained to detect small differences in tactile stimulus frequencies in the range of 20-30 Hz. Stimuli were presented to an invariant, restricted spot on a single digit.
2. The absolute number of cortical locations and the cortical area over which neurons showed entrained frequency-following responses to behaviorally important stimuli were significantly greater when stimulation was applied to the trained skin than when it was applied to an adjacent control digit or to corresponding skin sites in passively stimulated control animals.
3. Representational maps defined with sinusoidal stimuli were not identical to maps defined with just-visible tapping stimuli. Receptive-field/frequency-following response site mismatches were recorded in every trained monkey; mismatches were recorded less frequently in the representations of control skin surfaces.
4. At cortical locations with entrained responses, neither the absolute firing rates of neurons nor the degree of entrainment of the response was correlated with behavioral discrimination performance.
5. All area 3b cortical locations with entrained responses evoked by stimulation at trained or untrained skin sites were combined to create population peristimulus time and cycle histograms. In all cases, stimulation of the trained skin resulted in (1) larger-amplitude responses, (2) peak responses earlier in the stimulus cycle, and (3) temporally sharper responses than did stimulation applied to control skin sites.
6. The sharpening of the response of cortical area 3b neurons relative to the period of the stimulus could be accounted for by a large subpopulation of neurons with highly coherent responses.
7. Analysis of cycle histograms for area 3b neuron responses revealed that the decreased variance in the representation of each stimulus cycle could account for behaviorally measured frequency discrimination performance. The correlation between these temporal response distributions and discrimination performance for stimuli applied at all studied skin surfaces was even stronger (r = 0.98) when only the rising phases of the cycle histograms were considered in the analysis (a minimal computational sketch of such cycle-histogram measures follows this abstract).
8. The responses of neurons in area 3a could not account for measured differences in frequency discrimination performance.
9. These representational changes did not occur in monkeys that were stimulated on the same schedule but were performing an auditory discrimination task during the skin stimulation.
10. It is concluded that behaviorally training adult owl monkeys to discriminate the temporal features of a tactile stimulus alters the distributed spatial and temporal response properties of cortical neurons. (ABSTRACT TRUNCATED AT 400 WORDS)
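The temporal measures referred to in points 5-7 (cycle histograms and the cycle-by-cycle variance of the response) have standard computational forms. The Python sketch below is a minimal illustration, not the authors' analysis code: it folds spike times onto the stimulus cycle to build a cycle histogram and computes vector strength, one common phase-coherence statistic that may differ from the specific variance measure used in the study. The toy spike train and all parameter values are assumptions.

```python
# Minimal sketch (assumed parameters) of cycle-histogram and phase-coherence
# analysis for spikes recorded during sinusoidal (e.g. 25 Hz) skin stimulation.
import numpy as np

def cycle_histogram(spike_times_s, stim_freq_hz, n_bins=16):
    """Fold spike times onto one stimulus cycle and count spikes per phase bin."""
    phases = (spike_times_s * stim_freq_hz) % 1.0          # phase in [0, 1)
    counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts, edges

def vector_strength(spike_times_s, stim_freq_hz):
    """Phase coherence: 1 = perfectly entrained spikes, 0 = uniformly scattered phases."""
    angles = 2 * np.pi * ((spike_times_s * stim_freq_hz) % 1.0)
    return np.abs(np.exp(1j * angles).mean())

# Toy spike train: spikes clustered ~10 ms into each 40 ms cycle of a 25 Hz stimulus.
rng = np.random.default_rng(0)
spikes = np.sort(rng.normal(loc=0.010, scale=0.002, size=200)
                 + rng.integers(0, 50, 200) * 0.040)
counts, _ = cycle_histogram(spikes, stim_freq_hz=25.0)
print("cycle histogram counts:", counts)
print("vector strength:", round(float(vector_strength(spikes, stim_freq_hz=25.0)), 3))
```

A tightly entrained spike train concentrates counts in a few phase bins and yields a vector strength near 1, which is one way of quantifying the "temporally sharper" responses described for the trained skin.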
Article
1. The responses of cortical neurons evoked by cutaneous stimulation were investigated in the hand representation of cortical area 3a in adult owl monkeys that had been trained in a tactile frequency discrimination task. Cortical representations of the hands in these experimental hemispheres were compared with those representing the opposite, untrained hand, as well as with those representing a passively stimulated hand in a second class of control monkeys.
2. A large cutaneous representation of the hairy and glabrous skin surfaces of the hand emerged in area 3a in each trained hemisphere.
3. With the emergence of cutaneous responses recorded for neurons at many area 3a locations, the normally recorded deep receptor inputs were no longer evident at most of these locations.
4. The territory representing the small area of skin stimulated in the behavioral task was larger in trained monkeys than the representations of corresponding skin sites in the opposite hemisphere of the same monkeys or of equivalent skin sites in passively stimulated control monkeys.
5. There was great variability in the receptive-field properties of neurons responsive to cutaneous inputs among trained monkeys. At most recording sites within the representations of the behaviorally engaged hands, the cutaneous receptive fields were large, extending over a significant part of the glabrous or hairy surfaces of the hand. In one monkey, however, very small, topographically ordered cutaneous receptive fields were recorded over a wide zone of area 3a.
6. The physiologically defined borders between areas 3a and 3b were in register with the cytoarchitectonically defined borders between these two cortical areas in both trained and control monkeys.
7. This study demonstrates a reorganization of the cutaneous and "deep" representation of the hand in cortical area 3a, the main change being the emergence of a large cutaneous representation and the parallel disappearance of a large part of the normal deep representation in this field. These changes are discussed in light of the possible functional roles of cortical area 3a.
Article
The retinotopic map in the visual cortex of adult mammals can reorganize in response to a small injury in a restricted region of retina. Although the mechanisms underlying this neural plasticity in adults are not well understood, it is possible that rapid, adaptive alterations in the effectiveness of existing connections play a key role in the reorganization of cortical topography following peripheral deafferentation. In order to test this hypothesis, a small retinal lesion was made in one eye of adult cats and the visual cortex was mapped before and immediately after enucleating the non-lesioned eye. We found that substantial reorganization takes place within hours of enucleation.
Article
The characteristics of automatized performance resemble those of preattentive processing in some respects. In the context of visual search tasks, these include spatially parallel processing, involuntary calling of attention, learning without awareness, and time-sharing with other tasks. However, this article reports some evidence suggesting that extended practice produces its effects through different mechanisms from those that underlie preattentive processing. The dramatic changes in search rate seem to depend not on the formation of new preattentive detectors for the task-relevant stimuli, nor on learned abstracted procedures for responding quickly and efficiently, but rather on changes that are very specific both to the particular stimuli and to the particular task used in practice. We suggest that the improved performance may depend on the accumulation of separate memory traces for each individual experience of a display (see Logan, 1988), and we show that the traces differ for conjunction search in which stimuli must be individuated and for feature search where a global response to the display is sufficient.
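The instance-trace account cited in the final sentence can be made concrete with a toy race model: each practiced encounter with a display stores a memory trace, and the observed response is the fastest retrieval among all stored traces, so responses speed up with the number of traces roughly as a power function of practice. The Python sketch below is an assumption-laden illustration in that spirit, not Logan's (1988) formal model; the exponential retrieval-time distribution and all parameters are arbitrary choices made only to show the qualitative behavior.

```python
# Toy instance-based race model: RT = fastest retrieval among stored traces.
# Distribution and parameters are illustrative assumptions, not Logan (1988).
import random

def simulated_rt(n_traces, rng, mean_retrieval_ms=800.0):
    """Response time (ms) as the minimum retrieval time over n_traces stored instances."""
    return min(rng.expovariate(1.0 / mean_retrieval_ms) for _ in range(n_traces))

rng = random.Random(0)
for n in (1, 2, 4, 8, 16, 32):
    mean_rt = sum(simulated_rt(n, rng) for _ in range(2000)) / 2000
    print(f"{n:>2} stored instances: mean RT ~ {mean_rt:6.1f} ms")
```

Because only identical (or highly similar) displays add usable traces, such a mechanism naturally predicts the stimulus- and task-specific practice effects reported in the article.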
Article
These experiments examined motor cortical representation patterns after forelimb postural adjustments in rats. The experiments tested the hypothesis that postural adjustments that stretch the muscles most strongly activated from the primary motor cortex (MI) enlarge their cortical representation. Intracortical electrical stimulation within MI, forelimb muscle activity and movements, and vibrissa movements were used to evaluate the border between the MI forelimb and vibrissa representations before and after forelimb position changes in anesthetized adult rats. The forelimb was initially maintained in retraction (wrist extension and elbow flexion) and then changed to protraction (wrist flexion and elbow extension). Movements and forelimb EMG evoked by electrical stimulation were evaluated during this period (up to 3 hr) through a set of four electrodes implanted in layer V of MI. Changing the forelimb configuration had both immediate and delayed effects on forelimb muscle activity evoked from MI. At some sites, the magnitude of evoked forelimb muscle activity increased immediately with forelimb protraction. At one-quarter of all sites, forelimb muscle activity was evoked where it had not previously been detected, after an average delay of 22-31 min following forelimb protraction. This change can be interpreted as an expansion of the forelimb area into the vibrissa representation. These data further support the hypothesis that motor cortical representations are flexible and show that sustained changes in somatic sensory input to MI are sufficient to reorganize MI output.
Article
Large changes in somatotopic organization can be induced in adult primate somatosensory cortex by cutting peripheral afferents. The role, if any, of the thalamus in these changes has not been investigated previously. In the present experiments, electrophysiological recording in the ventroposterior lateral nucleus (VPL) has revealed that not only can reorganization occur in the thalamus, but it may be as extensive as that revealed in the cortex of the same monkeys. Thus, for at least some types of deafferentation, the reorganization revealed in the cortex may depend largely on subcortical changes.