Article

Visual Imagery of Famous Faces: Effects of Memory and Attention Revealed by fMRI


Abstract

Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.


... The lateral PFC is involved in a range of different face-processing tasks including identity recognition (Ishai et al., 2002), working memory for faces (Courtney et al., 1996) and the configural processing of the eyes and mouth (Renzi et al., 2013). Importantly, prior studies have also demonstrated that the lateral PFC is involved in facial expression processing (Gorno-Tempini et al., 2001; Iidaka et al., 2001). ...
... Taken together, these studies demonstrate that TMS disruption of one face-selective area causes remote effects across other nodes of the face-processing network. Having previously targeted face-processing areas in the occipitotemporal cortex (e.g. the OFA and STS), in the present study we disrupted the face-selective area in the right IFG (Ishai et al., 2002; Nikel et al., 2022). ...
... The face-selective area in the IFG has been implicated in a range of different face-processing tasks. These include familiar face recognition (Rapcsak et al., 1996), working memory for faces (Courtney et al., 1997), famous-face recognition (Ishai et al., 2002), processing of information from the eyes (Chan and Downing, 2011) and configural processing of the component parts of faces (e.g. the eyes and mouth) (Renzi et al., 2013). Other studies have demonstrated that the IFG is involved in the top-down control of ventral temporal cortex when recognising faces (Heekeren et al., 2004; Baldauf and Desimone, 2014) and is functionally connected to the amygdala (Davies-Thompson and Andrews, 2012). ...
Article
Full-text available
Recognizing facial expressions is dependent on multiple brain networks specialized for different cognitive functions. In the current study, participants (N = 20) were scanned using functional magnetic resonance imaging (fMRI) while they performed a covert facial expression naming task. Immediately prior to scanning, theta-burst transcranial magnetic stimulation (TMS) was delivered over the right lateral prefrontal cortex (PFC), or over the vertex control site. A group whole-brain analysis revealed that TMS induced opposite effects in the neural responses across different brain networks. Stimulation of the right PFC (compared to stimulation of the vertex) decreased neural activity in the left lateral PFC but increased neural activity in three nodes of the default mode network (DMN): the right superior frontal gyrus, the right angular gyrus and the bilateral middle cingulate gyrus. A region of interest analysis showed that TMS delivered over the right PFC reduced neural activity across all functionally localised face areas (including in the PFC) compared to TMS delivered over the vertex. These results suggest that visually recognizing facial expressions is dependent on the dynamic interaction of the face-processing network and the DMN. Our study also demonstrates the utility of combined TMS/fMRI studies for revealing the dynamic interactions between different functional brain networks.
... One area identified in neural models of face processing is in the lateral prefrontal cortex (Chan 2013). Studies of both humans and nonhuman primates report face-selective neural activity in the lateral prefrontal cortex (Haxby et al. 1995;Haxby et al. 1996;Scalaidhe et al. 1997;Ishai et al. 2002;Tsao et al. 2008;Chan and Downing 2011;Shepherd and Freiwald 2018) but how the lateral prefrontal cortex interacts with face-selective areas in the occipitotemporal cortex remains unclear. In the current study, we compared the neural response to faces in the lateral prefrontal cortex with that observed in the more commonly studied face-selective areas in the occipitotemporal cortex. ...
... Neuroimaging studies of face processing have also demonstrated that the lateral prefrontal cortex is involved in the top-down control of ventral temporal cortex when recognizing faces (Heekeren et al. 2004; Baldauf and Desimone 2014). In addition, the lateral prefrontal cortex has been implicated in familiar face recognition (Rapcsak et al. 1996), working memory for faces (Courtney et al. 1996, 1997), famous-face recognition (Ishai et al. 2002), processing of information from the eyes (Chan and Downing 2011), and configural processing of the component parts of faces (e.g. the eyes and mouth) (Renzi et al. 2013). Such a broad range of different face-processing functions suggests that the lateral prefrontal cortex may engage with other face-processing areas depending on the specific requirements of the face-processing task being performed. ...
... It has been proposed that right frontal activity may be associated with the maintenance of a simple, icon-like image of the face, whereas left frontal activity represents a more elaborate face representation that is created after longer retention delays and is more easily maintained (Haxby et al. 1995). Regions in the frontal gyrus were found to be activated during visual imagery of faces but not during face perception (Ishai et al. 2002). During visual imagery, the frontal regions exert top-down control for generating and maintaining visual images of faces. ...
Article
Full-text available
Neuroimaging studies identify multiple face-selective areas in the human brain. In the current study we compared the functional response of the face area in the lateral prefrontal cortex to that of other face-selective areas. In Experiment 1 participants (n = 32) were scanned viewing videos containing faces, bodies, scenes, objects, and scrambled objects. We identified a face-selective area in the right inferior frontal gyrus (rIFG). In Experiment 2 participants (n = 24) viewed the same videos or static images. Results showed that the rIFG, right posterior superior temporal sulcus (rpSTS) and right occipital face area (rOFA) exhibited a greater response to moving than static faces. In Experiment 3 participants (n = 18) viewed face videos in the contralateral and ipsilateral visual fields. Results showed that the rIFG and rpSTS showed no visual field bias, while the rOFA and right fusiform face area (rFFA) showed a contralateral bias. These experiments suggest two conclusions: firstly, in all three experiments the face area in the IFG was not as reliably identified as face areas in the occipitotemporal cortex. Secondly, the similarity of the response profiles in the IFG and pSTS suggests the areas may perform similar cognitive functions, a conclusion consistent with prior neuroanatomical and functional connectivity evidence.
... In humans, neuroimaging has defined a critical node in face processing within the VTC: the fusiform face area (FFA) (Kanwisher et al., 1997). fMRI studies support both a sensory ("bottom-up") as well as a cognitive ("top-down") role for the FFA, which is activated not only when subjects view faces, but also when they expect to see a face (Puri et al., 2009;Bollinger et al., 2010), perform imagery tasks involving faces (O'Craven and Kanwisher, 2000;Ishai et al., 2002), and hold face representations in working memory (Ranganath et al., 2004). Activation of category-selective regions of VTC can predict recall of items in that category (Polyn et al., 2005;Norman et al., 2017). ...
... Face-selective units were recorded from 3 of 4 of this subset of subjects, and small numbers of place-selective units were recorded from 2 of 4 (also see Fig. 9A). Guided by prior fMRI results (O'Craven and Kanwisher, 2000; Ishai et al., 2002) and bolstered by single-unit findings in the medial temporal cortex (Kreiman et al., 2000), we first sought to identify whether face-selective units were reactivated during visual imagery of faces. We examined mean firing rates of VTC units within 2 s of the verbal recall event, reasoning that activity associated with visualization most likely occurred in this window. ...
... The slope is significantly different from zero (p < 0.05), as expected. ... face areas to face recall and imagery (O'Craven and Kanwisher, 2000; Ishai et al., 2002; Norman et al., 2017). We have previously shown that reinstatements during free recall are associated with sharp wave ripples in hippocampus (Norman et al., 2019); however, further research is needed to characterize the representational content that is being reinstated (i.e., low-level pictorial representations or high-level semantic features). ...
Article
Full-text available
Research in functional neuroimaging has suggested that category-selective regions of visual cortex, including the ventral temporal cortex (VTC), can be reactivated endogenously through imagery and recall. Face representation in the monkey face-patch system has been well studied and is an attractive domain in which to explore these processes in humans. The VTCs of 8 human subjects (4 female) undergoing invasive monitoring for epilepsy surgery were implanted with microelectrodes. Most (26 of 33) category-selective units showed specificity for face stimuli. Different face exemplars evoked consistent and discriminable responses in the population of units sampled. During free recall, face-selective units preferentially reactivated in the absence of visual stimulation during a 2 s window preceding face recall events. Furthermore, we show that in at least 1 subject, the identity of the recalled face could be predicted by comparing activity preceding recall events to activity evoked by visual stimulation. We show that face-selective units in the human VTC are reactivated endogenously, and present initial evidence that consistent representations of individual face exemplars are specifically reactivated in this manner. SIGNIFICANCE STATEMENT The role of “top-down” endogenous reactivation of native representations in higher sensory areas is poorly understood in humans. We conducted the first detailed single-unit survey of ventral temporal cortex (VTC) in human subjects, showing that, similarly to nonhuman primates, humans encode different faces using different rate codes. Then, we demonstrated that, when subjects recalled and imagined a given face, VTC neurons reactivated with the same rate codes as when subjects initially viewed that face. This suggests that the VTC units not only carry durable representations of faces, but that those representations can be endogenously reactivated via “top-down” mechanisms.
... The lateral PFC is involved in a range of different face processing tasks including identity recognition (Ishai et al., 2002), working memory for faces (Courtney et al., 1996) and the configural processing of the eyes and mouth (Renzi et al., 2013). Importantly, prior studies have also demonstrated that the lateral PFC is involved in facial expression processing (Gorno-Tempini et al., 2001;Iidaka et al., 2001). ...
... Based on these studies we chose to disrupt the PFC with TMS while participants performed a facial expression naming task in the fMRI scanner. TMS was delivered over the inferior frontal gyrus (IFG), a region of the lateral PFC that has been implicated in a range of face processing tasks (Chan, 2013;Ishai et al., 2002). We chose to target the right IFG because our prior study demonstrated that face-selective activity can be more reliably identified in the right hemisphere (Nikel et al., 2022). ...
Preprint
Full-text available
Recognizing facial expressions is dependent on multiple brain networks specialized for different cognitive functions. In the current study, participants (N = 20) were scanned using functional magnetic resonance imaging (fMRI) while they performed a covert facial expression naming task. Immediately prior to scanning, theta-burst transcranial magnetic stimulation (TMS) was delivered over the right lateral prefrontal cortex (PFC), or over the vertex control site. A group whole-brain analysis revealed that TMS induced opposite effects in the neural responses across different brain networks. Stimulation of the right PFC (compared to stimulation of the vertex) decreased neural activity in the left lateral PFC but increased neural activity in three nodes of the default mode network (DMN): the right superior frontal gyrus (SFG), the right angular gyrus and the bilateral middle cingulate gyrus. A region of interest (ROI) analysis showed that TMS delivered over the right PFC reduced neural activity across all functionally localised face areas (including in the PFC) compared to TMS delivered over the vertex. These results causally demonstrate that visually recognizing facial expressions is dependent on the dynamic interaction of the face-processing network and the DMN. Our study also demonstrates the utility of combined TMS/fMRI studies for revealing the dynamic interactions between different functional brain networks.
... A face-responsive area in the inferior frontal cortex was first reported in humans using functional brain imaging and in monkeys using single-unit recording (13-18). Further reports of this area followed in fMRI studies in humans (12, 19-22) and monkeys (20, 23). The human neuroimaging studies have found this area to be face-responsive using perceptual matching of different views of the same identity (13), face working memory (14, 15, 17), retrieval from long-term memory (16), imagery from long-term memory (19), repetition-suppression (24), release from adaptation (26), and functional localizers with dynamic face stimuli (21, 22). ...
... Further reports of this area followed in fMRI studies in humans (12, 19-22) and monkeys (20, 23). The human neuroimaging studies have found this area to be face-responsive using perceptual matching of different views of the same identity (13), face working memory (14, 15, 17), retrieval from long-term memory (16), imagery from long-term memory (19), repetition-suppression (24), release from adaptation (26), and functional localizers with dynamic face stimuli (21, 22). The existence of face-selective neurons in the inferior frontal cortex was also shown in a human patient with implanted electrodes who reported face-related hallucinations after direct stimulation in prefrontal cortex (27). ...
Preprint
Full-text available
Neural models of a distributed system for face perception implicate a network of regions in the ventral visual stream for recognition of identity. Here, we report an fMRI neural decoding study in humans that shows that this pathway culminates in a right inferior frontal cortex face area (rIFFA) with a representation of individual identities that has been disentangled from variable visual features in different images of the same person. At earlier stages in the pathway, processing begins in early visual cortex and the occipital face area (OFA) with representations of head view that are invariant across identities, and proceeds to an intermediate level of representation in the fusiform face area (FFA) in which identity is emerging but still entangled with head view. Three-dimensional, view-invariant representation of identities in the rIFFA may be the critical link to the extended system for face perception, affording activation of person knowledge and emotional responses to familiar faces. Significance Statement In this fMRI decoding experiment, we address how face images are processed in successive stages to disentangle the view-invariant representation of identity from variable visual features. Representations in early visual cortex and the occipital face area distinguish head views, invariant across identities. An intermediate level of representation in the fusiform face area distinguishes identities but still is entangled with head view. The face-processing pathway culminates in the right inferior frontal area with representation of view-independent identity. This paper clarifies the homologies between the human and macaque face processing systems. The findings show further, however, the importance of the inferior frontal cortex in decoding face identity, a result that has not yet been reported in the monkey literature.
... Regions, model complexity and ROI size were selected based on prior literature [44, 69-71]. A generalized linear model (GLM) was used to regress 6 head motion parameters (3 translational and 3 rotational), white matter and cerebrospinal fluid signals from the preprocessed data. ...
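The confound-removal step quoted above is a standard ordinary-least-squares GLM: fit the nuisance regressors to each voxel's time series and keep the residuals. A minimal sketch with NumPy (array shapes and the toy data are hypothetical illustrations, not taken from the cited study):

```python
import numpy as np

def regress_nuisance(bold, confounds):
    """Remove nuisance signals from voxel time series via an OLS GLM,
    returning the residuals.

    bold      : (T, V) array, T time points x V voxels
    confounds : (T, K) array, e.g. 6 head-motion parameters plus
                white-matter and CSF mean signals (K = 8)
    """
    T = bold.shape[0]
    # Design matrix: intercept column plus the confound regressors
    X = np.column_stack([np.ones(T), confounds])
    # OLS fit, solved stably via least squares
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    # Residuals = data with the fitted nuisance components removed
    return bold - X @ beta

# Toy example: 200 time points, 10 voxels, 8 confounds
rng = np.random.default_rng(0)
confounds = rng.standard_normal((200, 8))
bold = confounds @ rng.standard_normal((8, 10)) + rng.standard_normal((200, 10))
cleaned = regress_nuisance(bold, confounds)
```

By construction the residuals are orthogonal to every confound column, which is the property such denoising relies on before computing connectivity.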
Article
Full-text available
Visual alterations under classic psychedelics can include rich phenomenological accounts of eyes-closed imagery. Preclinical evidence suggests agonism of the 5-HT2A receptor may reduce synaptic gain to produce psychedelic-induced imagery. However, this has not been investigated in humans. To infer the directed connectivity changes to visual connectivity underlying psychedelic visual imagery in healthy adults, a double-blind, randomised, placebo-controlled, cross-over study was performed, and dynamic causal modelling was applied to the resting state eyes-closed functional MRI scans of 24 subjects after administration of 0.2 mg/kg of the serotonergic psychedelic drug, psilocybin (magic mushrooms), or placebo. The effective connectivity model included the early visual area, fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus. We observed a pattern of increased self-inhibition of both early visual and higher visual-association regions under psilocybin that was consistent with preclinical findings. We also observed a pattern of reduced inhibition from visual-association regions to earlier visual areas that indicated top-down connectivity is enhanced during visual imagery. The results were analysed with behavioural measures taken immediately after the scans, suggesting psilocybin-induced decreased sensitivity to neural inputs is associated with the perception of eyes-closed visual imagery. The findings inform our basic and clinical understanding of visual perception. They reveal neural mechanisms that, by affecting balance, may increase the impact of top-down feedback connectivity on perception, which could contribute to the visual imagery seen with eyes-closed during psychedelic experiences.
... This result may indicate that a propensity toward visual imagery produces greater recruitment of visual regions for memory retrieval tasks in general. This is consistent with previous research that has demonstrated a connection between visual imagery and face perception in occipital activation (Ishai, 2002; Slotnick et al., 2012). We found activations particularly in early visual cortex; V2 and V3 activations occurred in both the left and right hemispheres. ...
Article
Full-text available
Conscious experience and perception are restricted to a single perspective. Although there is evidence to suggest that differences in phenomenal experience can produce observable differences in behavior, it is not well understood how these differences might influence memory. We used fMRI to scan n = 49 participants while they encoded and performed a recognition memory test for faces and words. We calculated a cognitive bias score reflecting individual participants’ propensity toward either Visual Imagery or Internal Verbalization based on their responses to the Internal Representations Questionnaire (IRQ). Neither visual imagery nor internal verbalization scores were significantly correlated with memory performance. In the fMRI data, there were typical patterns of activation differences between words and faces during both encoding and retrieval. There was no effect of internal representation bias on fMRI activation during encoding. At retrieval, however, a bias toward visualization was positively correlated with memory-related activation for both words and faces in inferior occipital gyri. Further, there was a crossover interaction in a network of brain regions such that visualization bias was associated with greater activation for words and verbalization bias was associated with greater activation for faces, consistent with increased effort for non-preferred stimulus retrieval. These findings suggest that individual differences in cognitive representations affect neural activation across different types of stimuli, potentially affecting memory retrieval performance.
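The cognitive bias score described above contrasts the two IRQ subscales per participant. A minimal sketch under stated assumptions (the item lists and the difference-of-means scoring below are hypothetical illustrations; the published IRQ has its own factor structure and scoring):

```python
from statistics import mean, pstdev

def irq_bias(visual_ratings, verbal_ratings):
    """Per-participant cognitive-bias score: positive values lean toward
    Visual Imagery, negative toward Internal Verbalization.

    Each argument is a list of per-participant lists of Likert ratings
    for one (hypothetical) subscale.
    """
    # Raw bias: mean imagery rating minus mean verbalization rating
    raw = [mean(v) - mean(w) for v, w in zip(visual_ratings, verbal_ratings)]
    # z-score across the sample so the bias is relative to the group
    mu, sd = mean(raw), pstdev(raw)
    return [(r - mu) / sd for r in raw]

# Three hypothetical participants, 1-5 Likert ratings per item
visual = [[5, 4, 5], [3, 3, 2], [2, 1, 2]]
verbal = [[2, 1, 2], [3, 3, 4], [5, 5, 4]]
scores = irq_bias(visual, verbal)  # first participant imagery-biased
```

Standardizing the difference score is one common choice for entering such a bias as a covariate in a group-level fMRI model.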
... Visual imagination, akin to mental imagery, involves the mental generation and manipulation of visual representations, independent of external visual input. It engages very similar neural processes and regions to visual imagery (occipital and temporal lobes, prefrontal cortex; Beaty et al., 2016; Ishai et al., 2002). The main distinction is that visual imagery typically involves the recreation of visual experiences that are based on memory and past experiences (individuals often "see" specific images or scenes that they have previously encountered or are familiar with), whereas visual imagination typically involves the creation of novel images or scenarios that may not have a direct basis in past experiences (allowing for the generation of new, sometimes fantastical or abstract, visual constructs that extend beyond personal experience). ...
Preprint
Full-text available
Background: This case study investigated differences in brain electrical activity between two laboratory conditions in an individual who reports a subjective experience of a phenomenon he calls "upsight." The individual describes upsight as the capacity to perceive at will holographic images as though they appear on an inset screen that overlays his ordinary visual field, with eyes open or closed. Methods: The individual alternated 200 times between 30-second epochs of a control condition (recalling mentally an image he had seen previously) and the upsight (seeing the image on the internal "screen") condition while 64-channel EEG
... The calcarine cortex, as part of the visual network, is involved in facial and object recognition and is associated with long-term memory. Access to visual information is a precondition for attention, so as the visual cortex's central hub, the occipital lobe is essential for executive function, attention, vigilance, and memory (Ishai et al., 2002). Although overt vision disorders are uncommon in TLE patients, visual perception deficits, abnormalities in the visual network, and their impact on memory, cognition, and sociability have been confirmed in previous studies (Alessio et al., 2013; Cataldi et al., 2013; Zhang et al., 2009). ...
Article
Full-text available
We aimed to comprehensively investigate the potential temporal dynamic and static abnormalities of spontaneous brain activity (SBA) in left temporal lobe epilepsy (LTLE) and right temporal lobe epilepsy (RTLE) and to detect whether these alterations correlate with cognition. Twelve SBA metrics, including ALFF, dALFF, fALFF, dfALFF, ReHo, dReHo, DC, dDC, GSCorr, dGSCorr, VMHC, and dVMHC, in 46 LTLE patients, 43 RTLE patients, and 53 healthy volunteers were compared in the voxel-wise analysis. Correlation analyses between metrics in regions showing statistic differences and epilepsy duration, epilepsy severity, and cognition scores were also performed. Compared with the healthy volunteers, the alteration of SBA was identified both in LTLE and RTLE patients. The ALFF, fALFF, and dALFF values in LTLE, as well as the fALFF values in RTLE, increased in the bilateral thalamus, basal ganglia, mesial temporal lobe, cerebellum, and vermis. Increased dfALFF in the bilateral basal ganglia, increased ReHo and dReHo in the bilateral thalamus in the LTLE group, increased ALFF and dALFF in the pons, and increased ReHo and dReHo in the right hippocampus in the RTLE group were also detected. However, the majority of deactivation clusters were in the ipsilateral lateral temporal lobe. For LTLE, the fALFF, DC, dDC, and GSCorr values in the left lateral temporal lobe and the ReHo and VMHC values in the bilateral lateral temporal lobe all decreased. For RTLE, the ALFF, fALFF, dfALFF, ReHo, dReHo, and DC values in the right lateral temporal lobe and the VMHC values in the bilateral lateral temporal lobe all decreased. Moreover, for both the LTLE and RTLE groups, the dVMHC values decreased in the calcarine cortex. The most significant difference between LTLE and RTLE was the higher activation in the cerebellum of the LTLE group. The alterations of many SBA metrics were correlated with cognition and epilepsy duration.
The patterns of change in SBA abnormalities in the LTLE and RTLE patients were generally similar. The integrated application of temporal dynamic and static SBA metrics might aid in the investigation of the propagation and suppression pathways of seizure activity as well as the cognitive impairment mechanisms in TLE.
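Of the metrics compared above, ALFF and fALFF are the most directly computable: ALFF sums the amplitude spectrum of a voxel's BOLD series within the low-frequency band (conventionally 0.01-0.08 Hz), and fALFF normalizes that sum by the amplitude over the whole spectrum. A minimal per-voxel sketch with NumPy (the TR and test signal are hypothetical, not from the cited study):

```python
import numpy as np

def alff_falff(ts, tr, band=(0.01, 0.08)):
    """ALFF and fALFF for one voxel's BOLD time series.

    ts : 1-D time series; tr : repetition time in seconds.
    ALFF  = summed spectral amplitude inside the low-frequency band
    fALFF = that sum divided by the amplitude over the whole spectrum
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                    # remove the DC offset
    amp = np.abs(np.fft.rfft(ts))          # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].sum()
    falff = alff / amp.sum()
    return alff, falff

# A slow 0.05 Hz oscillation sampled every 0.5 s for 200 s
t = np.arange(400) * 0.5
alff, falff = alff_falff(np.sin(2 * np.pi * 0.05 * t), tr=0.5)
```

For a pure in-band oscillation, fALFF approaches 1; adding out-of-band signal lowers fALFF while leaving ALFF essentially unchanged, which is why fALFF is preferred when high-frequency noise varies across subjects.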
... Consistent with this result, we have further demonstrated that both the IFG and pSTS exhibit an equal response to faces in both visual fields, while the OFA and FFA exhibit a contralateral bias (Nikel et al., 2022; Pitcher et al., 2020). The IFG has been implicated in face working memory studies (Courtney et al., 1996), famous face recognition (Ishai, 2002) and in preferentially processing information from the eyes (Chan & Downing, 2011). The greater degree of laterality we observed in the right IFG and pSTS is interesting; it suggests that the two regions may be functionally connected to perform similar cognitive functions and warrants further investigation. ...
Preprint
Functional magnetic resonance imaging (fMRI) studies have identified a network of face-selective regions distributed across the human brain. In the present study, we analyzed data from a large group of gender-balanced participants to investigate how reliably these face-selective regions could be identified across both cerebral hemispheres. Participants (N=52) were scanned with fMRI while viewing short videos of faces, bodies, and objects. Results revealed that five face-selective regions: the fusiform face area (FFA), posterior superior temporal sulcus (pSTS), anterior superior temporal sulcus (aSTS), inferior frontal gyrus (IFG) and the amygdala were all larger in the right than in the left hemisphere. The occipital face area (OFA) was larger in the right hemisphere as well, but the difference between the hemispheres was not significant. The neural response to moving faces was also greater in face-selective regions in the right than in the left hemisphere. An additional analysis revealed that the pSTS and IFG were significantly larger in the right hemisphere compared to other face-selective regions. This pattern of results demonstrates that moving faces are preferentially processed in the right hemisphere and that the pSTS and IFG appear to be the strongest drivers of this laterality. An analysis of gender revealed that face-selective regions were typically larger in females (N=26) than males (N=26), but this gender difference was not statistically significant.
... This result may indicate that a propensity toward visual imagery produces greater recruitment of visual regions for memory retrieval tasks in general. This is consistent with previous research that has demonstrated a connection between visual imagery and face perception in occipital activation (Ishai, 2002;Slotnick et al., 2012). Greater activation in these areas for memory retrieval may suggest that individuals with a bias toward visual imagery are more likely to recruit occipital areas as they reconstruct a memory in a visual form regardless of the task format itself. ...
Preprint
Conscious experience and perception are restricted to a single perspective. There is evidence to suggest that differences in phenomenal experience can produce observable differences in behavior; however, it is not well understood how these differences might influence memory. We used fMRI to scan n = 49 participants while they encoded and performed a recognition memory test for faces and words. We calculated a cognitive bias score reflecting individual participants' propensity toward either Visual Imagery or Internal Verbalization based on their responses to the Internal Representations Questionnaire (IRQ). We found weak positive correlations between memory performance for faces and a bias toward visual imagery, and between memory performance for words and a bias toward internal verbalization. There were typical patterns of activation differences between words and faces during both encoding and retrieval. There was no effect of internal representation bias on fMRI activation during encoding. At retrieval, however, a bias toward visualization was positively correlated with memory-related activation for both words and faces in inferior occipital gyri. Further, there was a crossover interaction in a network of brain regions such that visualization bias was associated with greater activation for words and verbalization bias was associated with greater activation for faces, consistent with increased effort for non-preferred stimulus retrieval. These findings suggest that individual differences in cognitive representations affect neural activation across different types of stimuli, potentially affecting memory retrieval performance.
... Visual imagery is important for a wide range of everyday tasks involving the veridical recall of previous experiences, such as the interpretation of language (Bergen et al., 2007), the mental simulation of routes in navigation (Ghaem et al., 1997), the recollection of faces (Ishai et al., 2002; O'Craven & Kanwisher, 2000), and the reliving of past events (Libby et al., 2007; Moulton & Kosslyn, 2009). While it is generally adaptive to recall the specifics of past events, problematic vivid visual recall (Schacter's (2013) "sin of persistence") has also been reported in psychological disorders such as obsessive-compulsive disorder, posttraumatic stress disorder, depression, and eating disorders (Holmes et al., 2007, 2016), as well as in their treatment through imaginal exposure and imaginal rescripting in cognitive behavioral therapy (Holmes et al., 2007; Pearson et al., 2015). ...
Article
Full-text available
Visual imagery vividness (VIV) quantifies how clearly people can “conjure up” mental images. A higher VIV reflects a stronger image, which might be considered an important source of inspiration in creative production. However, despite numerous anecdotes documenting such a connection, a clear empirical relationship has remained elusive. We argue that (a) a misunderstanding of visual imagery as unidimensional and (b) an overreliance on Marks’ Vividness of Visual Imagery Questionnaire (VVIQ) are responsible. Based on both the proximal/distal imagination framework and the distinction between the ventral/dorsal visual pathways, we propose a new Multifactorial Model of Visual Imagery (MMVI). This argues that visual imagery is multidimensional and that only certain dimensions are related to creativity: inventive combinatorial ability, storyboarding, and conceptual expansion (all distal), together with the quasi-eidetic recall of detailed images (proximal). Turning to the VVIQ, a factor analysis of 280 responses in Study 1 yielded a three-factor solution (all proximal): episodic/autobiographical imagery, schematic recall, and controlled animation. None of these factors overlap with the creative dimensions of the MMVI. In Study 2, 133 participants had to remember nonverbalizable details of unfamiliar pictures for later recall; performance on this quasi-eidetic task again did not correlate with any VVIQ factors. We have thus demonstrated that the VVIQ is not unidimensional and that none of its factors appear suitable for probing imagery-creativity connections. The MMVI model is currently theoretical, and future research should confirm its validity, permitting a new, better targeted measure of VIV to be established that fully reflects its multidimensionality.
... Regions, model complexity and ROI size were selected based on prior literature (N. Dijkstra et al., 2017;Ishai, Haxby, & Ungerleider, 2002;Kalkstein, Checksfield, Jacob Bollinger, & Gazzaley, 2011;Reddy, Tsuchiya, & Serre, 2010). A generalized linear model (GLM) was used to regress 6 head motion parameters (3 translational and 3 rotational), white matter and cerebrospinal fluid signals from preprocessed data. ...
Preprint
Full-text available
Visual alterations under classic psychedelics can include rich phenomenological accounts of eyes-closed imagery. Preclinical evidence suggests agonism of the 5-HT2A receptor may reduce synaptic gain to produce psychedelic-induced imagery. However, this has not been investigated in humans. To infer the directed connectivity changes to visual sensory connectivity underlying psychedelic visual imagery in healthy adults, a double-blind, randomised, placebo-controlled, cross-over study was performed, and dynamic causal modelling was applied to the resting state eyes-closed functional MRI scans of 24 subjects after administration of 0.2 mg/kg of the serotonergic psychedelic drug, psilocybin (magic mushrooms), or placebo. The effective connectivity model included the early visual area, fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus. We observed a pattern of increased self-inhibition of both early visual and higher visual-association regions under psilocybin that was consistent with preclinical findings. We also observed a pattern of reduced inhibition from visual-association regions to earlier visual areas that indicated top-down connectivity is enhanced during visual imagery. The results were associated with behavioural measures taken immediately after the scans, suggesting psilocybin-induced decreased sensitivity to neural inputs is associated with the perception of eyes-closed visual imagery. The findings inform our basic and clinical understanding of visual perception. They reveal neural mechanisms that, by affecting balance, may increase the impact of top-down feedback connectivity on perception, which could contribute to the visual imagery seen with eyes-closed during psychedelic experiences.
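The nuisance-regression step described in the earlier excerpt (regressing six head-motion parameters plus white-matter and CSF signals out of preprocessed data with a GLM) amounts to an ordinary-least-squares projection. A minimal NumPy sketch, with hypothetical function and variable names and toy data standing in for real BOLD series:

```python
import numpy as np

def regress_nuisance(data, confounds):
    """Remove nuisance signals from voxel time series via OLS.

    data:      (T, V) array of preprocessed BOLD time series
    confounds: (T, K) array, e.g. 6 motion parameters plus
               white-matter and CSF mean signals (K = 8)
    Returns the (T, V) residuals after projecting out the
    confounds and an intercept column.
    """
    T = data.shape[0]
    X = np.column_stack([np.ones(T), confounds])  # design with intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta                        # residual time series

# Toy example (ours): 100 volumes, 10 voxels, 8 nuisance regressors
rng = np.random.default_rng(0)
confounds = rng.standard_normal((100, 8))
data = confounds @ rng.standard_normal((8, 10)) \
       + 0.1 * rng.standard_normal((100, 10))
cleaned = regress_nuisance(data, confounds)
```

By construction the residuals are orthogonal to every confound column, which is the property the denoising step relies on.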
... Visual network (VIN) was part of the sensory cortex systems and mainly located in the occipital lobe, including CAL, LG, cuneus, and so forth. As the center of the visual cortex, the occipital lobe exhibited a range of visual functions, such as vision processing and visual memory encoding (Mechelli et al., 2000), and was related to executive function and attention (Ishai et al., 2002) because the acquisition of visual information was a prerequisite for attention. Memory and alertness function depend on sensory-perceptual information processing, such as vision and audition (Conway, 2001). ...
Article
Full-text available
Background Temporal lobe epilepsy (TLE) is the most prevalent refractory focal epilepsy and is more likely accompanied by cognitive impairment. A full understanding of the neuronal activity underlying TLE is of great significance. Objective This study aimed to comprehensively explore the potential brain activity abnormalities affected by TLE and detect whether the changes were associated with cognition. Methods Six static intrinsic brain activity (IBA) indicators [amplitude of low-frequency fluctuation (ALFF), fractional ALFF (fALFF), regional homogeneity (ReHo), degree centrality (DC), global signal correlation (GSCorr), and voxel-mirrored homotopic connectivity (VMHC)] and their corresponding dynamic indicators, such as dynamic ALFF (dALFF), dynamic fALFF (dfALFF), dynamic ReHo (dReHo), dynamic DC (dDC), dynamic VMHC (dVMHC), and dynamic GSCorr (dGSCorr), in 57 patients with unilateral TLE and 42 healthy volunteers were compared. Correlation analyses were also performed between these indicators in areas displaying group differences and cognitive function, epilepsy duration, and severity. Results Marked overlap was present among the abnormal brain regions detected using various static and dynamic indicators, primarily including increased ALFF/dALFF/fALFF in the bilateral medial temporal lobe and thalamus, decreased ALFF/dALFF/fALFF in the frontal lobe contralateral to the epileptogenic side, decreased fALFF, ReHo, dReHo, DC, dDC, GSCorr, dGSCorr, and VMHC in the temporal neocortex ipsilateral to the epileptogenic foci, decreased dReHo, dDC, dGSCorr, and dVMHC in the occipital lobe, and increased ALFF, fALFF, dfALFF, ReHo, and DC in the supplementary motor area ipsilateral to the epileptogenic foci. Furthermore, most IBA indicators in the abnormal brain region significantly correlated with the duration of epilepsy and several cognitive scale scores (P < 0.05).
Conclusion The combined application of static and dynamic IBA indicators could reveal abnormal neuronal activity more comprehensively, along with the impairment and compensatory mechanisms of cognitive function in TLE. Moreover, it might help in the lateralization of epileptogenic foci and exploration of the transmission and inhibition pathways of epileptic activity.
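The ALFF and fALFF indicators compared in this study are simple spectral summaries of a voxel's BOLD time series: the mean FFT amplitude within the low-frequency band (typically 0.01 to 0.08 Hz), optionally normalized by the amplitude across the whole frequency range. A minimal NumPy sketch under those conventional band limits (the function name and toy signal are ours, not the paper's):

```python
import numpy as np

def alff_falff(ts, tr, low=0.01, high=0.08):
    """Compute ALFF and fALFF for a single voxel time series.

    ALFF  = mean FFT amplitude within the low-frequency band.
    fALFF = band amplitude divided by total amplitude across
            all frequencies (a normalized, noise-robust variant).
    ts : 1-D BOLD time series; tr : repetition time in seconds.
    """
    ts = ts - ts.mean()                      # remove DC offset
    amp = np.abs(np.fft.rfft(ts))
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    band = (freqs >= low) & (freqs <= high)
    alff = amp[band].mean()
    falff = amp[band].sum() / amp.sum()
    return alff, falff

# Toy check: a pure 0.05 Hz oscillation sampled at TR = 2 s
t = np.arange(200) * 2.0                     # 200 volumes, 400 s total
slow = np.sin(2 * np.pi * 0.05 * t)
alff, falff = alff_falff(slow, tr=2.0)
```

For a signal whose power lies entirely inside the band, fALFF approaches 1, which is the sanity check the toy example exercises.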
... Remarkably, a comparison of lateralization results found in the fMRI literature revealed one obvious pattern: inconsistency for all three face sensitive areas. For each area there are several studies showing clear right-hemispheric lateralization (e.g., Frässle et al., 2016c; Ishai et al., 2005; Rhodes et al., 2004; Rossion et al., 2012), while others found inconclusive results (e.g., Haxby et al., 1999; Yovel et al., 2008) or even bilateral activity without a significant lateralization effect (Canário et al., 2020; De Winter et al., 2015; Ishai et al., 2002). In summary and in line with our results, face sensitive areas in the core system only show a gentle tendency towards the right hemisphere. ...
Article
Full-text available
The neural face perception network is distributed across both hemispheres. However, the dominant role in humans is virtually unanimously attributed to the right hemisphere. Interestingly, there are, to our knowledge, no imaging studies that systematically describe the distribution of hemispheric lateralization in the core system of face perception across subjects in large cohorts so far. To address this, we determined the hemispheric lateralization of all core system regions (i.e., occipital face area (OFA), fusiform face area (FFA), posterior superior temporal sulcus (pSTS)) in 108 healthy subjects using functional magnetic resonance imaging (fMRI). We were particularly interested in the variability of hemispheric lateralization across subjects and explored how many subjects can be classified as right-dominant based on the fMRI activation pattern. We further assessed lateralization differences between different regions of the core system and analyzed the influence of handedness and sex on the lateralization with a generalized mixed effects regression model. As expected, brain activity was on average stronger in right-hemispheric brain regions than in their left-hemispheric homologues. This asymmetry was, however, only weakly pronounced in comparison to other lateralized brain functions (such as language and spatial attention) and strongly varied between individuals. Only half of the subjects in the present study could be classified as right-hemispheric dominant. Additionally, we did not detect significant lateralization differences between core system regions. Our data did also not support a general leftward shift of hemispheric lateralization in left-handers. Only the interaction of handedness and sex in the FFA revealed that specifically left-handed men were significantly more left-lateralized compared to right-handed males. In essence, our fMRI data did not support a clear right-hemispheric dominance of the face perception network. 
Our findings thus ultimately question the dogma that the face perception network – as measured with fMRI – can be characterized as “typically right lateralized”.
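Classifying subjects as right-dominant, left-dominant, or bilateral, as this study does, conventionally rests on a laterality index, LI = (L - R)/(L + R), computed from homologous left and right ROI measures, with a cutoff (often plus or minus 0.2; an assumption here, not necessarily this study's exact criterion) separating the three classes. A minimal sketch with hypothetical function names:

```python
def lateralization_index(left, right):
    """Standard laterality index: positive values = left-dominant.

    left, right: summary activation measures for homologous ROIs,
    e.g. suprathreshold voxel counts or mean t-values.
    """
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

def classify(li, threshold=0.2):
    """Conventional +/-0.2 cutoff (our assumption, not the paper's
    stated criterion): |LI| <= threshold counts as bilateral."""
    if li > threshold:
        return "left-dominant"
    if li < -threshold:
        return "right-dominant"
    return "bilateral"

# e.g. an FFA with 80 suprathreshold voxels on the right, 60 on the left
li = lateralization_index(60, 80)
label = classify(li)
```

Note how a clear numerical right-bias (LI about -0.14) still falls inside the bilateral band, which is exactly why only half the subjects in such analyses end up classified as right-dominant.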
... While it is not a visual area of the brain, the hippocampus is heavily implicated in memory, suggesting that it supports parallel judgments for familiarity and recollection (Squire et al., 2007). Prior fMRI studies have also demonstrated that the neural response to familiar and unfamiliar faces can be dissociated in the hippocampus (Elfgren et al., 2006;Ishai et al., 2002;O'Neil et al., 2013;Platek & Kemp, 2009;Ramon et al., 2015). In addition, neuropsychological studies have shown that cells in the hippocampus increase firing rates in response to familiar faces (Fried et al., 1997) and that removal of the amygdala and the hippocampus can impair face learning (Crane & Milner, 2002). ...
Article
Full-text available
Making new acquaintances requires learning to recognise previously unfamiliar faces. In the current study, we investigated this process by staging real-world social interactions between actors and the participants. Participants completed a face-matching behavioural task in which they matched photographs of the actors (whom they had yet to meet), or faces similar to the actors (henceforth called foils). Participants were then scanned using functional magnetic resonance imaging (fMRI) while viewing photographs of actors and foils. Immediately after exiting the scanner, participants met the actors for the first time and interacted with them for 10 min. On subsequent days, participants completed a second behavioural experiment and then a second fMRI scan. Prior to each session, actors again interacted with the participants for 10 min. Behavioural results showed that social interactions improved performance accuracy when matching actor photographs, but not foil photographs. The fMRI analysis revealed a difference in the neural response to actor photographs and foil photographs across all regions of interest (ROIs) only after social interactions had occurred. Our results demonstrate that short social interactions were sufficient to learn and discriminate previously unfamiliar individuals. Moreover, these learning effects were present in brain areas involved in face processing and memory.
... In contrast, findings on the neural substrates of long-term emotional face recognition have not been well characterized. Human functional magnetic resonance imaging (fMRI) investigations suggest that face processing is supported by distributed neural systems (Haxby et al. 2000), including brain regions processing facial features and identity such as the fusiform face area (FFA) (Kanwisher et al. 1997;McCarthy et al. 1997) and the inferior occipital gyrus (IOG)/occipital face area (OFA) (Gauthier et al. 2000), and those processing social and emotional information (e.g., facial expression) such as the amygdala (Gobbini and Haxby 2007), inferior frontal gyrus (IFG) (Ishai et al. 2002) and orbitofrontal cortex (OFC) (Ishai 2008). Initial studies examining the subsequent memory effects for emotional faces have shown that activation associated with the successful encoding (i.e., remembered > forgotten) of emotional (fearful and happy) versus neutral faces were centered on prefrontal regions such as the IFG, dorsolateral prefrontal cortex (dlPFC) and OFC (Sergerie et al. 2005). ...
Article
Full-text available
Studies demonstrated that faces with threatening emotional expressions are better remembered than non-threatening faces. However, whether this memory advantage persists over years and which neural systems underlie such an effect remains unknown. Here, we employed an individual difference approach to examine whether the neural activity during incidental encoding was associated with differential recognition of faces with emotional expressions (angry, fearful, happy, sad and neutral) after a retention interval of > 1.5 years (N = 89). Behaviorally, we found a better recognition for threatening (angry, fearful) versus non-threatening (happy and neutral) faces after a delay of > 1.5 years, which was driven by forgetting of non-threatening faces compared with immediate recognition after encoding. Multivariate principal component analysis (PCA) on the behavioral responses further confirmed the discriminative recognition performance between threatening and non-threatening faces. A voxel-wise whole-brain analysis on the concomitantly acquired functional magnetic resonance imaging (fMRI) data during incidental encoding revealed that neural activity in bilateral inferior occipital gyrus (IOG) and ventromedial prefrontal/orbitofrontal cortex (vmPFC/OFC) was associated with the individual differences in the discriminative emotional face recognition performance measured by an innovative behavioral pattern similarity analysis (BPSA). The left fusiform face area (FFA) was additionally determined using a regionally focused analysis. Overall, the present study provides evidence that threatening facial expressions lead to persistent face recognition over periods of > 1.5 years, and that differential encoding-related activity in the medial prefrontal cortex and occipito-temporal cortex may underlie this effect.
... For the feature extraction of VMI-BCI, the EEG power spectrum estimation method is mainly used in the existing research [15,18-20,41,42], whereas the HHT method, which can obtain high-resolution information in both the time domain and the frequency domain, is used to extract features in this article. Thus, an SVM was used to classify forward versus reverse, forward versus left turn, forward versus right turn, reverse versus left turn, reverse versus right turn, and left turn versus right turn. ...
Article
Full-text available
A brain–computer interface (BCI) based on kinesthetic motor imagery has a potential of becoming a groundbreaking technology in a clinical setting. However, few studies focus on a visual-motor imagery (VMI) paradigm driving BCI. The VMI-BCI feature extraction methods are yet to be explored in depth. In this study, a novel VMI-BCI paradigm is proposed to execute four VMI tasks: imagining a car moving forward, reversing, turning left, and turning right. These mental strategies can naturally control a car or robot to move forward, backward, left, and right. Electroencephalogram (EEG) data from 25 subjects were collected. After the raw EEG signal baseline was corrected, the alpha band was extracted using bandpass filtering. The artifacts were removed by independent component analysis. Then, the EEG average instantaneous energy induced by VMI (VMI-EEG) was calculated using the Hilbert–Huang transform (HHT). The autoregressive model was extracted to construct a 12-dimensional feature vector to a support vector machine suitable for small sample classification. This was classified into two-class tasks: visual imagination of driving the car forward versus reversing, driving forward versus turning left, driving forward versus turning right, reversing versus turning left, reversing versus turning right, and turning left versus turning right. The results showed that the average classification accuracy of these two-class tasks was 62.68 ± 5.08%, and the highest classification accuracy was 73.66 ± 6.80%. The study showed that EEG features of O1 and O2 electrodes in the occipital region extracted by HHT were separable for these VMI tasks.
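The feature pipeline described in the abstract (instantaneous energy from the Hilbert transform, then a 12-dimensional autoregressive feature vector fed to an SVM) can be sketched in NumPy. The sketch below implements the analytic signal and the AR fit directly; in practice scipy.signal.hilbert and a classifier such as sklearn.svm.SVC would be the usual choices. All function names and the toy sine are ours, not the paper's:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (what scipy.signal.hilbert computes)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

def instantaneous_energy(x):
    """Squared envelope of the analytic signal (HHT-style energy)."""
    return np.abs(analytic_signal(x)) ** 2

def ar_features(x, order=12):
    """Fit an AR(order) model by least squares and return its
    coefficients as a feature vector (cf. the paper's 12-dim AR
    features; the fitting method here is our stand-in)."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Toy check: an alpha-band-like 10 Hz sine sampled at 250 Hz
fs = 250
t = np.arange(fs * 2) / fs
eeg = np.sin(2 * np.pi * 10 * t)
energy = instantaneous_energy(eeg)
feats = ar_features(energy)
```

For a pure sinusoid the envelope is flat, so the instantaneous energy is constant at 1; real VMI-EEG would instead show modulations that the AR coefficients summarize for the classifier.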
... In contrast, findings on the neural substrates of long-term emotional face recognition have not been well characterized. Human functional magnetic resonance imaging (fMRI) investigations suggest that face processing is supported by distributed neural systems (Haxby et al., 2000), including brain regions processing facial features and identity such as the fusiform face area (FFA) (Kanwisher et al., 1997;McCarthy et al., 1997) and inferior occipital gyrus (IOG)/occipital face area (OFA) (Gauthier et al., 2000), and those processing social and emotional information (e.g., facial expression) such as the amygdala (Gobbini & Haxby, 2007), inferior frontal gyrus (IFG) (Ishai et al., 2002) and orbitofrontal cortex (OFC) (Ishai, 2008). Initial studies examining the subsequent memory effects for emotional faces have shown that greater encoding success is associated with higher encoding-related activity (i.e., remembered > forgotten) for emotional (fearful and happy) versus neutral faces in prefrontal regions such as IFG, dorsolateral prefrontal cortex (dlPFC) and OFC (Sergerie et al., 2005). ...
Preprint
Full-text available
Studies demonstrated that faces with threatening emotional expressions are better remembered than non-threatening faces. However, whether this memory advantage persists over years and which neural systems underlie such an effect remains unknown. Here, we employed an individual difference approach to examine whether the neural activity during incidental encoding was associated with differential recognition of faces with emotional expressions (angry, fearful, happy, sad and neutral) after a retention interval of > 1.5 years ( N = 89). Behaviorally, we found a better recognition for threatening (angry, fearful) versus non-threatening (happy and neutral) faces after a delay of > 1.5 years, which was driven by forgetting of non-threatening faces compared with immediate recognition after encoding. Multivariate principal component analysis (PCA) on the behavioral responses further confirmed the discriminative recognition performance between threatening and non-threatening faces. A voxel-wise whole-brain analysis on the concomitantly acquired functional magnetic resonance imaging (fMRI) data during incidental encoding revealed that neural activity in bilateral inferior occipital gyrus (IOG) and ventromedial prefrontal/orbitofrontal cortex (vmPFC/OFC) was associated with the individual differences in the discriminative emotional face recognition performance measured by an innovative behavioral pattern similarity analysis (BPSA) based on inter-subject correlation (ISC). The left fusiform face area (FFA) was additionally determined using a regionally focused analysis. Overall, the present study provides evidence that threatening facial expressions lead to persistent face recognition over periods of > 1.5 years and differential encoding-related activity in the medial prefrontal cortex and occipito-temporal cortex may underlie this effect.
... So far, neuroimaging investigations of the EVC have focused on visual rather than motor imagery, and the results have been controversial, with some studies showing BOLD responses above baseline in the EVC (Chen et al., 1998;O'Craven and Kanwisher, 2000;Dijkstra et al., 2017;Ganis et al., 2004;Ishai et al., 2002;Klein et al., 2000;Lambert et al., 2002;Le Bihan et al., 1993;Sabbah et al., 1995), while others did not (D'Esposito et al., 1997;Formisano et al., 2002;Knauff et al., 2000;Trojano et al., 2000;Wheeler and Petersen, 2000). Regardless of the involvement of the EVC in perceptual imagery, visual imagery content can be decoded from the EVC even when activation is at baseline (Albers et al., 2013;Dijkstra et al., 2017;Koenig-Robert and Pearson, 2019;Naselaris et al., 2015), and most intriguingly, there are common patterns of activity that are shared between perception and visual imagery (Albers et al., 2013; Naselaris et al., 2015). ...
Chapter
Recent evidence shows that the role of the early visual cortex (EVC) goes beyond visual processing and into higher cognitive functions (Roelfsema and de Lange in Annu. Rev. Vis. Sci. 2:131–151, 2016). Further, neuroimaging results indicate that action intention can be predicted based on the activity pattern in the EVC (Gallivan et al. in Cereb. Cortex 29:4662–4678, 2019; Gutteling et al. in J. Neurosci. 35:6472–6480, 2015). Could it just be imagery? Further, can we decode action intention in the EVC based on activity patterns elicited by motor imagery, and vice versa? To answer this question, we explored whether areas implicated in hand actions and imagery tasks have a shared representation for planning and imagining hand movements. We used a slow event-related functional magnetic resonance imaging (fMRI) paradigm to measure the BOLD signal while participants (N = 16) performed or imagined performing actions with the right dominant hand towards an object, which consisted of a small shape attached on a large shape. The actions included grasping the large or small shape, and reaching to the center of the object while fixating a point above the object. At the beginning of each trial, an auditory cue instructed participants about the task (Imagery, Movement) and the action (Grasp large, Grasp small, Reach) to be performed at the end of the trial. After a 10-s delay, which included a planning phase in Movement trials, a go cue prompted the participants to perform or imagine performing the action (Go phase). We used standard retinotopic mapping procedures to localize the retinotopic location of the object in the EVC. Using multi-voxel pattern analysis, we decoded action type based on activity patterns elicited during the planning phase of real actions (Movement task) as well as in the Go phase of the Imagery task in the anterior intraparietal sulcus (aIPS) and in the EVC.
In addition, we decoded imagined actions based on the activity pattern of planned actions (and vice-versa) in aIPS, but not in EVC. Our results suggest a shared representation for planning and imagining specific hand movements in aIPS but not in low-level visual areas. Therefore, planning and imagining actions have overlapping but not identical neural substrates.
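The cross-decoding logic used here (training a classifier on planning-phase patterns and testing it on imagery-phase patterns, and vice versa) can be illustrated with a nearest-centroid classifier standing in for the SVM/LDA classifiers typical in MVPA. The synthetic "voxel" patterns below are ours; above-chance transfer accuracy is what licenses the claim of a shared representation:

```python
import numpy as np

def train_centroids(patterns, labels):
    """One mean pattern per class (a simple MVPA classifier)."""
    classes = np.unique(labels)
    cents = np.array([patterns[labels == c].mean(axis=0) for c in classes])
    return classes, cents

def cross_decode(train_X, train_y, test_X, test_y):
    """Cross-classification: fit on one condition (e.g. planning),
    test on another (e.g. imagery). Returns transfer accuracy."""
    classes, cents = train_centroids(train_X, train_y)
    d = ((test_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == test_y).mean()

# Toy data (ours): 3 action types sharing prototypes across conditions
rng = np.random.default_rng(1)
proto = rng.standard_normal((3, 50))          # 3 actions x 50 "voxels"
y = np.repeat(np.arange(3), 20)
plan = proto[y] + 0.5 * rng.standard_normal((60, 50))
imag = proto[y] + 0.5 * rng.standard_normal((60, 50))
acc = cross_decode(plan, y, imag, y)
```

If the imagery patterns were drawn from unrelated prototypes instead, transfer accuracy would fall to chance, mirroring the null cross-decoding the authors report in EVC.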
... The IFG has been increasingly discussed in recent years as a crucial area of the neural network underlying the perception of real faces. The (right) IFG was for instance recruited when important cues were detected (Hampshire et al. 2010), responded selectively to items that were of most relevance to the currently intended task (Hampshire et al. 2009) and was more strongly activated when subjects imagined faces of famous people and answered questions about envisioned facial features like mouth or eyes (Ishai et al. 2002). ...
Article
The most basic aspect of face perception is simply detecting the presence of a face, which requires the extraction of features that it has in common with other faces. Putatively, it is caused by matching high-dimensional sensory input with internal face templates, achieved through a top-down mediated coupling between prefrontal regions and brain areas in the occipito-temporal cortex (“core system of face perception”). Illusory face detection tasks can be used to study these top-down influences. In the present functional magnetic resonance imaging study, we showed that illusory face perception activated just as real faces the core system, albeit with atypical left-lateralization of the occipital face area. The core system was coupled with two distinct brain regions in the lateral prefrontal (inferior frontal gyrus, IFG) and orbitofrontal cortex (OFC). A dynamic causal modeling (DCM) analysis revealed that activity in the core system during illusory face detection was upregulated by a modulatory face-specific influence of the IFG, not as previously assumed by the OFC. Based on these findings, we were able to develop the most comprehensive neuroanatomical framework of illusory face detection until now.
... It is worth noting that we were able to decode familiarity of faces from the left but not from the right FFA. This asymmetry in the involvement of FFA during visual imagery of faces has been reported by previous studies, some indicating stronger recruitment within the right (O'Craven and Kanwisher, 2000) and others within the left hemisphere (Ishai, 2002). Note also that the left hemisphere has been reported to be more frequently activated by visual imagery tasks (Winlove et al., 2018). ...
Article
Visual imagery relies on a widespread network of brain regions, partly engaged during the perception of external stimuli. Beyond the recruitment of category-selective areas (FFA, PPA), perception of familiar faces and places has been reported to engage brain areas associated with semantic information, comprising the precuneus, temporo-parietal junction (TPJ), medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Here we used multivariate pattern analysis (MVPA) to examine to which degree areas of the visual imagery network, category-selective and semantic areas contain information regarding the category and familiarity of imagined stimuli. Participants were instructed via auditory cues to imagine personally familiar and unfamiliar stimuli (i.e. faces and places). Using region-of-interest (ROI)-based MVPA, we were able to distinguish between imagined faces and places within nodes of the visual imagery network (V1, SPL, aIPS), within category-selective inferotemporal regions (FFA, PPA) and across all brain regions of the extended semantic network (i.e. precuneus, mPFC, IFG and TPJ). Moreover, we were able to decode familiarity of imagined stimuli in the SPL and aIPS, and in some regions of the extended semantic network (in particular, right precuneus, right TPJ), but not in V1. Our results suggest that posterior visual areas - including V1 - host categorical representations about imagined stimuli, and that stimulus familiarity might be an additional aspect that is shared between perception and visual imagery.
... In this vein, further research on the juxtaposition between memory and self on the person domain was highlighted as an important forefront in neurocognition (66,67). Notably, the region identified in the RSA is adjacent to visual areas, involved also in visual imagery (68,69). The identified region may therefore relate to the appearance of the people ... (Table caption: topics associated with the peak voxel at the left retrosplenial cortex, MNI coordinates −9, −55, 11, found in our searchlight.)
Article
"Mental travel" is a cognitive concept embodying the human capacity to intentionally disengage from the here and now, and mentally experience the world from different perspectives. We explored how individuals mentally "travel" to the point-of-view (POV) of other people in varying levels of personal closeness and from these perspectives process these people's social network. Under fMRI, participants were asked to "project" themselves to the POVs of four different people: a close other, a non-close other, a famous-person, and their own-self, and rate the level of affiliation (closeness) to different individuals in the respective social network. Participants were always faster making judgments from their own POV compared to other POVs (self-projection effect) and for people who were personally closer to their adopted POV (social-distance effect). Brain activity at the medial prefrontal and anterior cingulate cortex in the self-POV was higher, compared to all other conditions. Activity at the right temporoparietal junction and medial parietal cortex was found to distinguish between the personally related (self, close and non-close others) and unrelated (famous-person) people. No difference was found between mental travel to the POVs of close and non-close others. Regardless of POV, the precuneus, anterior cingulate cortex, prefrontal cortex, and temporoparietal junction distinguished between close and distant individuals within the different social networks. Representational similarity analysis implicated the left retrosplenial cortex as crucial for social distance processing across all POVs. These distinctions suggest several constraints regarding our ability to adopt others' POV, and process not only ours but also other people's social networks.
... Furthermore, sharing similar neuro-cognitive circuits with perception (Dijkstra et al., 2017, 2019), MI has been shown to be capable of modulating it (Fazekas and Nanay 2017;Andrade et al. 2020). For example, the frontal cortex has been implicated in selective attention during both perception and MI (Nobre et al. 2004;Ishai et al. 2002). ...
Article
Full-text available
Focus of attention (FOA) has been shown to affect human motor performance. Research into FOA has mainly posited it as either external or internal to-the-body (EFOA and IFOA, respectively). However, this binary paradigm overlooks the dynamic interactions among the individual, the task, and the environment, which are core to many disciplines, including dance. This paper reviews the comparative effects of EFOA and IFOA on human motor performance. Next, it identifies challenges within this EFOA–IFOA binary paradigm at the conceptual, definitional, and functional levels, which could lead to misinterpretation of research findings, thus impeding current understanding of FOA. Building on these challenges and in an effort to expand the current paradigm into a non-binary one, it offers an additional FOA category—dynamic interactive FOA—which highlights the dynamic interactions existing between EFOA and IFOA. Mental imagery is then proposed as a suitable approach for separately studying the different FOA subtypes. Lastly, clinical and research applications of a dynamic interactive FOA perspective for a wide range of domains, from motor rehabilitation to sports and dance performance enhancement, are discussed.
... Pictures of celebrities have been widely used as stimuli across varied areas of study, with applications in forensic psychology (Greene & Fraser, 2002), neurosciences (Ishai et al., 2002), and cognitive psychology (Cleary & Specker, 2007). For example, these stimuli have been used to understand which facial characteristics are essential for facial identification (e.g., the presence or absence of eyebrows; Sadr et al., 2003). ...
Article
Research on familiar faces has been conducted in different countries and often resorts to celebrity faces, stimuli that are highly constrained by geographic context and cultural peculiarities, since many celebrities are famous only in particular countries. Despite their relevance to psychological research, there are no normative studies of celebrity face recognition in Portugal. In this work, we developed a database of 160 black-and-white pictures of famous persons' faces. The data were collected in two studies. In study 1, participants were asked to recognize and name celebrity faces; in study 2, celebrity names were rated for AoA, familiarity, and distinctiveness. Data were gathered from two samples of Portuguese young adults aged between 18 and 25 years, and both procedures were performed online through a questionnaire created in Qualtrics software. This database provides ratings of AoA, familiarity, facial distinctiveness, recognition rate, and naming rate for each celebrity, which will allow further selection of celebrities based on these five attributes for studies using Portuguese samples. Possible relationships among these five variables were also analyzed and presented, highlighting facial distinctiveness as a predictor of both the naming and recognition rates of celebrity faces.
... However, there are some important differences between imagery and occlusion. Imagery can be prompted from short- or long-term memory, which involve different brain regions (Ishai, 2002). Mental imagery can be considered to encompass situations in which there is a visual percept that is not produced via current sensation. ...
Article
Full-text available
Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain ‘fills-in’ information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus.
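The time-resolved decoding procedure described above can be sketched in a few lines: at each time point, a classifier is trained on the channel pattern and scored with cross-validation. Everything here is a simulated stand-in for illustration (the epochs, the six-position labels, and the injected "stimulus response" in the later time window are assumptions, not the study's recordings):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))  # synthetic EEG epochs
y = np.tile(np.arange(6), 20)                             # six stimulus positions

# Inject a weak position-dependent signal at later time points
# to mimic a stimulus-evoked response on one channel.
for t in range(25, n_times):
    X[:, 0, t] += y * 0.8

# Decode position separately at every time point (5-fold cross-validation).
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

early = accuracy[:25].mean()   # pre-signal window: should sit near chance (1/6)
late = accuracy[25:].mean()    # signal window: should exceed chance
```

The resulting `accuracy` time course is the decoding curve one would inspect for above-chance onset latencies.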
... An early intracerebral EEG study reported a larger evoked response in the right amygdala to the presentation of famous faces than to unknown faces (Seeck et al., 1993). Some functional neuroimaging studies also show significant activation of this region, with a right-hemisphere predominance, during the presentation of famous faces (Ishai et al., 2002, 2005; Avidan et al., 2014), whereas others show modulation of its activity according to the level of familiarity (Dubois et al., 1999; Gobbini et al., 2004; Pierce et al., 2004; Ishai et al., 2005; Sugiura et al., 2011; ...
Thesis
Full-text available
The ventral visual pathway, extending from occipital regions to anterior temporal regions, is specialized in the recognition, through the visual modality, of the objects and people encountered in everyday life. Many functional magnetic resonance imaging studies have investigated the cerebral bases of visual recognition. However, the susceptibility of this technique to magnetic artifacts in anterior temporal lobe regions has led to an underestimation of the role of these regions within the ventral pathway. The aim of this thesis is to better understand the mechanisms of visual recognition within the ventral occipito-temporal cortex, and in particular to clarify the contribution of posterior and anterior temporal structures to the implementation of visual recognition mechanisms and to their link with semantic memory. To this end, we rely on a multimodal approach combining neuropsychology, fast periodic visual stimulation (FPVS), and scalp EEG and intracerebral EEG (SEEG) recordings, in neurotypical participants and participants with epilepsy.
We report five empirical studies in which we demonstrate that (1) patients with anterior temporal lobe epilepsy (i.e., the type of focal epilepsy most frequently concerned by an SEEG procedure) show typical performance in individual face discrimination, (2) electrical stimulation of the right anterior fusiform gyrus can produce a transient deficit specific to face recognition, even when no naming is required, (3) discriminating familiar faces among unknown faces engages a large network of bilateral ventral structures including anterior and medial temporal regions, (4) certain structures of the left ventral anterior temporal lobe are involved in integrating a familiar face and its name into a unified representation, and (5) bilateral ventral anterior temporal regions are engaged in implementing semantic representations associated with written words. Overall, our work shows that (1) the visual recognition network is organized along the ventral visual pathway according to a progressive hierarchy along the posterior-anterior axis, within which a gradual transition occurs between predominantly perceptual representations and increasingly abstract semantic representations, and (2) the regions involved in visual recognition are strongly lateralized in posterior ventral regions and become bilateral in ventral anterior temporal regions.
... Zeman et al. (2010) then asked MX and a group of matched controls to perform a simple perceptual task and an imagery task in an fMRI study. Using a paradigm originally developed by Ishai, Haxby, and Ungerleider (2002), Zeman et al. presented MX and controls with pictures of famous faces that participants simply had to look at, and with the names of famous people with the instruction to try to imagine the face of the person named. No behavioral data were collected, but it was clear that all participants were motivated to perform the task. ...
Article
Full-text available
General Audience Summary Researchers who study human memory often assume that the general principles that govern how memory works are the same for all healthy adults. The contents of memory—that is, what we remember—obviously varies from person to person based on their individual experience. However, why we remember some things and not others, how quickly we forget, why we forget, how we remember the order in which to do things, how we remember our ATM number, what the limits on memory are, and how our memory ability changes as we grow older are all thought to follow the same general pattern for everyone. So, the researchers look at the average pattern across groups of people and assume that the average pattern reflects how everyone in the group is doing the memory task that is designed for the research. This article claims that what is not always recognized is that there are different ways in which our brains can remember. That is, the brain has a range of memory tools available, and how each memory tool works might be the same for everyone, but people could vary in which combination of memory tools they use when remembering in everyday life or when taking part as volunteers in research studies of memory. For example, when trying to learn and remember words in a new language, some people might say the word over and over to themselves, others might try to think about whether it sounds like a word they know, and still others might try to visualize what the word looks like and what it means. So, different people might do the same task in very different ways, and if we only look at the average over groups of people, we will miss important insights into the different memory tools that people have available and how they use them.
... There is a lack of general consensus regarding the role of V1 during visual imagery. A number of positron emission tomography (PET; Kosslyn et al., 1993) and functional magnetic resonance imaging (fMRI) studies (Amedi, Malach, & Pascual-Leone, 2005; Ishai, 2002; Klein, Paradis, Poline, Kosslyn, & Le Bihan, 2000; Slotnick, Thompson, & Kosslyn, 2005) demonstrated recruitment of V1 during visual mental imagery tasks. By contrast, other studies failed to observe any reliable recruitment of V1 (Ishai, Ungerleider, & Haxby, 2000; Sack et al., 2002), or found a deactivation of V1 (Mellet et al., 2000; for a review, see Kosslyn & Thompson, 2003). ...
Article
In the absence of input from the external world, humans are still able to generate vivid mental images. This cognitive process, known as visual mental imagery, involves a network of prefrontal, parietal, inferotemporal, and occipital regions. Using multivariate pattern analysis (MVPA), previous studies were able to distinguish between the different orientations of imagined gratings, but not between more complex imagined stimuli, such as common objects, in early visual cortex (V1). Here we asked whether letters, simple shapes, and objects can be decoded in early visual areas during visual mental imagery. In a delayed spatial judgment task, we asked participants to observe or imagine stimuli. To examine whether it is possible to discriminate between neural patterns during perception and visual mental imagery, we performed ROI-based and whole-brain searchlight-based MVPA. We were able to decode imagined stimuli in early visual (V1, V2), parietal (SPL, IPL, aIPS), inferotemporal (LOC) and prefrontal (PMd) areas. In a subset of these areas (i.e. V1, V2, LOC, SPL, IPL and aIPS), we also obtained significant cross-decoding across visual imagery and perception. Moreover, we observed a linear relationship between behavioral accuracy and the amplitude of the BOLD signal in parietal and inferotemporal cortices, but not in early visual cortex, in line with the view that these areas contribute to the ability to perform visual imagery. Together, our results suggest that in the absence of bottom-up visual inputs, patterns of functional activation in early visual cortex allow distinguishing between different imagined stimulus exemplars, most likely mediated by signals from parietal and inferotemporal areas.
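The cross-decoding logic described above (train a classifier on perception trials, test it on imagery trials within an ROI) can be sketched as follows. The voxel patterns, the shared per-stimulus "templates", and the weaker imagery response are simulated assumptions for illustration, not data from the study:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels, n_stims, n_reps = 100, 3, 20   # e.g. letter, shape, object

# One multi-voxel "template" per stimulus, shared across perception and
# imagery; imagery evokes the same pattern more weakly, amid the same noise.
templates = rng.standard_normal((n_stims, n_voxels))
labels = np.tile(np.arange(n_stims), n_reps)             # 60 trials per condition
X_percept = templates[labels] + rng.standard_normal((60, n_voxels))
X_imagery = 0.4 * templates[labels] + rng.standard_normal((60, n_voxels))

# Cross-decoding: fit on perception patterns, score on imagery patterns.
clf = LinearSVC(dual=False).fit(X_percept, labels)
cross_acc = clf.score(X_imagery, labels)
```

Above-chance `cross_acc` is the signature of a representational format shared between perception and imagery; with no shared template, it would fall to 1/3 here.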
Article
Full-text available
Visual imagery, or the mental simulation of visual information from memory, could serve as an effective control paradigm for a brain-computer interface (BCI) due to its ability to directly convey the user’s intention with many natural ways of envisioning an intended action. However, multiple initial investigations into using visual imagery as a BCI control strategy have been unable to fully evaluate the capabilities of true spontaneous visual mental imagery. One major limitation in these prior works is that the target image is typically displayed immediately preceding the imagery period. This paradigm does not capture spontaneous mental imagery as would be necessary in an actual BCI application, but something more akin to short-term retention in visual working memory. Results from the present study show that short-term visual imagery following the presentation of a specific target image provides a stronger, more easily classifiable neural signature in EEG than spontaneous visual imagery from long-term memory following an auditory cue for the image. We also show that short-term visual imagery and visual perception share commonalities in the most predictive electrodes and spectral features. However, visual imagery received greater influence from frontal electrodes whereas perception was mostly confined to occipital electrodes. This suggests that visual perception is primarily driven by sensory information whereas visual imagery has greater contributions from areas associated with memory and attention. This work provides the first direct comparison of short-term and long-term visual imagery tasks and provides greater insight into the feasibility of using visual imagery as a BCI control strategy.
Article
Nearly 50 years of research has focused on faces as a special visual category, especially during development. Yet it remains unclear how spatial patterns of neural similarity of faces and places relate to how information processing supports subsequent recognition of items from these categories. The current study uses representational similarity analysis and functional imaging data from 9- and 10-year-old youth during an emotional n-back task from the Adolescent Brain Cognitive Development (ABCD) Study 3.0 data release to relate spatial patterns of neural similarity during working memory to subsequent out-of-scanner task performance on a recognition memory task. Specifically, we examine how similarities in representations within face categories (neutral, happy, and fearful faces) and representations between visual categories (faces and places) relate to subsequent recognition memory of these visual categories. Although working memory performance was higher for faces than places, subsequent recognition memory was greater for places than faces. Representational similarity analysis revealed category-specific patterns in face- and place-sensitive brain regions (fusiform gyrus, parahippocampal gyrus) compared with a nonsensitive visual region (pericalcarine cortex). Similarity within face categories and dissimilarity between face and place categories in the parahippocampus was related to better recognition of places from the n-back task. Conversely, in the fusiform, similarity within face categories and their relative dissimilarity from places was associated with better recognition of new faces, but not old faces. These findings highlight how the representational distinctiveness of visual categories influences what information is subsequently prioritized in recognition memory during development.
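At its core, the representational similarity analysis described above reduces to correlating multi-voxel patterns across trials and comparing average within-category similarity (face–face) against between-category similarity (face–place). A minimal sketch with simulated patterns (the templates, noise level, and trial counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200
face_template = rng.standard_normal(n_voxels)    # category-level pattern
place_template = rng.standard_normal(n_voxels)

# Ten noisy trials per category around each category template.
faces = face_template + rng.standard_normal((10, n_voxels)) * 0.5
places = place_template + rng.standard_normal((10, n_voxels)) * 0.5
patterns = np.vstack([faces, places])

corr = np.corrcoef(patterns)                     # 20 x 20 similarity matrix

# Average similarity within the face category (off-diagonal entries only)
# versus between faces and places.
within_faces = corr[:10, :10][np.triu_indices(10, k=1)].mean()
between = corr[:10, 10:].mean()
```

A positive `within_faces - between` gap is the category-specific pattern that the study relates to subsequent recognition performance.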
Article
Our ability to remember the past depends on neural processes set in train at the moment an event is experienced. These processes can be studied by segregating brain activity according to whether an event is later remembered or forgotten. The present review integrates a large number of studies examining this differential brain activity, labeled the subsequent memory effect (SME), with the ERP technique, into a functional organization and discusses routes for further research. Based on the reviewed literature, we suggest that memory encoding is implemented by multiple processes, typically reflected in three functionally different subcomponents of the ERP SME elicited by study stimuli, which presumably interact with preparatory SME activity preceding the to-be-encoded event. We argue that ERPs are a valuable method in the SME paradigm because they have a sufficiently high temporal resolution to disclose the subcomponents of encoding-related brain activity. Implications of the proposed functional organization for future studies using the SME procedure in basic and applied settings will be discussed.
Article
Purpose: To comprehensively explore the potential brain activity abnormalities affected by MRI-negative temporal lobe epilepsy (TLE) and to detect whether the changes were associated with cognition and help in the diagnosis or lateralization. Method: Six static intrinsic brain activity (IBA) indicators (ALFF, fALFF, ReHo, DC, GSCorr, VMHC) and their corresponding six temporal dynamic indicators in 39 unilateral MRI-negative TLE patients and 42 healthy volunteers were compared. Correlation analyses were performed between these indicators in areas displaying group differences, cognitive function, and epilepsy duration. ROC analyses were performed to test the diagnostic and lateralization ability of the IBA parameters. Results: Considerable overlap was present among the abnormal brain regions detected by different static and dynamic indicators, including (1) alteration of fALFF, Reho, DC, VMHC, dfALFF, dReHo, and dDC in the temporal neocortex (predominately ipsilateral to the epileptogenic foci); (2) decreased dGSCorr and dVMHC in the occipital lobe. Meanwhile, the ReHo and VMHC values in the temporal neocortex correlated with the cognition scores or epilepsy duration (P < 0.01). The ROC analysis results revealed moderate diagnosis or lateralization efficiency of several IBA indicators (fALFF, dfALFF, ReHo, DC, dDC, and VMHC). Conclusion: The abnormal condition of neuronal activity in the temporal neocortex, predominately lateralized to the epileptic side, was a crucial feature in patients with MRI-negative TLE and might offer diagnosis or lateralization information. The application of dynamic intrinsic brain activity indicators played a complementary role, further revealing the temporal variability decline of the occipital lobe in MRI-negative TLE patients.
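Of the static indicators listed above, ReHo is conventionally computed as Kendall's coefficient of concordance (W) over the time series of a voxel and its neighbors. A minimal sketch, assuming a 27-voxel cubic neighborhood and synthetic time series (the data and neighborhood choice are illustrative, not from the study):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ts):
    """Kendall's W for ts, a (k, n) array of k time series over n time points."""
    k, n = ts.shape
    ranks = rankdata(ts, axis=1)                   # rank each series over time
    rank_sums = ranks.sum(axis=0)                  # summed rank at each time point
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))        # W in [0, 1]

rng = np.random.default_rng(3)
shared = rng.standard_normal(100)
# A synchronized 27-voxel cluster (high ReHo) vs. independent noise (low ReHo).
coherent = shared + rng.standard_normal((27, 100)) * 0.2
incoherent = rng.standard_normal((27, 100))

reho_high = kendalls_w(coherent)
reho_low = kendalls_w(incoherent)
```

Mapping this quantity across all voxels yields the ReHo map that the group comparisons above are run on; the dynamic variant (dReHo) repeats the computation in sliding windows.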
Chapter
Autism spectrum disorder (ASD) is a group of conditions whose main pathophysiologic process is based in the brain, making the task of obtaining tissue for morphologic analysis dependent on post-mortem donation and resulting in small samples in most studies. Nonetheless, progress has been made in elucidating some of the morphological changes and their genetic background in many situations. Almost all the morphological and immunohistochemical changes found in ASD subjects are not specific to this group of disorders, like the alterations found in the cerebellum in ASD and in depression, anxiety disorder, bipolar disorder, panic disorder, and attention-deficit hyperactivity disorder. Whole-brain alterations are described, such as an increase in brain size and head circumference. Some of the studies lack reproducibility, as seen in some of the brainstem alterations described. Immunologically driven alterations have also been described, including a consistent increase in CD8+ T-cell lymphocytes in ASD subjects and alterations involving the blood–brain barrier. Given that ASD is well established as a heritable psychiatric disorder in up to 50% of cases, animal models based on the altered genes found in this group of patients could be established, as seen with the FMR1, TSC1/2, MECP2, SHANK3, NRXN, NLGN, CDH8, SynGAP1, ARID1B, GRIN2B, and TBR1 genes.
Article
Background Insomnia is one of the major symptom-relevant factors in major depressive disorder (MDD), but the neurological mechanisms underlying the mutual effects between insomnia and depression have not been well interpreted. This study aimed at exploring the potential mechanisms linking insomnia and depression based on amygdala-based resting-state functional connectivity (RSFC). Methods In total, 56 MDD patients with low insomnia (MDD-LI), 46 MDD patients with high insomnia (MDD-HI), and 57 healthy controls (HCs) were enrolled and underwent a resting-state functional magnetic resonance imaging (fMRI) scan. An ANOVA test was performed on the RSFC values of the three groups. Correlation analysis was conducted to evaluate the relationship between abnormal RSFC values and clinical features. Results We found that MDD-HI mainly showed increased RSFC in the bilateral superior temporal gyrus (STG), and decreased RSFC in the left supplementary motor area (SMA) and bilateral postcentral gyrus (PoCG), compared with MDD-LI. Correlation analysis indicated that RSFC of the bilateral amygdala with the STG was positively associated with the sleep disturbance score and the adjusted HAMD score. Conclusion Our findings suggest that RSFC in the temporal lobe and other specifically activated regions may be associated with neural circuits involved in insomnia in MDD. These results provide new evidence for understanding the potential mechanisms of major depression and insomnia from the perspective of functional connectivity.
Article
Like the Tigris-Euphrates rivers in the Middle East, Yellow-Yangtze rivers in China, and Mississippi-Colorado rivers in the United States, our brain has two neural pathways (or rivers) where we perceive two distinct values in marketing exchange processes. We identify two neural pathways of reward and information value (RIV) perceptions in the consumer brain leading to engagement, recommendation, and sharing (ERS) behavior in social media marketing. Using fMRI, we show that the first river in the brain (i.e., reward value area of the nucleus accumbens) is activated when consumers are shown visually aesthetic and appealing (versus unappealing) objects in social media advertisements. The second river in the brain (i.e., information value area of the prefrontal cortex) is activated when consumers are shown new (versus outdated) products in social media advertisements. This paper is the first attempt in marketing to provide an integrative brain map for customer value perception in SNS marketing. The conceptual model presented in this paper can be traced back to the traditional consumer attitude change theory as dual processing models of persuasion, yet it can explain the underlying mechanisms of consumer value in the social media context. Using this two rivers brain map, marketers can better identify how their offerings can satisfy the diverse and unique needs of consumers based on RIV.
Article
The fusiform face area (FFA) is a core cortical region for face information processing. Evidence suggests that its sensitivity to faces is largely innate and tuned by visual experience. However, how experience in different time windows shape the plasticity of the FFA remains unclear. In this study, we investigated the role of visual experience at different time points of an individual's early development in the cross-modal face specialization of the FFA. Participants (n = 74) were classified into five groups: congenital blind, early blind, late blind, low vision, and sighted control. Functional magnetic resonance imaging data were acquired when the participants haptically processed carved faces and other objects. Our results showed a robust and highly consistent face-selective activation in the FFA region in the early blind participants, invariant to size and level of abstraction of the face stimuli. The cross-modal face activation in the FFA was much less consistent in other groups. These results suggest that early visual experience primes cross-modal specialization of the FFA, and even after the absence of visual experience for more than 14 years in early blind participants, their FFA can engage in cross-modal processing of face information.
Article
Full-text available
There is increasing evidence that imagination relies on similar neural mechanisms as externally triggered perception. This overlap presents a challenge for perceptual reality monitoring: deciding what is real and what is imagined. Here, we explore how perceptual reality monitoring might be implemented in the brain. We first describe sensory and cognitive factors that could dissociate imagery and perception and conclude that no single factor unambiguously signals whether an experience is internally or externally generated. We suggest that reality monitoring is implemented by higher-level cortical circuits that evaluate first-order sensory and cognitive factors to determine the source of sensory signals. According to this interpretation, perceptual reality monitoring shares core computations with metacognition. This multi-level architecture might explain several types of source confusion as well as dissociations between simply knowing whether something is real and actually experiencing it as real. We discuss avenues for future research to further our understanding of perceptual reality monitoring, an endeavour that has important implications for our understanding of clinical symptoms as well as general cognitive function.
Article
Background The neural mechanisms of sleep beliefs and attitudes in primary insomnia (PI) patients at resting state remain unclear. The aim of this study was to investigate the features of regional homogeneity (ReHo) in PI using resting-state functional magnetic resonance imaging (rsfMRI). Methods Thirty-two PI patients and 34 normal controls (NC) underwent rsfMRI using a 3 T scanner at Tongde Hospital of Zhejiang Province. Participants were assessed with the Dysfunctional Beliefs and Attitudes about Sleep scale (DBAS-16) and Pittsburgh Sleep Quality Index (PSQI). Statistical analyses were performed to determine the regions in which ReHo differed between the two groups. Correlation analyses were performed between the ReHo index of each of these regions and DBAS-16 in PI patients. Results PI patients showed increased ReHo values in the right superior frontal gyrus, and decreased ReHo values in the left cerebellar gyrus, left inferior occipital gyrus (IOG) and left amygdala compared with those of NC. ReHo values in the left IOG were negatively correlated with total DBAS-16 scores, and with scores for "consequences of insomnia" and "worry/helplessness about sleep" in PI patients. Conclusions These results suggest that ReHo alterations in the left IOG may play an important role in the dysfunctional beliefs and attitudes about sleep in PI.
Article
Facilitating information search to support decision making is one of the core purposes of information technology. In both personal and workplace environments, advances in information technology and the availability of information have enabled people to perform much more search and access much more information for decision making than ever before. Because of this abundance of information, there is an increasing need to develop an improved understanding of how people stop search, since information available for most decisions is now almost infinite. Our goal in this paper is to further our understanding of information search and stopping, and we do so by examining the neurocorrelates of stopping information search. This is a process that involves both stopping the search and the decision to stop the search. We asked subjects to search for information about consumer products and to stop when they believed they had enough information to make a subsequent decision about whether to purchase those products while in a functional Magnetic Resonance Imaging (fMRI) chamber. Brain activation patterns revealed an extensive distributed network of areas that are engaged in the decision to stop searching for information that are not engaged in search itself, suggesting that stopping is a complex and cognitively demanding neurological activity. Implications for theory, particularly information overconsumption, and for IT design are discussed.
Preprint
Half a century ago, Donald Hebb posited that mental imagery is a constructive process that emulates perception. Specifically, Hebb claimed that visual imagery results from the reactivation of neural activity associated with viewing images. He also argued that neural reactivation and imagery benefit from the re-enactment of eye movement patterns that first occurred at viewing (fixation reinstatement). To investigate these claims, we applied multivariate pattern analyses to functional MRI (fMRI) and eye-tracking data collected while healthy human participants repeatedly viewed and visualized complex images. We observed that the specificity of neural reactivation correlated positively with vivid imagery and with memory for stimulus image details. Moreover, neural reactivation correlated positively with fixation reinstatement, meaning that image-specific eye movements accompanied image-specific patterns of brain activity during visualization. These findings support the conception of mental imagery as a simulation of perception, and provide evidence of the supportive role of eye-movement in neural reactivation.
Article
This study investigated college students' brain connectivity during life sciences learning with VR contents, using resting-state fMRI. Thirty-six college students underwent resting-state fMRI while learning life science with non-interactive VR contents, interactive VR contents, and paper-based contents. The CONN17 software was applied to analyze the brain-based connectivities specific to each type of content. The results showed that learning with non-interactive VR contents involved functional connectivity related to acquiring abstract information through visual observation and to categorizing information about familiar visual objects. On the other hand, learning with interactive VR contents involved procedural understanding of life science knowledge through functional connections supporting procedural thinking, implementing the learners' perceptual and cognitive representations centered on interaction. Learning with book-based contents, however, largely involved language-based processing, and immersion through familiar learning was not observed. Based on these results, it may be suggested that learning with VR content can be useful in the life science classroom. In addition, the development of life sciences VR content should proceed with consideration of the brain-stimulation characteristics of the corresponding VR content.
Article
Full-text available
Recent evidence points to a role of the primary visual cortex that goes beyond visual processing into high-level cognitive and motor-related functions, including action planning, even in the absence of feedforward visual information. It has been proposed that, at the neural level, motor imagery is a simulation based on motor representations, and neuroimaging studies have shown overlapping and shared activity patterns for motor imagery and action execution in frontal and parietal cortices. Yet the role of the early visual cortex in motor imagery remains unclear. Here we used multivoxel pattern analyses on functional magnetic resonance imaging (fMRI) data to examine whether the content of motor imagery and action intention can be reliably decoded from the activity patterns in the retinotopic location of the object stimulus in the early visual cortex. Further, we investigated whether the discrimination between specific actions generalizes across imagined and intended movements. Eighteen right-handed human participants (11 females) imagined or performed delayed hand actions towards a centrally located object composed of a small shape attached to a large shape. Actions consisted of grasping the large or small shape, or reaching to the center of the object. We found that despite comparable fMRI signal amplitude for different planned and imagined movements, activity patterns in the early visual cortex, as well as in dorsal premotor and anterior intraparietal cortex, accurately represented action plans and action imagery. However, movement content was represented similarly irrespective of whether actions were actively planned or covertly imagined in parietal but not early visual or premotor cortex, suggesting a generalized motor representation only in regions that are highly specialized in object-directed grasping actions and movement goals. In sum, action planning and imagery have overlapping but non-identical neural mechanisms in the cortical action network.
Article
Full-text available
We present a multi-voxel analytical approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations from a neural network to decode neural reactivation in fMRI data collected while participants performed an episodic visual recall task. We show that neural reactivation associated with low-level (e.g. edges), high-level (e.g. facial features), and semantic (e.g. “terrier”) features occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the contributions of low- and high-level features to the vividness of visual memories and challenge a strict interpretation of the posterior-to-anterior visual hierarchy.
Preprint
Full-text available
Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain "fills-in" information about invisible objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics (Kwon et al., 2015). In the present study, we used electroencephalography (EEG) and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and invisible objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/invisible positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Monitoring the position of invisible objects thus utilises similar perceptual processes as processing objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at https://osf.io/8v47t/.
Article
Visual imagery, like vision as such, is widely thought to be supported by two distinct and dissociable processing streams, dedicated to object representation and spatial analysis respectively. However, this simple dichotomy has been contested, with recent studies suggesting that impairments in perception-for-action and visuo-spatial imagery may reflect a more general deficit in space-based attention. Although previous studies have revealed the impact of brain damage on artistic expression, few have examined the impact on artistic expression in terms of the perceptual and spatial components of either visual processing or visual imagery. Here we present the case of an artist whose artistic expression was dramatically affected following devastating posterior brain damage. Of particular interest, we demonstrate how these changes relate to impairments in integrating and aligning different spatial features in both visual processing and visual imagery, suggestive of a general simultanagnosia not previously described.
Article
Full-text available
The hippocampus, amygdala and entorhinal cortex receive convergent input from temporal neocortical regions specialized for processing complex visual stimuli and are important in the representation and recognition of visual images. Recording from 427 single neurons in the human hippocampus, entorhinal cortex and amygdala, we found a remarkable degree of category-specific firing of individual neurons on a trial-by-trial basis. Of the recorded neurons, 14% responded selectively to visual stimuli from different categories, including faces, natural scenes and houses, famous people and animals. Based on the firing rate of individual neurons, stimulus category could be predicted with a mean probability of error of 0.24. In the hippocampus, the proportion of neurons responding to spatial layouts was greater than to other categories. Our data provide direct support for the role of human medial temporal regions in the representation of different categories of visual stimuli.
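The trial-by-trial decoding described in this abstract — predicting the stimulus category from single-neuron firing rates — can be illustrated with a toy simulation. The firing rates, the one-unit-per-category layout, and the winner-take-all rule below are assumptions made for the sketch, not the study's recorded data or decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: one category-selective unit per category, firing at a
# higher mean Poisson rate (spikes/s) when its preferred category is shown.
categories = ["faces", "scenes", "houses", "people", "animals"]
pref_rate, base_rate = 12.0, 3.0

def simulate_trial(true_cat):
    """Spike counts of all units on one 1-second trial."""
    rates = [pref_rate if c == true_cat else base_rate for c in categories]
    return rng.poisson(rates)

def decode(counts):
    """Predict the category whose unit fired the most (winner-take-all)."""
    return categories[int(np.argmax(counts))]

# Simulate 200 trials per category and measure the decoding error rate.
trials = [(c, simulate_trial(c)) for c in categories for _ in range(200)]
error_rate = float(np.mean([decode(counts) != c for c, counts in trials]))
print(f"decoding error: {error_rate:.3f}")
```

With well-separated rates the error rate is small; the study's reported mean error probability of 0.24 reflects real neurons, which are far noisier and less cleanly tuned than this idealized sketch.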
Article
Full-text available
Neural activity was measured in 10 healthy volunteers by functional MRI while they viewed familiar and unfamiliar faces and listened to familiar and unfamiliar voices. The familiar faces and voices were those of people personally known to the subjects; they were not people who are more widely famous in the media. Changes in neural activity associated with stimulus modality irrespective of familiarity were observed in modules previously demonstrated to be activated by faces (fusiform gyrus bilaterally) and voices (superior temporal gyrus bilaterally). Irrespective of stimulus modality, familiarity of faces and voices (relative to unfamiliar faces and voices) was associated with increased neural activity in the posterior cingulate cortex, including the retrosplenial cortex. Our results suggest that recognizing a person involves information flow from modality-specific modules in the temporal cortex to the retrosplenial cortex. The latter area has recently been implicated in episodic memory and emotional salience, and now seems to be a key area involved in assessing the familiarity of a person. We propose that disturbances in the information flow described may underlie neurological and psychiatric disorders of the recognition of familiar faces, voices and persons (prosopagnosia, phonagnosia and Capgras delusion, respectively).
Article
Full-text available
Sixteen subjects closed their eyes and visualized uppercase letters of the alphabet at two sizes, as small as possible or as large as possible while remaining "visible." Subjects evaluated a shape characteristic of each letter (e.g., whether it has any curved lines), and responded as quickly as possible. Cerebral blood flow was normalized to the same value for each subject, and relative blood flow was computed for a set of regions of interest. The mean response time for each subject in the task was regressed onto the blood flow values. Blood flow in area 17 was negatively correlated with response time (r = -0.65), as was blood flow in area 19 (r = -0.66), whereas blood flow in the inferior parietal lobe was positively correlated with response time (r = 0.54). The first two effects persisted even when variance due to the other correlations was removed. These findings suggest that individual differences in the activation of specific brain loci are directly related to performance of tasks that rely on processing in those loci.
Article
Full-text available
Visual imagery is the invention or recreation of a perceptual experience in the absence of retinal input. The degree to which the same neural representations are involved in both visual imagery and visual perception is unclear. Previous studies have shown that visual imagery interferes with perception (Perky effect). We report here psychophysical data showing a direct facilitatory effect of visual imagery on visual perception. Using a lateral masking detection paradigm of a Gabor target, flanked by peripheral Gabor masks, observers performed imagery tasks that were preceded by perceptual tasks. We found that both perceived and imaginary flanking masks can reduce the contrast detection threshold. At short target-to-mask distances imagery induced a threshold reduction of 50% as compared with perception, while at long target-to-mask distances imagery and perception had similar facilitatory effects. The imagery-induced facilitation was specific to the orientation of the stimulus, as well as to the eye used in the task. These data indicate the existence of a stimulus-specific short-term memory system that stores the sensory trace and enables reactivation of quasi-pictorial representations by top-down processes. We suggest that stimulus parameters dominate the imagery-induced facilitation at short target-to-mask distances, yet the top-down component contributes to the effect at long target-to-mask distances.
Article
Full-text available
Visual imagery and perception share several functional properties and apparently share common underlying brain structures. A main approach to the scientific study of visual imagery is exploring the effects of mental imagery on perceptual processes. Previous studies have shown that visual imagery interferes with perception (Perky effect). Recently we have shown a direct facilitatory effect of visual imagery on visual perception. In an attempt to differentiate the conditions under which visual imagery interferes with or facilitates visual perception, we designed new experimental paradigms using detection tasks of a Gabor target. We found that imagery-induced interference and facilitation are memory-dependent: visual recall of common objects from long-term memory can interfere with perception, while on short-term memory tasks facilitation can be obtained. These results support the distinction between low-level and structural representations in visual memory.
Article
Full-text available
The perception of faces is sometimes regarded as a specialized task involving discrete brain regions. In an attempt to identify face-specific cortex, we used functional magnetic resonance imaging (fMRI) to measure activation evoked by faces presented in a continuously changing montage of common objects or in a similar montage of nonobjects. Bilateral regions of the posterior fusiform gyrus were activated by faces viewed among nonobjects, but when viewed among objects, faces activated only a focal right fusiform region. To determine whether this focal activation would occur for another category of familiar stimuli, subjects viewed flowers presented among nonobjects and objects. While flowers among nonobjects evoked bilateral fusiform activation, flowers among objects evoked no activation. These results demonstrate that both faces and flowers activate large and partially overlapping regions of inferior extrastriate cortex. A smaller region, located primarily in the right lateral fusiform gyrus, is activated specifically by faces.
Article
Full-text available
PET was used to image the neural system underlying visuospatial attention. Analysis of data at both the group and individual-subject level provided anatomical resolution superior to that described to date. Six right-handed male subjects were selected from a pilot behavioural study in which behavioural responses and eye movements were recorded. The attention tasks involved covert shifts of attention, where peripheral cues indicated the location of subsequent target stimuli to be discriminated. One attention condition emphasized reflexive aspects of spatial orientation, while the other required controlled shifts of attention. PET activations agreed closely with the cortical regions recently proposed to form the core of a neural network for spatial attention. The two attention tasks evoked largely overlapping patterns of neural activation, supporting the existence of a general neural system for visuospatial attention with regional functional specialization. Specifically, neocortical activations were observed in the right anterior cingulate gyrus (Brodmann area 24), in the intraparietal sulcus of right posterior parietal cortex, and in the mesial and lateral premotor cortices (Brodmann area 6).
Article
Full-text available
We measured normalized regional cerebral blood flow (NrCBF) using positron emission tomography (PET) and oxygen-15-labeled water in eight young right-handed healthy volunteers, selected as high-imagers, during 2 runs of 3 different conditions: 1, rest in total darkness; 2, visual exploration of a map; 3, mental exploration of the same map in total darkness. NrCBF images were aligned with individual magnetic resonance images (MRI), and NrCBF variations between pairs of measurements (N = 15) were computed in regions of interest having anatomical boundaries that were defined using a three-dimensional (3-D) reconstruction of each subject's MRI. During visual exploration, we found bilateral activations of primary visual areas, superior and inferior occipital gyri, fusiform and lingual gyri, cuneus and precuneus, and bilateral superior parietal and angular gyri. The right lateral premotor area was also activated during this task, while superior temporal gyri and Broca's area were deactivated. By contrast, mental exploration activated the right superior occipital cortex, the supplementary motor area, and the cerebellar vermis. No activation was observed in the primary visual area. These results argue for a specific participation of the superior occipital cortex in the generation and maintenance of visual mental images.
Article
Full-text available
SuperLab is a general-purpose psychology testing package for the Macintosh. SuperLab presents static visual and auditory stimuli in blocks of trials, each trial consisting of a user-specified sequence of stimuli. Responses can be recorded from the keyboard or from switches connected to an I/O board. Stimuli can be contingent on subjects’ responses, allowing feedback based on response accuracy. Timing uses Time Manager routines from the Macintosh Toolbox. Data are recorded in a text format with tabs delimiting fields, allowing analysis and presentation by other Macintosh spreadsheet, statistics, and graph-making applications. SuperLab has a Macintosh user interface for developing experiments. Psychological tasks can also be designed and modified with any application that generates a text format file.
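Because SuperLab writes its results as tab-delimited text, the output can be read directly by standard tools, as the abstract notes. A minimal sketch in Python, assuming a hypothetical export whose column names are invented for illustration (SuperLab's actual field layout may differ):

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical tab-delimited export; the column names are illustrative
# assumptions, not SuperLab's actual schema.
raw = (
    "subject\tcondition\tresponse\trt_ms\n"
    "S01\timagery\tcorrect\t742\n"
    "S01\tperception\tcorrect\t518\n"
    "S01\timagery\terror\t911\n"
    "S01\tperception\tcorrect\t497\n"
)

rows = list(csv.DictReader(io.StringIO(raw), delimiter="\t"))

# Mean response time of correct trials, per condition.
rts = defaultdict(list)
for row in rows:
    if row["response"] == "correct":
        rts[row["condition"]].append(int(row["rt_ms"]))

summary = {cond: mean(values) for cond, values in rts.items()}
print(summary)  # {'imagery': 742, 'perception': 507.5}
```

In practice one would pass the exported file to `csv.DictReader` directly; `io.StringIO` stands in for a file here so the sketch is self-contained.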
Article
Full-text available
Face perception requires representation of invariant aspects that underlie identity recognition as well as representation of changeable aspects, such as eye gaze and expression, that facilitate social communication. Using functional magnetic resonance imaging (fMRI), we investigated the perception of face identity and eye gaze in the human brain. Perception of face identity was mediated more by regions in the inferior occipital and fusiform gyri, and perception of eye gaze was mediated more by regions in the superior temporal sulci. Eye-gaze perception also seemed to recruit the spatial cognition system in the intraparietal sulcus to encode the direction of another's gaze and to focus attention in that direction.
Article
Full-text available
Does mental imagery involve the activation of representations in the visual system? Systematic effects of imagery on visual signal detection performance have been used to argue that imagery and the perceptual processing of stimuli interact at some common locus of activity (Farah, 1985). However, such a result is neutral with respect to the question of whether the interaction occurs during modality-specific visual processing of the stimulus. If imagery affects stimulus processing at early, modality-specific stages of stimulus representation, this implies that the shared stimulus representations are visual, whereas if imagery affects stimulus processing only at later, amodal stages of stimulus representation, this implies that imagery involves more abstract, postvisual stimulus representations. To distinguish between these two possibilities, we repeated the earlier imagery-perception interaction experiment while recording event-related potentials (ERPs) to stimuli from 16 scalp electrodes. By observing the time course and scalp distribution of the effect of imagery on the ERP to stimuli, we can put constraints on the locus of the shared representations for imagery and perception. An effect of imagery was seen within 200 ms following stimulus presentation, at the latency of the first negative component of the visual ERP, localized at the occipital and posterior temporal regions of the scalp, that is, directly over visual cortex. This finding provides support for the claim that mental images interact with percepts in the visual system proper and hence that mental images are themselves visual representations.
Article
Full-text available
It has been proposed that visual-memory traces are located in the temporal lobes of the cerebral cortex, as electric stimulation of this area in humans results in recall of imagery. Lesions in this area also affect recognition of an object after a delay in both humans and monkeys, indicating a role in short-term memory of images. Single-unit recordings from the temporal cortex have shown that some neurons continue to fire when one of two or four colours are to be remembered temporarily. But neuronal responses selective to specific complex objects, including hands and faces, cease soon after the offset of stimulus presentation. These results led to the question of whether any of these neurons could serve the memory of complex objects. We report here a group of shape-selective neurons in an anterior ventral part of the temporal cortex of monkeys that exhibited sustained activity during the delay period of a visual short-term memory task. The activity was highly selective for the pictorial information to be memorized and was independent of the physical attributes such as size, orientation, colour or position of the object. These observations show that the delay activity represents the short-term memory of the categorized percept of a picture.
Article
Full-text available
Previous studies have shown that sensory stimulation and voluntary motor activity increase regional cerebral glucose consumption and regional cerebral blood flow (rCBF). The present study had 3 purposes: (1) to examine whether pure mental activity changed the oxidative metabolism of the brain and, if so, (2) to examine which anatomical structures were participating in the mental activity; and to examine whether there was any coupling of the rCBF to the physiological changes in the regional cerebral oxidative metabolism (rCMRO2). With a positron-emission tomograph (PET), we measured the rCMRO2, rCBF, and regional cerebral blood volume (rCBV) in independent sessions lasting 100 sec each. A dynamic method was used for the measurement of rCMRO2. The rCMRO2, rCBF, and rCBV were measured in 2 different states in 10 young, healthy volunteers: at rest and when visually imagining a specific route in familiar surroundings. The rCBF at rest was linearly correlated to the rCMRO2: rCBF (in ml/100 gm/min) = 11.4 rCMRO2 + 11.9. The specific mental visual imagery increased the rCMRO2 in 25 cortical fields, ranging in size from 2 to 10 cm3, located in homotypical cortex. Active fields were located in the superior and lateral prefrontal cortex and the frontal eye fields. The strongest increase of rCMRO2 appeared in the posterior superior lateral parietal cortex and the posterior superior medial parietal cortex in precuneus. Subcortically, the rCMRO2 increased in neostriatum and posterior thalamus. These focal metabolic increases were so strong that the CMRO2 of the whole brain increased by 10%. The rCBF increased proportionally in these active fields and structures, such that d(rCBF) in ml/100 gm/min = 11.1 d(rCMRO2). Thus, a dynamic coupling of the rCBF to the rCMRO2 was observed during the physiological increase in neural metabolism. 
On the basis of previous functional activation studies and our knowledge of anatomical connections in man and other primates, the posterior medial and lateral parietal cortices were classified as remote visual-association areas participating in the generation of visual images of spatial scenes from memory, and the posterior thalamus was assumed to participate in the retrieval of such memories.
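The two coupling relations reported in this abstract are simple linear fits and can be applied directly. The numeric inputs below are illustrative values chosen for the sketch, not measurements from the study:

```python
# Coupling reported in the abstract: at rest, rCBF = 11.4 * rCMRO2 + 11.9
# (rCBF in ml/100 g/min); during activation, d(rCBF) = 11.1 * d(rCMRO2).

def rcbf_rest(rcmro2):
    """Resting rCBF predicted from regional oxidative metabolism."""
    return 11.4 * rcmro2 + 11.9

def rcbf_change(d_rcmro2):
    """rCBF change predicted from a change in rCMRO2 during activation."""
    return 11.1 * d_rcmro2

# Illustrative inputs only (not values from the study): a region with
# rCMRO2 = 3.0 ml/100 g/min that increases by 0.3 during mental imagery.
print(round(rcbf_rest(3.0), 2))    # 46.1
print(round(rcbf_change(0.3), 2))  # 3.33
```

The near-identical slopes of the two fits are what the authors describe as a dynamic coupling of blood flow to oxidative metabolism during physiological increases in neural activity.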
Article
Full-text available
We report here the use of positron emission tomography (PET) to reveal that the primary visual cortex is activated when subjects close their eyes and visualize objects. The size of the image is systematically related to the location of maximal activity, which is as expected because the earliest visual areas are spatially organized. These results were only evident, however, when imagery conditions were compared to a non-imagery baseline in which the same auditory cues were presented (and hence the stimuli were controlled); when a resting baseline was used (and hence brain activation was uncontrolled), imagery activation was obscured because of activation in visual cortex during the baseline condition. These findings resolve a debate in the literature about whether imagery activates early visual cortex and indicate that visual mental imagery involves 'depictive' representations, not solely language-like descriptions. Moreover, the fact that stored visual information can affect processing in even the earliest visual areas suggests that knowledge can fundamentally bias what one sees.
Article
Full-text available
Detection of a visual target can be facilitated by flanking visual masks. A similar enhancement in detection thresholds was obtained when observers imagined the previously perceived masks. Imagery-induced facilitation was detected for as long as 5 minutes after observation of the masks by the targeted eye. These results indicated the existence of a low-level (monocular) memory that stores the sensory trace for several minutes and enables reactivation of early representations by higher processes. This memory, with its iconic nature, may subserve the interface between mental images and percepts.
Article
Full-text available
A dissociation between human neural systems that participate in the encoding and later recognition of new memories for faces was demonstrated by measuring memory task-related changes in regional cerebral blood flow with positron emission tomography. There was almost no overlap between the brain structures associated with these memory functions. A region in the right hippocampus and adjacent cortex was activated during memory encoding but not during recognition. The most striking finding in neocortex was the lateralization of prefrontal participation. Encoding activated left prefrontal cortex, whereas recognition activated right prefrontal cortex. These results indicate that the hippocampus and adjacent cortex participate in memory function primarily at the time of new memory encoding. Moreover, face recognition is not mediated simply by recapitulation of operations performed at the time of encoding but, rather, involves anatomically dissociable operations.
Article
Full-text available
Positron emission tomography (PET) was used to monitor regional cerebral blood flow variations while subjects were constructing mental images of objects made of three-dimensional cube assemblies from auditorily presented instructions. This spatial mental imagery task was contrasted with both passive listening (LIST) of phonetically matched nonspatial word lists and a silent rest (REST) condition. All three tasks were performed in total darkness. Mental construction (CONS) specifically activated a bilateral occipitoparietal-frontal network, including the superior occipital cortex, the inferior parietal cortex, and the premotor cortex. The right inferior temporal cortex also was activated specifically during this condition, and no activation of the primary visual areas was observed. Bilateral superior and middle temporal cortex activations were common to CONS and LIST tasks when both were compared with the REST condition. These results provide evidence that the so-called dorsal route known to process visuospatial features can be recruited by auditory verbal stimuli. They also confirm previous reports indicating that some mental imagery tasks may not involve any significant participation of early visual areas.
Article
Full-text available
The amygdala is thought to play a crucial role in emotional and social behaviour. Animal studies implicate the amygdala in both fear conditioning and face perception. In humans, lesions of the amygdala can lead to selective deficits in the recognition of fearful facial expressions and impaired fear conditioning, and direct electrical stimulation evokes fearful emotional responses. Here we report direct in vivo evidence of a differential neural response in the human amygdala to facial expressions of fear and happiness. Positron-emission tomography (PET) measures of neural activity were acquired while subjects viewed photographs of fearful or happy faces, varying systematically in emotional intensity. The neuronal response in the left amygdala was significantly greater to fearful as opposed to happy expressions. Furthermore, this response showed a significant interaction with the intensity of emotion (increasing with increasing fearfulness, decreasing with increasing happiness). The findings provide direct evidence that the human amygdala is engaged in processing the emotional salience of faces, with a specificity of response to fearful facial expressions.
Article
Full-text available
Working memory involves the short-term maintenance of an active representation of information so that it is available for further processing. Visual working memory tasks, in which subjects retain the memory of a stimulus over brief delays, require both the perceptual encoding of the stimulus and the subsequent maintenance of its representation after the stimulus is removed from view. Such tasks activate multiple areas in visual and prefrontal cortices. To delineate the roles these areas play in perception and working memory maintenance, we used functional magnetic resonance imaging (fMRI) to obtain dynamic measures of neural activity related to different components of a face working memory task: non-selective transient responses to visual stimuli, selective transient responses to faces, and sustained responses over memory delays. Three occipitotemporal areas in the ventral object vision pathway had mostly transient responses to stimuli, indicating their predominant role in perceptual processing, whereas three prefrontal areas demonstrated sustained activity over memory delays, indicating their predominant role in working memory. This distinction, however, was not absolute. Additionally, the visual areas demonstrated different degrees of selectivity, and the prefrontal areas demonstrated different strengths of sustained activity, revealing a continuum of functional specialization, from occipital through multiple prefrontal areas, regarding each area's relative contribution to perceptual and mnemonic processing.
Article
Full-text available
The relation between imagery and perception was investigated in face priming. Two experiments are reported in which subjects either saw or imagined the faces of celebrities. They were later given a speeded perceptual test (familiarity judgement to pictures of celebrities) or a speeded imagery test (in which they were told the names of celebrities and asked to make a decision about their appearance). Seeing faces primed the perceptual test, and imaging faces primed the imagery test; however, there was no priming between seeing and imaging faces. These results show that perception and imagery can be dissociated in normal subjects. In two further experiments, we examined the effects of imaging faces on a subsequent face-naming task and on a task requiring familiarity judgements to partial faces. Both these tasks were facilitated by prior imaging of faces. These results are discussed in relation to those of McDermott & Roediger (1994), who found that imagery promoted object priming in a perceptual test involving naming partial line drawings. The implications for models of face recognition are also discussed.
Article
Full-text available
Functional magnetic resonance imaging was used to quantify the effects of changes in spatial and featural attention on brain activity in the middle temporal visual area and associated motion processing regions (hMT+) of normal human subjects. When subjects performed a discrimination task that directed their spatial attention to a peripherally presented annulus and their featural attention to the speed of points in the annulus, activity in hMT+ was maximal. If subjects were instead asked to discriminate the color of points in the annulus, the magnitude and volume of activation in hMT+ fell to 64 and 35%, respectively, of the previously observed maximum response. In another experiment, subjects were asked to direct their spatial attention away from the annulus toward the fixation point to detect a subtle change in luminance. The response magnitude and volume dropped to 40 and 9% of maximum. These experiments demonstrate that both spatial and featural attention modulate hMT+ and that their effects can work in concert to modulate cortical activity. The high degree of modulation by attention suggests that an understanding of the stimulus-driven properties of visual cortex needs to be complemented with an investigation of the effects of task-related factors on visual processing.
Article
Working memory is the process of maintaining an active representation of information so that it is available for use. In monkeys, a prefrontal cortical region important for spatial working memory lies in and around the principal sulcus, but in humans the location, and even the existence, of a region for spatial working memory is in dispute. By using functional magnetic resonance imaging in humans, an area in the superior frontal sulcus was identified that is specialized for spatial working memory. This area is located more superiorly and posteriorly in the human than in the monkey brain, which may explain why it was not recognized previously.
Article
The tools needed for analysis and visualization of three-dimensional human brain functional magnetic resonance image results are outlined, covering the processing categories of data storage, interactive vs batch mode operations, visualization, spatial normalization (Talairach coordinates, etc.), analysis of functional activation, integration of multiple datasets, and interface standards. One freely available software package is described in some detail. The features and scope that a generally useful and extensible fMRI toolset should have are contrasted with what is available today. The article ends with a discussion of how the fMRI research community can cooperate to create standards and develop software that meets the community's needs.
Article
In human long-term memory, ideas and concepts become associated in the learning process. No neuronal correlate for this cognitive function has so far been described, except that memory traces are thought to be localized in the cerebral cortex; the temporal lobe has been assigned as the site for visual experience because electric stimulation of this area results in imagery recall and lesions produce deficits in visual recognition of objects. We previously reported that in the anterior ventral temporal cortex of monkeys, individual neurons have a sustained activity that is highly selective for a few of the 100 coloured fractal patterns used in a visual working-memory task. Here I report the development of this selectivity through repeated trials involving the working memory. The few patterns for which a neuron was conjointly selective were frequently related to each other through stimulus-stimulus association imposed during training. The results indicate that the selectivity acquired by these cells represents a neuronal correlate of the associative long-term memory of pictures.
Article
Cerebral blood flow was measured using positron emission tomography (PET) in three experiments while subjects performed mental imagery or analogous perceptual tasks. In Experiment 1, the subjects either visualized letters in grids and decided whether an X mark would have fallen on each letter if it were actually in the grid, or they saw letters in grids and decided whether an X mark fell on each letter. A region identified as part of area 17 by the Talairach and Tournoux (1988) atlas, in addition to other areas involved in vision, was activated more in the mental imagery task than in the perception task. In Experiment 2, the identical stimuli were presented in imagery and baseline conditions, but subjects were asked to form images only in the imagery condition; the portion of area 17 that was more active in the imagery condition of Experiment 1 was also more activated in imagery than in the baseline condition, as was part of area 18. Subjects also were tested with degraded perceptual stimuli, which caused visual cortex to be activated to the same degree in imagery and perception. In both Experiments 1 and 2, however, imagery selectively activated the extreme anterior part of what was identified as area 17, which is inconsistent with the relatively small size of the imaged stimuli. These results, then, suggest that imagery may have activated another region just anterior to area 17. In Experiment 3, subjects were instructed to close their eyes and evaluate visual mental images of upper case letters that were formed at a small size or large size. The small mental images engendered more activation in the posterior portion of visual cortex, and the large mental images engendered more activation in anterior portions of visual cortex. This finding is strong evidence that imagery activates topographically mapped cortex. The activated regions were also consistent with their being localized in area 17. Finally, additional results were consistent with the existence of two types of imagery, one that rests on allocating attention to form a pattern and one that rests on activating stored visual memories.
Article
Cortical areas associated with selective attention to the color and identity of faces were located using functional magnetic resonance imaging (fMRI). Six subjects performed tasks which required selective attention to face identity or color similarity using the same color-washed face stimuli. Performance of the color attention task but not the face attention task was associated with a region of activity in the collateral sulcus and nearby regions of the lingual and fusiform gyri. Performance of both tasks was associated with a region of activity in ventral occipitotemporal cortex that was lateral to the color responsive area and had a greater spatial extent. These fMRI results converge with results obtained from PET and ERP studies to demonstrate similar anatomical locations of functional areas for face and color processing across studies. Hum. Brain Mapping 5:293–297, 1997.
Article
This paper presents a general approach to the analysis of functional MRI time-series from one or more subjects. The approach is predicated on an extension of the general linear model that allows for correlations between error terms due to physiological noise or correlations that ensue after temporal smoothing. This extension uses the effective degrees of freedom associated with the error term. The effective degrees of freedom are a simple function of the number of scans and the temporal autocorrelation function. A specific form for the latter can be assumed if the data are smoothed, in time, to accentuate hemodynamic responses with a neural basis. This assumption leads to an expedient implementation of a flexible statistical framework. The importance of this small extension is that, in contradistinction to our previous approach, any parametric statistical analysis can be implemented. We demonstrate this point using a multiple regression analysis that tests for effects of interest (activations due to word generation), while taking explicit account of some obvious confounds.
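The effective-degrees-of-freedom idea in this abstract can be made concrete with a small numerical sketch. Everything below (the toy design matrix, the AR(1)-style error covariance, the value of rho) is an illustrative assumption, not the paper's implementation; it only shows how a temporal autocorrelation matrix V shrinks the degrees of freedom available to the error term.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120                                        # number of scans
# Toy design matrix: a constant plus one task regressor (assumed, for
# illustration only -- not the word-generation design from the paper)
X = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * np.arange(n) / 30)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)

# Ordinary least-squares estimates of the effects of interest
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Assumed AR(1)-style temporal autocorrelation of the error term
rho = 0.3
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
V = rho ** lags

# Residual-forming matrix and Satterthwaite-style effective df:
# nu = trace(RV)^2 / trace(RVRV), which cannot exceed the nominal n - p
R = np.eye(n) - X @ np.linalg.pinv(X)
RV = R @ V
df_eff = np.trace(RV) ** 2 / np.trace(RV @ RV)
```

With rho = 0 the expression reduces to the nominal n - p; positive autocorrelation lowers df_eff, making subsequent parametric tests appropriately more conservative.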
Article
Recognition of facial expressions is critical to our appreciation of the social and physical environment, with separate emotions having distinct facial expressions. Perception of fearful facial expressions has been extensively studied, appearing to depend upon the amygdala. Disgust (literally 'bad taste') is another important emotion, with a distinct evolutionary history, and is conveyed by a characteristic facial expression. We have used functional magnetic resonance imaging (fMRI) to examine the neural substrate for perceiving disgust expressions. Normal volunteers were presented with faces showing mild or strong disgust or fear. Cerebral activation in response to these stimuli was contrasted with that for neutral faces. Results for fear generally confirmed previous positron emission tomography findings of amygdala involvement. Both strong and mild expressions of disgust activated anterior insular cortex but not the amygdala; strong disgust also activated structures linked to a limbic cortico-striatal-thalamic circuit. The anterior insula is known to be involved in responses to offensive tastes. The neural response to facial expressions of disgust in others is thus closely related to appraisal of distasteful stimuli.
Article
Studies of brain-damaged patients have revealed the existence of a selective impairment of face processing, prosopagnosia, resulting from lesions at different loci in the occipital and temporal lobes. The results of such studies have led to the identification of several cortical areas underlying the processing of faces, but it remains unclear what functional aspects of face processing are served by these areas and whether they are uniquely devoted to the processing of faces. The present study addresses these questions in a positron emission tomography (PET) study of regional cerebral blood flow in normal adults, using the ¹⁵O water bolus technique. The subjects participated in six tasks (with gratings, faces and objects), and the resulting level of cerebral activation was mapped on images of the subjects' cerebral structures obtained through magnetic resonance and was compared between tasks using the subtraction method. Compared with a fixation condition, regional cerebral blood flow (rCBF) changes were found in the striate and extrastriate cortex when subjects had to decide on the orientation of sine-wave gratings. A face-gender categorization resulted in activation changes in the right extrastriate cortex, and a face-identity condition produced additional activation of the fusiform gyrus and anterior temporal cortex of both hemispheres, and of the right parahippocampal gyrus and adjacent areas. Cerebral activation during an object-recognition task occurred essentially in the left occipito-temporal cortex and did not involve the right hemisphere regions specifically activated during the face-identity task. The results provide the first empirical evidence from normal subjects regarding the crucial role of the ventro-medial region of the right hemisphere in face recognition, and they offer new information about the dissociation between face and object processing.
Article
Positron emission tomographic (PET) studies of human attention have begun to dissect isolable components of this complex higher brain function, including a midline attentional system in a region of the anterior cingulate cortex. The right hemisphere may play a special part in human attention; neglect, an important phenomenon associated with damage to attentional systems, is more severe, extensive and long-lasting after lesions to the right hemisphere. Here we use PET measurements of brain blood flow in healthy subjects to identify changes in regional brain activity during simple visual and somatosensory tasks of sustained attention or vigilance. We find localized increases in blood flow in the prefrontal and superior parietal cortex primarily in the right hemisphere, regardless of the modality or laterality of sensory input. The anterior cingulate was not activated during either task. These data localize the vigilance aspects of normal human attention to sensory stimuli, thereby clarifying the biology underlying asymmetries of attention to such stimuli that have been reported in clinical lesions.
Article
Positron emission tomography (PET) was used to measure changes in regional cerebral blood flow of normal subjects, while they were discriminating different attributes (shape, color, and velocity) of the same set of visual stimuli. Psychophysical evidence indicated that the sensitivity for discriminating subtle stimulus changes was higher when subjects focused attention on one attribute than when they divided attention among several attributes. Correspondingly, attention enhanced the activity of different regions of extrastriate visual cortex that appear to be specialized for processing information related to the selected attribute.
Article
The distribution of regional cerebral blood flow (rCBF) was assessed by single photon emission computerized tomography (SPECT) in subjects during a resting state and during imagining either colours or faces or a route on a map. Twelve out of 30 subjects reported the spontaneous occurrence of mental visual images during the resting state. In these subjects flow in both orbitofrontal regions was higher than in those subjects who had not experienced spontaneous imagery. Voluntary imagery led to an increase of regional flow indices in basal temporal regions of both hemispheres and to a rightwards shift of global hemispheric asymmetry. The local changes were distinctly more marked with faces than with either of the other two stimuli. Imagining faces was also the only condition that led to an increase of activity in the left inferior occipital region which has been suggested by previous studies as being a crucial area for visual imagery. It is concluded that the observed differences of rCBF patterns between imagery conditions are related to the amount of information conveyed by the mental image. In contrast to the results of a companion study on DC-shifts accompanying imagery there was no effect of the visual versus spatial character of the images.
Article
A neural model is presented, based largely on evidence from studies in monkeys, postulating that coded representations of stimuli are stored in the higher-order sensory (i.e. association) areas of the cortex whenever stimulus activation of these areas also triggers a cortico-limbo-thalamo-cortical circuit. This circuit, which could act as either an imprinting or rehearsal mechanism, may actually consist of two parallel circuits, one involving the amygdala and the dorsomedial nucleus of the thalamus, and the other the hippocampus and the anterior nuclei. The stimulus representation stored in cortex by action of these circuits is seen as mediating three different memory processes: recognition, which occurs when the stored representation is reactivated via the original sensory pathway; recall, when it is reactivated via any other pathway; and association, when it activates other stored representations (sensory, affective, spatial, motor) via the outputs of the higher-order sensory areas to the relevant structures.
Article
Neuropsychological studies on visuospatial dysfunction are reviewed and a model is described that characterizes hemispheric specialization of this function. It is proposed that the right hemisphere is dominant for configural processing and the left hemisphere for detail processing. Past efforts of visuospatial rehabilitation, which have focused primarily on one type of impairment, "left-neglect", are discussed. Future directions for rehabilitation of visuospatial dysfunction associated with both right- and left-hemisphere pathology are suggested.
Article
Unilateral neglect reflects a disturbance in the spatial distribution of directed attention. A review of unilateral neglect syndromes in monkeys and humans suggests that four cerebral regions provide an integrated network for the modulation of directed attention within extrapersonal space. Each component region has a unique functional role that reflects its profile of anatomical connectivity, and each gives rise to a different clinical type of unilateral neglect when damaged. A posterior parietal component provides an internal sensory map and perhaps also a mechanism for modifying the extent of synaptic space devoted to specific portions of the external world; a limbic component in the cingulate gyrus regulates the spatial distribution of motivational valence; a frontal component coordinates the motor programs for exploration, scanning, reaching, and fixating; and a reticular component provides the underlying level of arousal and vigilance. This hypothetical network requires at least three complementary and interacting representations of extrapersonal space: a sensory representation in posterior parietal cortex, a schema for distributing exploratory movements in frontal cortex, and a motivational map in the cingulate cortex. Lesions in only one component of this network yield partial unilateral neglect syndromes, while those that encompass all the components result in profound deficits that transcend the mass effect of the larger lesion. This network approach to the localization of complex functions offers an alternative to more extreme approaches, some of which stress an exclusive concentration of function within individual centers in the brain and others which advocate a more uniform (equipotential or holistic) distribution.
Article
We measured the regional cerebral blood flow (rCBF) in 11 healthy volunteers with PET (positron emission tomography). The main purpose was to map the areas of the human brain that changed rCBF during (1) the storage, (2) retrieval from long-term memory, and (3) recognition of complex visual geometrical patterns. A control measurement was done with subjects at rest. Perception and learning of the patterns increased rCBF in V1 and 17 cortical fields located in the cuneus, the lingual, fusiform, inferior temporal, occipital, and angular gyri, the precuneus, and the posterior part of superior parietal lobules. In addition, rCBF increased in the anterior hippocampus, anterior cingulate gyrus, and in several fields in the prefrontal cortex. Recognition of the patterns increased rCBF in 18 identically located fields overlapping those activated in learning. In addition, recognition provoked differentially localized increases in the pulvinar, posterior hippocampus, and prefrontal cortex. Learning and recognition of the patterns thus activated identical visual regions, but different extravisual regions. A surprising finding was that the hippocampus was also active in recognition. Recall of the patterns from long-term memory was associated with rCBF increases in yet different fields in the prefrontal cortex, and the anterior cingulate cortex. In addition, the posterior inferior temporal lobe, the precuneus, the angular gyrus, and the posterior superior parietal lobule were activated, but not any spot within the occipital cortex. Activation of V1 or immediate visual association areas is not a prerequisite for visual imagery for the patterns. The only four fields activated in storage, recall, and recognition were those in the posterior inferior temporal lobe, the precuneus, the angular gyrus, and the posterior superior parietal lobule. These might be the storage sites for such visual patterns. If this is true, storage, retrieval, and recognition of complex visual patterns are mediated by higher-level visual areas. Thus, visual learning and recognition of the same patterns make use of identical visual areas, whereas retrieval of this material from the storage sites activates only a subset of the visual areas. The extravisual networks mediating storage, retrieval, and recognition differ, indicating that the ways by which the brain accesses the storage sites are different.
Article
A package of computer programs for analysis and visualization of three-dimensional human brain functional magnetic resonance imaging (FMRI) results is described. The software can color overlay neural activation maps onto higher resolution anatomical scans. Slices in each cardinal plane can be viewed simultaneously. Manual placement of markers on anatomical landmarks allows transformation of anatomical and functional scans into stereotaxic (Talairach-Tournoux) coordinates. The techniques for automatically generating transformed functional data sets from manually labeled anatomical data sets are described. Facilities are provided for several types of statistical analyses of multiple 3D functional data sets. The programs are written in ANSI C and Motif 1.2 to run on Unix workstations.
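The landmark-driven normalization step described above can be sketched as fitting an affine map from manually placed anatomical markers to their stereotaxic targets. All coordinates below are hypothetical, and the package's actual transform is piecewise linear across the Talairach sub-boxes rather than a single affine; this is only a minimal sketch of the idea.

```python
import numpy as np

# Hypothetical native-space landmark coordinates (mm) and their
# Talairach-Tournoux targets (illustrative values, not real markers)
native = np.array([[ 2.0,   1.0,  0.5],   # anterior commissure
                   [ 1.5, -24.0,  1.0],   # posterior commissure
                   [ 2.0,   1.0, 70.0],   # superior point
                   [60.0,   1.0,  0.5]])  # lateral extreme
target = np.array([[ 0.0,   0.0,  0.0],
                   [ 0.0, -23.0,  0.0],
                   [ 0.0,   0.0, 74.0],
                   [68.0,   0.0,  0.0]])

# Solve for a 4x3 matrix M such that [x y z 1] @ M gives Talairach xyz
hom = np.column_stack([native, np.ones(len(native))])
M, *_ = np.linalg.lstsq(hom, target, rcond=None)

def to_talairach(xyz):
    """Apply the fitted affine to one native-space coordinate."""
    return np.append(xyz, 1.0) @ M
```

Applying to_talairach to each landmark reproduces its stereotaxic target; the same map then carries every voxel coordinate of the anatomical and functional scans into the common space.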
Article
We measured amygdala activity in human volunteers during rapid visual presentations of fearful, happy, and neutral faces using functional magnetic resonance imaging (fMRI). The first experiment involved a fixed order of conditions both within and across runs, while the second one used a fully counterbalanced order in addition to a low level baseline of simple visual stimuli. In both experiments, the amygdala was preferentially activated in response to fearful versus neutral faces. In the counterbalanced experiment, the amygdala also responded preferentially to happy versus neutral faces, suggesting a possible generalized response to emotionally valenced stimuli. Rapid habituation effects were prominent in both experiments. Thus, the human amygdala responds preferentially to emotionally valenced faces and rapidly habituates to them.
Article
Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several new tests of face specificity were run. In each of five subjects tested, the predefined candidate "face area" also responded significantly more strongly to passive viewing of (1) intact than scrambled two-tone faces, (2) full front-view face photos than front-view photos of houses, and (in a different set of five subjects) (3) three-quarter-view face photos (with hair concealed) than photos of human hands; it also responded more strongly during (4) a consecutive matching task performed on three-quarter-view faces versus hands. Our technique of running multiple tests applied to the same region defined functionally within individual subjects provides a solution to two common problems in functional imaging: (1) the requirement to correct for multiple statistical comparisons and (2) the inevitable ambiguity in the interpretation of any study in which only two or three conditions are compared. Our data allow us to reject alternative accounts of the function of the fusiform face area (area "FF") that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
Article
The neural substrates of mental image generation were investigated with functional MRI. Subjects listened to words under two different instructional conditions: to generate visual mental images of the words' referents, or to simply listen to each word and wait for the next word. Analyses were performed which directly compared the regional brain activity during each condition, with the goal of discovering whether mental image generation engages modality-specific visual areas, whether it engages primary visual cortex, and whether it recruits the left hemisphere to a greater extent than the right. Results revealed that visual association cortex, and not primary visual cortex, was engaged during the mental image generation condition. Left inferior temporal lobe (Brodmann's area 37) was the most reliably and robustly activated area across subjects, with activity extending superiorly into occipital association cortex (area 19). The results of this experiment support the hypothesis that visual mental imagery is a function of visual association cortex, and that image generation is asymmetrically localized to the left.
Article
Human lesion data indicate that an intact left hippocampal formation is necessary for auditory-verbal memory. By contrast, functional neuroimaging has highlighted the role of the left prefrontal cortex but has generally failed to reveal the predicted left hippocampal activation. Here we describe an experiment involving learning category-exemplar word pairs (such as 'dog...boxer') in which we manipulate the novelty of either individual elements or the entire category-exemplar pairing. We demonstrate both left medial temporal (including hippocampal) and left prefrontal activation and show that these activations are dissociable with respect to encoding demands. Left prefrontal activation is maximal with a change in category-exemplar pairings, whereas medial temporal activation is sensitive to the overall degree of novelty. Thus, left prefrontal cortex is sensitive to processes required to establish meaningful connections between a category and its exemplar, a process maximized when a previously formed connection is changed. Conversely, the left medial temporal activation reflects processes that register the overall novelty of the presented material. Our results provide striking evidence of functionally dissociable roles for the prefrontal cortex and hippocampal formation during learning of auditory-verbal material.
Article
Using a model of the functional MRI (fMRI) impulse response based on published data, we have demonstrated that the form of the fMRI response to stimuli of freely varied timing can be modeled well by convolution of the impulse response with the behavioral stimulus. The amplitudes of the responses as a function of parametrically varied behavioral conditions are fitted well using a piecewise linear approximation. Use of the combined model, in conjunction with correlation analysis, results in an increase in sensitivity for the MRI study. This approach, based on the well-established methods of linear systems analysis, also allows a quantitative comparison of the response amplitudes across subjects to a broad range of behavioral conditions. Fit parameters, derived from the amplitude data, are relatively insensitive to a variety of MRI-related artifacts and yield results that are compared readily across subjects.
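The convolution model in this abstract can be sketched in a few lines of Python. The difference-of-gammas impulse response and its parameters below are common illustrative choices, not the fitted response reported in the paper.

```python
import numpy as np
from math import gamma as gamma_fn

def hrf(t, a1=6.0, a2=12.0, c=0.35):
    """Difference-of-gammas hemodynamic impulse response
    (assumed shape parameters, for illustration)."""
    g = lambda x, a: x ** (a - 1) * np.exp(-x) / gamma_fn(a)
    return g(t, a1) - c * g(t, a2)

h = hrf(np.arange(0.0, 30.0))                          # 1 s sampling
stim = np.tile(np.r_[np.ones(10), np.zeros(20)], 4)    # 10 s on / 20 s off

# Predicted response = convolution of the impulse response with the
# behavioral stimulus, truncated to the length of the run
pred = np.convolve(stim, h)[: len(stim)]

# Correlating measured data with this prediction, rather than with the
# raw stimulus boxcar, is what yields the increase in sensitivity
data = pred + 0.1 * np.random.default_rng(0).standard_normal(len(pred))
r = np.corrcoef(pred, data)[0, 1]
```

Fitting the per-condition response amplitudes with a piecewise linear function, as the paper describes, would then be a second regression carried out on the amplitudes estimated from this model.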
Article
We examined brain activity associated with visual imagery at episodic memory retrieval using positron emission tomography (PET). Twelve measurements of regional cerebral blood flow (rCBF) were taken in six right-handed, healthy, male volunteers. During six measurements, they were engaged in the cued recall of imageable verbal paired associates. During the other six measurements, they recalled nonimageable paired associates. Memory performance was equalized across all word lists. The subjects' use of an increased degree of visual imagery during the recall of imageable paired associates was confirmed using subjective rating scales after each scan. Memory-related imagery was associated with significant activation of a medial parietal area, the precuneus. This finding confirms a previously stated hypothesis about the precuneus and provides strong evidence that it is a key part of the neural substrate of visual imagery occurring in conscious memory recall.
Article
The functional anatomy of the interactions between spoken language and visual mental imagery was investigated with PET in eight normal volunteers during a series of three conditions: listening to concrete word definitions and generating their mental images (CONC), listening to abstract word definitions (ABST) and silent REST. The CONC task specifically elicited activations of the bilateral inferior temporal gyri, of the left premotor and left prefrontal regions, while activations in the bilateral superior temporal gyri were smaller than during the ABST task, during which an additional activation of the anterior part of the right middle temporal gyrus was observed. No activation of the occipital areas was observed during the CONC task when compared either to the REST or to the ABST task. The present study demonstrates that a network including part of the bilateral ventral stream and the frontal working memory areas is recruited when mental imagery of concrete words is performed on the basis of continuous spoken language.