Figure 1 - uploaded by Guillaume Thierry
Verbal and nonverbal stimulus types used in the auditory and visual components of the experiment. (A) In the auditory version, semantic decisions on speech samples (speech) were contrasted to equivalent decisions on environmental sound samples (sounds). The auditory baseline entailed meaningless sequences generated for each meaningful sequence by scrambling words (speech control) or sounds (sound control). (B) In the visual version, sequences of written words (text) were contrasted to mute video clips (videos). The visual baseline entailed pseudorandom strings of Xs and Ts (text control) and the mute videos passed through distortion filters that rendered them meaningless (video control).

Source publication
Article
Full-text available
Functional neuroimaging has highlighted a left-hemisphere conceptual system shared by verbal and nonverbal processing despite neuropsychological evidence that the ability to recognize verbal and nonverbal stimuli can doubly dissociate in patients with left- and right-hemisphere lesions, respectively. Previous attempts to control for perceptual diff...

Contexts in source publication

Context 1
... More importantly, this functional dissociation was not observed for words and sounds during repetition and naming (Giraud & Price, 2001), as indicated by a significant interaction between task and stimulus type when data from both studies were combined. Although conceptual processing is likely to take place in repetition and naming tasks, the nature of conceptual operations is uncontrolled, whereas in the Thierry et al. (2003) study, the conceptual requirements of the tasks were equated for spoken words and environmental sounds. We therefore concluded that differences in the Thierry et al. study did not reflect perceptual differences between verbal and nonverbal sources and instead proposed that they arose at the level of accessing meaning (i.e., at a conceptual level). In the current study, we conducted a visual experiment to establish verbal/nonverbal dissociations that occur irrespective of sensory modality. By combining data from this new experiment with those from the auditory version previously reported (Thierry et al., 2003), we were able to determine whether the verbal versus nonverbal dissociation was seen in both stimulus modalities or in the auditory modality only. In other words, the aim was to dissociate amodal from modality-specific verbal/nonverbal dissociations.

In the auditory version of the experiment, 12 participants listened to sequences of environmental sounds and spoken words. In the visual version of the experiment, another 12 participants viewed mute videos and sequences of written words (Figure 1). All four sets of stimuli were matched for meaning and interpretability. On the basis of the neuropsychological evidence, we predicted that (a) verbal processing (common to spoken words and text displays) as compared to nonverbal processing (common to environmental sounds and mute videos) would yield greater activation in left superior temporal regions (Thierry et al., 2003; Scott et al., 2000) and (b) nonverbal as compared to verbal processing would yield greater activation in the right hemisphere (Thierry et al., 2003; Coltheart, 1980).

Twenty-four native speakers of English (mean age = 26.3 ± 8.4 years, all men) gave written consent to participate in 12 positron emission tomography (PET) scans (Siemens CTI III camera) involving intravenous injection of water labeled with ¹⁵O. The dose received was 9 mCi per measurement. The study was approved by the joint ethics committee of the Institute of Neurology (University College London [UCL]) and the National Hospital for Neurology and Neurosurgery (UCLH NHS Trust) and by the UK Administration of Radioactive Substances Advisory Committee (ARSAC). PET experiments on healthy individuals cannot include women of childbearing age. All subjects were strongly right-handed, wrote with their right hand, kicked with their right foot, and had no family history of left-handedness.

Twelve participants were presented with the auditory stimuli (Experiment 1) and the other 12 were presented with the visual stimuli (Experiment 2). In both experiments, participants were exposed to verbal stimuli (speech or text) and nonverbal stimuli (environmental sounds or mute video clips) and had to perform two different conceptual tasks (categorization or sequence interpretation) on each stimulus type; see below for details. Activation conditions therefore conformed to a fully balanced 2 (between subjects) × 2 × 2 (within subject) design with two stimulus modalities (auditory and visual), two types of stimuli (verbal and nonverbal), and two tasks (categorization and sequence interpretation).
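To make the factorial structure concrete, here is a minimal sketch that enumerates the cells of the 2 × 2 × 2 design and the per-participant scan counts described above. The labels and the code are illustrative only (my own, not taken from the authors' materials).

```python
from itertools import product

# Between-subjects factor: stimulus modality (Experiment 1 vs. Experiment 2).
# Within-subject factors: stimulus type and conceptual task.
modalities = ["auditory", "visual"]
stimulus_types = ["verbal", "nonverbal"]      # speech/text vs. sounds/videos
tasks = ["categorization", "sequence_interpretation"]

activation_cells = [
    {"modality": m, "stimulus": s, "task": t}
    for m, s, t in product(modalities, stimulus_types, tasks)
]

for m in modalities:
    cells = [c for c in activation_cells if c["modality"] == m]
    # Per the excerpt: each of the 4 within-subject cells is scanned twice
    # (8 activation scans), plus 4 baseline scans of scrambled stimuli.
    print(f"{m}: {len(cells)} activation cells -> "
          f"{2 * len(cells)} activation scans + 4 baseline scans")
```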
The activation stimuli were verbal or nonverbal sequences lasting 17 sec on average (range 15–20 sec) and ending with a distinctive signal (a 200-msec beep in the auditory conditions and a white disk flashed for 200 msec at the center of the display in the visual conditions). Auditory sequences were made of digitized spoken words (e.g., drink story: “pulling cork out ... popping noise ... pouring liquid over ice cubes ... someone sipping ... someone sighing”) and digitized environmental sounds. The phrases used provided minimal syntactic information (determiners were systematically excluded) and excluded superfluous semantic information (e.g., adjectives) to minimize differences between speech and sound conditions. Environmental sounds are considered a good nonverbal counterpart of human speech because they are auditory, they have a complex spectral structure, their duration can be adjusted to match word duration, and common sounds are easily identifiable (see Thierry et al., 2003). Visual sequences consisted of written words and mute video clips. Written word sequences were the direct transcription of the auditory phrases. Video clips were preferred to static images to avoid ambiguity in interpreting actions and to mimic the temporal structure of the spoken word/environmental sound sequences used in the auditory tasks. Sequences were matched within and across modality as closely as possible in terms of meaning, number of events, rhythm, and duration (Figure 1). Fifty percent of sequences included a stimulus that referred to an animal and 50% were logically ordered (events occurring in the most expected order).

An unintelligible control sequence was generated for each meaningful one: Sound files (individual spoken words and sounds) were scrambled using a random splicing procedure (Thierry et al., 2003); letters of the written words were replaced by pseudorandom strings of Xs and Ts; and videos were distorted using combined polar-coordinate and twirling transformations (Figure 1). None of these baseline conditions contained any recognizable stimulus. Baselines were included to control for low-level perceptual differences between verbal and nonverbal contexts. Eight of the 14 original sequences, those yielding comparable behavioral performance, were selected on the basis of a pilot behavioral screening involving 15 participants in the auditory version and 12 in the visual version (data not reported).

Participants performed two tasks while either listening to words and sounds or viewing text and images. In a categorization task, they indicated with a key press whether a reference to an animal was present within each sequence (animal/no animal). This task was selected because conceptual categorization is equally achievable in a verbal and a nonverbal context and is easy. In a sequence interpretation task, participants indicated whether each sequence was logically ordered (ordered/disordered). For instance, when the event “someone sighing” was presented before “someone sipping” at the end of the drink story, the trial required a “disordered” judgment. This task was selected because it is equally achievable in a verbal and a nonverbal context but is ambiguous and difficult. It was therefore more challenging and placed additional demands on attention and working memory. Any interaction between condition (verbal vs. nonverbal) and task (categorization vs. sequence interpretation) would indicate a modulation by difficulty and/or verbalization strategies.
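As a concrete illustration of the “text control” baseline described above (pseudorandom strings of Xs and Ts matched to the written words), here is a minimal sketch; the function name and its behaviour are my own guess at the general idea, not the authors' stimulus-generation code.

```python
import random

def pseudotext_control(phrase, seed=None):
    """Replace every letter with a pseudorandom X or T, preserving word
    lengths, spacing and punctuation, roughly mirroring the 'text control'
    baseline described in the excerpt above."""
    rng = random.Random(seed)
    return "".join(rng.choice("XT") if ch.isalpha() else ch for ch in phrase)

# Prints a meaningless string matched to the original phrase in length and layout.
print(pseudotext_control("pulling cork out"))
```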
In the baseline conditions (scrambled speech, scrambled sounds, pseudotext displays, or distorted videos), participants pressed a key at the end of the meaningless sequence to control for finger movement. PET scanning involved eight activation scans (verbal categorization, nonverbal categorization, verbal sequence interpretation, and nonverbal sequence interpretation) and four baseline scans (meaningless sequences derived by scrambling each verbal and nonverbal stimulus). Each participant heard/viewed a total of 16 different sequences (4 per scan) for both the verbal and the nonverbal conditions. The order of scans was counterbalanced over participants; the order of sequences within scans was pseudorandomized; and the order of stimuli within each sequence was never repeated. In all conditions, participants were instructed to respond as quickly and accurately as possible with a mouse button press after seeing the signal ending a sequence. Finger responses were alternated within subjects across blocks, and behavioral data were recorded online.

Realignment of images, normalization, and statistics were performed with SPM99 (www.fil.ion.ucl.ac.uk/spm; Friston et al., 1995). Images were spatially smoothed with a 6-mm Gaussian filter. The statistical model partitioned the two groups of 12 subjects with six conditions per group. Summing over conceptual task (categorization and sequence interpretation), we computed the following ...
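The excerpt breaks off before the contrasts themselves are listed, so the following is only a plausible sketch of how a baseline-corrected, task-summed contrast could be coded for one group of six conditions. The condition ordering and weights are assumptions of mine, not the contrast vectors reported in the paper.

```python
import numpy as np

# Hypothetical condition order for one group (not taken from the paper):
conds = ["verbal_cat", "verbal_seq", "nonverbal_cat", "nonverbal_seq",
         "verbal_base", "nonverbal_base"]

# (Verbal - verbal baseline) - (Nonverbal - nonverbal baseline), summed over task.
verbal_vs_nonverbal = np.array([1, 1, -1, -1, -2, 2]) / 2.0

# Stacking the same contrast for both groups (auditory, visual) would test the
# common effect; giving the groups opposite signs would test the
# modality-by-(verbal vs. nonverbal) interaction.
common = np.concatenate([verbal_vs_nonverbal, verbal_vs_nonverbal])
interaction = np.concatenate([verbal_vs_nonverbal, -verbal_vs_nonverbal])
print(conds, common, interaction, sep="\n")
```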
Context 2
... The evidence is primarily drawn from the visual modality because ad hoc neurological syndromes (e.g., transcortical sensory aphasia [Lichtheim, 1885] and semantic refractory access dysphasia [see Warrington & Crutch, 2004]) are characterized by severe auditory comprehension deficits incompatible with auditory verbal testing. However, even in the visual modality, the level (perceptual or conceptual) at which the dissociations occur is often difficult to determine on the basis of neuropsychological testing because disruption at the perceptual level will have conceptual repercussions. Therefore, the multiplicity of conceptual stores remains highly debated (e.g., Lambon Ralph, Graham, Patterson, & Hodges, 1999; Caramazza, Hillis, Rapp, & Romani, 1990; Riddoch, Humphreys, Coltheart, & Funnell, 1988; for a review, see Saffran & Schwartz, 1994).

Functional imaging evidence for a verbal/nonverbal dissociation is implied when the results from verbal and nonverbal activation studies are compared in either the auditory modality (von Kriegstein & Giraud, 2004; von Kriegstein, Eger, Kleinschmidt, & Giraud, 2003; Zatorre, Belin, & Penhune, 2002; Zatorre & Belin, 2001; Belin, Zatorre, Lafaille, Ahad, & Pike, 2000) or the visual modality (Cohen, Lehericy, et al., 2002; Trojano et al., 2002; Cohen, Dehaene, et al., 2000; Kanwisher, McDermott, & Chun, 1997; Smith, Jonides, & Koeppe, 1996; Sergent, Ohta, & MacDonald, 1992). However, no clear double dissociation between verbal and nonverbal conceptual processing has yet been demonstrated when perceptual differences are controlled (e.g., by subtracting activations elicited by meaningless, scrambled stimuli). For example, the study by Vandenberghe, Price, et al. (1996) controlled for perceptual confounds between words and pictures by comparing conceptual tasks (e.g., Which of these objects is the biggest in real life?) to perceptual tasks (e.g., Which of these objects is the biggest on the screen?) on one type of stimulus relative to the other (i.e., by characterizing a stimulus-by-task interaction). The difficulty with this approach is that it excludes verbal or nonverbal conceptual processing that may have occurred irrespective of task. Thus, to fully characterize differences between verbal and nonverbal processing, we propose that three conditions must be met: (i) verbal and nonverbal stimuli must be compared directly to identify differences at all levels of processing; (ii) tasks performed on verbal and nonverbal stimuli must be matched in structure, cognitive requirements, and difficulty; and (iii) modality- and task-independent verbal and nonverbal differences must be established.

In a previous neuroimaging study involving normal volunteers (Thierry et al., 2003), we directly compared processing of spoken words (e.g., “cow mooing”) and environmental sounds (e.g., the sound of a cow mooing) matched for meaning using semantic categorization and sequence interpretation tasks. We controlled for some of the low-level perceptual differences between verbal and nonverbal stimuli by using baseline conditions that involved unintelligible noise bursts created by scrambling each speech and environmental sound stimulus. A shared, left-lateralized conceptual system was observed for both types of stimuli but, in addition, left anterior superior temporal activation was greater for spoken words than sounds and, conversely, right posterior superior temporal activation was greater for sounds.
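Written out as a worked comparison, the baseline-corrected logic described here (using generic symbols of my own rather than the paper's notation) amounts to a double subtraction:

```latex
\[
\Delta \;=\; \bigl(A_{\text{verbal}} - A_{\text{verbal control}}\bigr)
        \;-\; \bigl(A_{\text{nonverbal}} - A_{\text{nonverbal control}}\bigr)
\]
```

where A denotes regional activation. On this reading, regions where Δ is reliably positive are candidates for verbal-specific conceptual processing, regions where it is reliably negative for nonverbal-specific processing, and condition (iii) above asks that the sign of Δ agree across the auditory and visual groups.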

Similar publications

Article
Full-text available
A recent perceptual imaging experiment uses a rare 2×2 design to dissociate selective visual attention from visual consciousness. Its conclusions support the hypothesis that visual consciousness does not arise from neurons in primary visual cortex and force a reinterpretation of numerous prior studies.
Article
Full-text available
Recent functional neuroimaging studies have shown that reflecting on representations of the present self versus temporally distant selves is associated with higher activity in the medial prefrontal cortex (MPFC). In the current fMRI study, we investigated whether this effect of temporal perspective is symmetrical between the past and future. The ma...
Article
Full-text available
A central aim in cognitive neuroscience is to explain how neural activity gives rise to perception and behavior; the causal link of paramount interest is thus from brain to behavior. Functional neuroimaging studies, however, tend to provide information in the opposite direction by informing us how manipulation of behavior may affect neural activity...
Article
Full-text available
Humans and monkeys can learn to classify perceptual information in a statistically optimal fashion if the functional groupings remain stable over many hundreds of trials, but little is known about categorization when the environment changes rapidly. Here, we used a combination of computational modeling and functional neuroimaging to understand how...
Article
Full-text available
Functional neuroimaging studies in which the cortical organization for semantic knowledge has been addressed have revealed interesting dissociations in the recognition of different object categories, such as faces, natural objects, and manufactured objects. The present paper critically reviews these studies and performs a meta-analysis of stereotac...

Citations

... Previous behavioral studies and lesion-symptom mapping studies indicated that left hemisphere injuries impaired verbal knowledge, while right hemisphere damage affected pictorial memory (Grossman and Wilson 1987; Gainotti et al. 1994; Acres et al. 2009; Butler et al. 2009). Neuroimaging investigations further support this view, showing increased involvement of left temporal regions in processing verbal stimuli and right temporal cortex in understanding environmental sounds and images (Thierry et al. 2003; Thierry and Price 2006; Hocking and Price 2009). ...
Article
Full-text available
Semantic knowledge includes understanding of objects and their features and also understanding of the characteristics of events. The hub-and-spoke theory holds that these conceptual representations rely on multiple information sources that are integrated in a central hub in the ventral anterior temporal lobes. The dual-hub theory expands this framework with the claim that the ventral anterior temporal lobe hub is specialized for object representation, while a second hub in angular gyrus is specialized for event representation. To test these ideas, we used representational similarity analysis, univariate and psychophysiological interaction analyses of fMRI data collected while participants processed object and event concepts (e.g. “an apple,” “a wedding”) presented as images and written words. Representational similarity analysis showed that angular gyrus encoded event concept similarity more than object similarity, although the left angular gyrus also encoded object similarity. Bilateral ventral anterior temporal lobes encoded both object and event concept structure, and left ventral anterior temporal lobe exhibited stronger coding for events. Psychophysiological interaction analysis revealed greater connectivity between left ventral anterior temporal lobe and right pMTG, and between right angular gyrus and bilateral ITG and middle occipital gyrus, for event concepts compared to object concepts. These findings support the specialization of angular gyrus for event semantics, though with some involvement in object coding, but do not support ventral anterior temporal lobe specialization for object concepts.
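For readers unfamiliar with the representational similarity analysis (RSA) used in this citing study, here is a generic, self-contained sketch of the core computation: a neural representational dissimilarity matrix (RDM) is correlated with a model RDM. The data below are random placeholders, and none of this reflects the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Minimal illustration of the RSA logic mentioned above (not the authors' code).
rng = np.random.default_rng(0)
patterns = rng.standard_normal((40, 200))        # n_concepts x n_voxels, placeholder ROI data
model_rdm = pdist(rng.standard_normal((40, 5)))  # placeholder model distances

neural_rdm = pdist(patterns, metric="correlation")  # 1 - Pearson r between concept patterns
rho, p = spearmanr(neural_rdm, model_rdm)           # second-order (model-to-brain) similarity
print(f"model fit: Spearman rho = {rho:.3f} (p = {p:.3g})")
```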
... 1994; Grossman & Wilson, 1987). Neuroimaging investigations further support this view, showing increased involvement of left temporal regions in processing verbal stimuli and right temporal cortex in understanding environmental sounds and images (Hocking & Price, 2009; Thierry et al., 2003; Thierry & Price, 2006). ...
Preprint
Semantic knowledge includes understanding of objects and their features and also understanding of the characteristics of events. The hub-and-spoke theory holds that these conceptual representations rely on multiple information sources that are integrated in a central hub in the ventral anterior temporal lobes (vATL). Dual-hub theory expands this framework with the claim that the vATL hub is specialized for object representation, while a second hub in angular gyrus (AG) is specialized for event representation. To test these ideas, we used RSA, univariate and PPI analyses of fMRI data collected while participants processed object and event concepts (e.g., an apple, a wedding) presented as images and written words. RSA showed that AG encoded event concept similarity more than object similarity, although the left AG also encoded object similarity. Bilateral vATLs encoded both object and event concept structure, and left vATL exhibited stronger coding for events. PPI analysis revealed greater connectivity between left vATL and right pMTG, and between right AG and bilateral ITG and middle occipital gyrus, for event concepts compared to object concepts. These findings support the specialization of AG for event semantics, though with some involvement in object coding, but do not support vATL specialization for object concepts.
... Neuroimaging data supporting the different formats of semantic information processed by the right and left ATLs have been obtained by some authors, e.g., refs. [44][45][46], whereas functional neuroimaging data supporting the existence of brain regions representing amodal conceptual knowledge have been reported by other authors (e.g., refs. [32,33,47]). ...
Article
Full-text available
The aim of this study was to shed light on the neural substrate of conceptual representations starting from the construct of higher-order convergence zones and trying to evaluate the unitary or non-unitary nature of this construct. We used the ‘Thematic and Taxonomic Semantic (TTS) task’ to investigate (a) the neural substrate of stimuli belonging to biological and artifact categories, (b) the format of stimuli presentation, i.e., verbal or pictorial, and (c) the relation between stimuli, i.e., categorial or contextual. We administered anodal transcranial direct current stimulation (tDCS) to different brain structures during the execution of the TTS task. Twenty healthy participants were enrolled and divided into two groups, one investigating the role of the anterior temporal lobes (ATL) and the other the temporo-parietal junctions (TPJ). Each participant underwent three sessions of stimulation to facilitate a control condition and to investigate the role of both hemispheres. Results showed that ATL stimulation influenced all conceptual representations in relation to the format of presentation (i.e., left-verbal and right-pictorial). Moreover, ATL stimulation modulated living categories and taxonomic relations specifically, whereas TPJ stimulation did not influence semantic task performances.
... Finally, recognition of a meaningful object involves matching of the visual information with prior semantic knowledge of the object's features and retrieving the object's name correctly, which is a language skill. As previous studies have shown that semantic processing and language lateralization are predominantly localised in the left hemisphere [53][54][55], this may explain the clear dominance of left hemispheric activity that we observed during object recognition [56]. ...
Article
Full-text available
Functional integration between two hemispheres is crucial for perceptual binding to occur when visual stimuli are presented in the midline of the visual field. Mima and colleagues (2001) showed using EEG that midline object recognition was associated with task-related decrease in alpha band power (alpha desynchronisation) and a transient increase in interhemispheric coherence. Our objective in the current study was to replicate the results of Mima et al. and to further evaluate interhemispheric effective connectivity during midline object recognition in source space. We recruited 11 healthy adult volunteers and recorded EEG from 64 channels while they performed a midline object recognition task. Task-related power and coherence were estimated in sensor and source spaces. Further, effective connectivity was evaluated using Granger causality. While we were able to replicate the alpha desynchronisation associated with midline object recognition, we could not replicate the coherence results of Mima et al. The data-driven approach that we employed in our study localised the source of alpha desynchronisation over the left occipito-temporal region. In the alpha band, we further observed significant increase in imaginary part of coherency between bilateral occipito-temporal regions during object recognition. Finally, Granger causality analysis between the left and right occipito-temporal regions provided an insight that even though there is bidirectional interaction, the left occipito-temporal region may be crucial for integrating the information necessary for object recognition. The significance of the current study lies in using high-density EEG and applying more appropriate and robust measures of connectivity as well as statistical analysis to validate and enhance our current knowledge on the neural basis of midline object recognition.
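As background to the connectivity measure named in this abstract, the sketch below computes the imaginary part of coherency between two synthetic signals in the alpha band. The sampling rate, signals, and band limits are arbitrary choices of mine for illustration, not parameters from the study.

```python
import numpy as np
from scipy.signal import csd, welch

# Generic signal-processing sketch of imaginary coherency (not the study's pipeline).
fs = 250                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)              # "left" channel
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + rng.standard_normal(t.size)  # phase-lagged "right" channel

f, Pxy = csd(x, y, fs=fs, nperseg=fs)     # cross-spectral density (complex)
_, Pxx = welch(x, fs=fs, nperseg=fs)
_, Pyy = welch(y, fs=fs, nperseg=fs)

icoh = np.imag(Pxy / np.sqrt(Pxx * Pyy))  # imaginary coherency, less sensitive to volume conduction
alpha = (f >= 8) & (f <= 12)
print("mean alpha-band imaginary coherency:", icoh[alpha].mean())
```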
... This would suggest that a language network emerged out of a neural basis dedicated to event apprehension, perhaps explaining why agents and patients are processed by distinct neural populations during sentence comprehension (56). However, conflicting results present an as yet unresolved picture as to the extent of this overlap (45,57) and as to how and when representations are processed, beyond "where." ...
Article
Full-text available
Languages tend to encode events from the perspective of agents, placing them first and in simpler forms than patients. This agent bias is mirrored by cognition: Agents are more quickly recognized than patients and generally attract more attention. This leads to the hypothesis that key aspects of language structure are fundamentally rooted in a cognition that decomposes events into agents, actions, and patients, privileging agents. Although this type of event representation is almost certainly universal across languages, it remains unclear whether the underlying cognition is uniquely human or more widespread in animals. Here, we review a range of evidence from primates and other animals, which suggests that agent-based event decomposition is phylogenetically older than humans. We propose a research program to test this hypothesis in great apes and human infants, with the goal to resolve one of the major questions in the evolution of language, the origins of syntax.
... In a large-scale case-series study, these authors found that patients with predominantly left atrophy obtained significantly lower scores in picture naming, whereas patients with right ATL atrophy were significantly more impaired on the picture version of the PPT test. The hypothesis that the left and right hemispheres may underlie verbal and non-verbal forms of semantic knowledge, respectively, was also confirmed by results of functional imaging studies (e.g., [176][177][178]) that addressed the question of material-specificity in conceptual and person-specific semantic knowledge. Thierry et al. [176] compared semantic processing of spoken words to the equivalent processing of environmental sounds and showed that words enhance activation in left superior temporal regions, while environmental sounds enhance activation in a right posterior superior temporal region. ...
... Thierry et al. [176] compared semantic processing of spoken words to the equivalent processing of environmental sounds and showed that words enhance activation in left superior temporal regions, while environmental sounds enhance activation in a right posterior superior temporal region. Thierry and Price [177] compared conceptual processing of verbal and non-verbal stimuli in both visual and auditory modalities and found that left temporal regions were more involved in comprehending words (heard or read), whereas the right temporal cortex was more involved in the comprehension of environmental sounds and images. Hocking and Price [178] presented simultaneously to their subjects one visual (written object name or picture) and one auditory (spoken object name or object sound) stimulus and instructed them to decide whether these stimuli referred to the same object or not. ...
Article
Full-text available
This review evaluated if the hypothesis of a causal link between the left lateralization of language and other brain asymmetries could be supported by a careful review of data gathered in patients with unilateral brain lesions. In a short introduction a distinction was made between brain activities that could: (a) benefit from the shaping influences of language (such as the capacity to solve non-verbal cognitive tasks and the increased levels of consciousness and of intentionality); (b) be incompatible with the properties and the shaping activities of language (e.g., the relations between language and the automatic orienting of visual-spatial attention or between cognition and emotion) and (c) be more represented on the right hemisphere due to competition for cortical space. The correspondence between predictions based on the theoretical impact of language on other brain functions and data obtained in patients with lesions of the right and left hemisphere was then assessed. The reviewed data suggest that different kinds of hemispheric asymmetries observed in patients with unilateral brain lesions could be subsumed by common mechanisms, more or less directly linked to the left lateralization of language.
... Participants in the current study engaged in rigorous verbal and nonverbal search tasks under comparable conditions, while eye movements were monitored. Participants were highly accurate in all task conditions, suggesting that the verbal and nonverbal tasks were matched in difficulty (Thierry and Price, 2006). There were, however, subtle but significant differences in performance between platforms. ...
Article
Full-text available
In addition to “nonverbal search” for objects, modern life also necessitates “verbal search” for written words in variable configurations. We know less about how we locate words in novel spatial arrangements, as occurs on websites and menus, than when words are located in passages. In this study we leveraged eye tracking technology to examine the hypothesis that objects are simultaneously screened in parallel while words can only be found when each is directly foveated in serial fashion. Participants were provided with a cue (e.g. rabbit) and tasked with finding a thematically-related target (e.g. carrot) embedded within an array including a dozen distractors. The cues and arrays were comprised of object pictures on nonverbal trials, and of written words on verbal trials. In keeping with the well-established “picture superiority effect,” picture targets were identified more rapidly than word targets. Eye movement analysis showed that picture superiority was promoted by parallel viewing of objects, while words were viewed serially. Different factors influenced performance in each stimulus modality; lexical characteristics such as word frequency modulated viewing times during verbal search, while taxonomic category affected viewing times during nonverbal search. In addition to within-platform task conditions, performance was examined in cross-platform conditions where picture cues were followed by word arrays, and vice versa. Although taxonomically-related words did not capture gaze on verbal trials, they were viewed disproportionately when preceded by cross-platform picture cues. Our findings suggest that verbal and nonverbal search are associated with qualitatively different search strategies and forms of distraction, and that cross-platform search incorporates characteristics of both.
... Of interest, progressively after cochlear implantation, during visuo-auditory speech processing, there is a broader involvement of the middle/posterior temporal gyrus, known as a locus of multisensory processing (Beauchamp et al., 2010; Belin et al., 2000, 2004; Belin and Zatorre, 2003; Strelnikov et al., 2009). The middle/posterior temporal gyrus is also involved in creating audio-visual verbal concepts by matching auditory and visual speech information or when the auditory speech information is degraded (Hocking and Price, 2009; Ozker et al., 2017; Thierry and Price, 2006). Therefore, an experienced CI patient would mobilize the multimodal STS/STG regions involved in multisensory linguistic and emotional prosody compared to NH individuals, in relation to their higher behavioral performances (Watson et al., 2014). ...
Article
Full-text available
Cochlear implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension through the fusion of speech reading skills and their deficient auditory perception. However, little is known about how CI patients perceive prosodic information relating to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process paralinguistic prosodic information of multimodal speech, and the visual strategies employed. A psychophysics assessment was developed in which CI patients and hearing controls (NH) had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared to controls. This study confirmed that prosodic processing is multisensory, but it revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared to hearing controls, irrespective of age. This study clearly showed that CI patients had a visuo-auditory gain more than 3 times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory situation through a specific oculomotor exploration of the face, as they fixated the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern distributed equally between the eyes and mouth. To conclude, our study demonstrated that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that a specific adaptive strategy developed, as it participates directly in speech content comprehension.
... Based on the close connection between odors and memory, we expect that choices regarding certain products may also occur through affective and semantic priming processes. We also predict that scent may unconsciously arouse semantically congruent concepts, speeding up categorization of odor-congruent verbal stimuli (e.g., words) even when consumers are not able to identify the scent, since there is evidence (Thierry & Price, 2006) of significant performance differences in accessing the meanings of pictures (more concrete) compared with words (more abstract). Thus ... To test the hypotheses, we conducted five experiments, as detailed in the following sections. ...
Article
Full-text available
Consumer choices are mostly regulated by pleasurable experiences that arise outside of the individuals' awareness in response to sensory attributes. This research examines the unconscious mechanism underlying consumers' behavior in response to odors applying the priming approach. Five experiments show that individuals' responses to odors involve two mechanisms, one affective (affective priming) and one associative (semantic priming) that impact consumers' categorization, recall, and choice. We found that when individuals perceive an odor as pleasant, their memory for odor‐congruent brand logos (Experiment 1), and categorization of odor‐congruent visual objects (Experiment 2) is improved. Unpleasant odors, instead, improve the categorization of odor‐congruent visual objects only when they are made salient (Experiments 3 and 4). A pleasant odor diffused in the environment also drives consumers toward odor‐congruent choices (Experiment 5), providing evidence that the incidental exposure to odors may induce affective and semantic associations with unrelated objects and behaviors. We also demonstrate that olfactory cues might be more effective than other modality (visual) stimuli to drive consumer responses. An implication for marketing is that odors employed in retail settings not only may induce an experience of pleasure but also promote specific consumer responses, such as categorization, recall, and choice.
... The results of the correlations found by Snowden and colleagues (2004) and Butler and colleagues (2009) ... verbal stimuli was associated with equivalent bilateral ATL activation. On the contrary, results consistent with a similar degree of verbal knowledge in the left and of non-verbal knowledge in the right temporal lobes were obtained by Thierry and colleagues (2003), Thierry and Price (2006) and ...
... In fact, all these studies showed that a clear link exists in normal subjects and SD patients between left temporal lobe and verbal semantic processing and between right ATL and non-verbal semantic processing. Furthermore, the PET studies of Thierry and colleagues (2003), Thierry and Price (2006) and Hocking and Price (2009) also showed that right ATL semantic specialization concerns both auditory and visual non-verbal material because the right posterior temporal regions are selectively more involved in making sense of environmental sounds and images. ...
... (5) Thierry and Price (2006) This functional neuroimaging study showed that left middle and superior temporal regions are more involved in comprehending words, whereas right midfusiform and middle temporal cortices are more involved in comprehending environmental sounds and images. ...
Article
According to the original “hub-and-spoke” model of conceptual representations, the neural network for semantic memory requires a single convergence zone located in the anterior temporal lobes (ATLs). However, a more recent version of this model acknowledges that a graded specialization of the left and right ATLs might emerge as a consequence of their differential connectivity with language and sensory-motor regions. A recent influential paper maintained that both the format of semantic representations (representational account) and their differential connectivity (connectivity account) could contribute to the cognitive consequences of left versus right ATL atrophy. That paper, however, also raised questions as to whether the distinction between representational and connectivity accounts is a meaningful one. I argue that an important theoretical difference exists between the representational and the connectivity-based models and that investigations based on this difference should make it possible to choose between these alternative accounts.