ERP Manifestations of Processing Printed Words at Different Psycholinguistic Levels: Time Course and Scalp Distribution
S. Bentin
Hebrew University
Y. Mouchetant-Rostaing, M. H. Giard, J. F. Echallier, and J. Pernier
INSERM-U280, Lyon, France.
Abstract
The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli, and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but it was broader, including temporo-parietal areas that were not activated in the “rhyme” task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal words but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
INTRODUCTION
Levels of Processing in Visual Word Recognition
Visual word recognition is a complex process that in-
volves several cognitive operations, such as visual encod-
ing of letters, translation of the letters’ shapes into a
sequence of graphemes and orthographic patterns, and
activation of lexical/phonological structures and their
meanings. All these processes have been shown to be
involved in reading words through many experiments
with normal subjects as well as by neuropsychological
investigations of patients with different types of dyslexia.
For example, the importance of visual processing for
word recognition was highlighted by patients with ne-
glect dyslexia, who have difficulty identifying letters
while keeping track of their order in the word (e.g., Ellis,
Flude, & Young, 1987) and by patients with attentional
dyslexia who correctly identify the letters while misplac-
ing them within or across words (e.g., Shallice & War-
rington, 1977). The need for efficient orthographic
integration is demonstrated by patients with simultag-
nosia who are letter-by-letter readers (e.g., Patterson &
Kay, 1982). The phonological dyslexia syndrome indi-
cates that reading without phonology is deficient and, in
conjunction with surface dyslexia and deep dyslexia,
demonstrates the importance of the lexical access for
normal reading. (For a detailed discussion of these syn-
dromes, see Coltheart, Patterson, & Marshall, 1980, and
Patterson, Marshall, & Coltheart, 1985.) On the basis of
such evidence, the model used as a framework for the
present study posits that visual word recognition in-
volves different levels at which printed information is
processed. These levels are (1) an orthographic level at
which visual features are integrated to represent ortho-
graphic patterns, (2) a lexical level at which the
phonological (and possibly the whole-word ortho-
graphic) representation of the printed word is activated,
and (3) a semantic level at which the meaning of the
word is accessed. In addition, tasks in which attention is
directed to the phonetic features of the words (such as
rhyming judgments) may induce phonetic activity that
may or may not be involved in the word recognition
process.
Although the exact nature of the processes involved
in visual word recognition is still a matter of debate, the
notion of levels (either of processing or of repre-
sentation) is accepted and incorporated into most theo-
ries (e.g., Ellis & Young, 1996, ch. 8; McClelland &
Rumelhart, 1981; Seidenberg & McClelland, 1989). More-
over, there is evidence that the level at which a word is
processed is task-dependent and can be controlled. For
example, several studies did not find semantic priming
when the prime was processed at a letter level, at least
if the stimulus onset asynchrony (SOA) between the
prime and the target was longer than a few hundred
milliseconds (e.g., Henik, Friedrich, & Kellogg, 1983;
Smith, Theodor, & Franklin, 1983). In fact, semantic prim-
ing was absent even at a short SOA (200 msec) if the
prime task was letter-search (Henik, Friedrich, Tzelgov, &
Tramer, 1994). This result suggests that it is possible to
control the putatively automatic activation of the seman-
tic system by directing subjects’ attention to the letter
level. Other studies, however, challenged this interpreta-
tion, suggesting that the absence of priming in the Henik
et al. (1994) experiment was an epiphenomenon caused
by the difficulty of the task, which prevented the activa-
tion of the prime’s semantic representation within the
short SOA time range. When easier letter-level tasks were
used, semantic priming was obtained, suggesting that the
activation of meaning, although not resource-free, is the
default in visual word perception (Smith, Bentin, &
Spalek, submitted). It appears, therefore, that the ques-
tions of whether the processing of printed words may
be restricted to a shallow level and whether the cogni-
tive system involved in visual word recognition can be
inuenced and shaped by the purpose of reading the
words are still open. A related question addresses the
word-related information that is processed at each level.
For example, although traditional models of word recog-
nition assume the existence of a mental lexicon in which
word-related information is represented (but see Hinton
& Shallice, 1991, and Seidenberg & McClelland, 1989, for
alternative views), there is no consensus regarding the
characteristics of this representation. According to some
models, the lexicon contains only structural word-related
information (phonologic and orthographic), whereas
others see no evidence requiring separation between
the structure of the word and its meaning.
An additional, major question concerning the levels of
processing printed words is how the level-specific proc-
esses interact among themselves. One traditional view
suggested a series of stages. Accordingly, the printed
word should be processed first at the orthographic level.
The output of this stage addresses a visual lexicon acti-
vating a word pattern and, subsequently, its semantic
representation (e.g., Morton, 1969). Other models sug-
gested that the various visual word perception opera-
tions are exerted “in cascade;” that is, a processing stage
can begin before the previous stage is finished (McClel-
land, 1979). More recent models of reading suggest par-
allel, interactive processes by which the visual stimulus
is processed in parallel at all levels and different words
are represented by different patterns of activity in a
neural network (Carr & Pollatsek, 1985; Coltheart, 1985;
McClelland & Rumelhart, 1981; Seidenberg & McClel-
land, 1989; see also Jared & Seidenberg, 1991). Investi-
gating the dynamics of visual word recognition has been
partly hampered by the difficulty of disentangling proc-
esses by the use of discrete measures of performance
such as the reaction time (RT). Some of these impedi-
ments can be overcome by studying the neurophys-
iological mechanisms that subserve this cognitive
function. In addition to providing ways to distinguish
between cognitive mechanisms by relating them to the
neuroanatomically distinct structures that mediate them,
some neurophysiological measures (such as ERPs) pro-
vide an on-line and time-continuous index of processing.
Neuroimaging and Electrophysiology of Word Recognition
Using positron emission tomography (PET), several stud-
ies have identified a number of brain structures activated
during language processing (Beauregard et al., 1997; De-
monet et al., 1992; Frith, Friston, Liddle, & Frackowiak,
1991; Frith, Kapur, Friston, Liddle, & Frackowiak, 1995;
Petersen & Fiez, 1993; Petersen, Fox, Posner, Mintun, &
Raichle, 1989; Petersen, Fox, Snyder, & Raichle, 1990;
Wise et al., 1991; Zatorre, Meyer, Gjedde, & Evans, 1996).
The tasks typically used in those studies required either
visual processing of words and wordlike stimuli during
silent reading or “phonetic” processing of words, syn-
thetic syllables, pure tones, and clicks while listening to
speech. The activity elicited in these “low-level” process-
ing stages was subtracted from that elicited when sub-
jects were instructed to perform higher-level processing
such as phonologic (e.g., reading aloud) or semantic
(e.g., generating the verbs associated with presented
nouns). Similar tasks were also used in functional mag-
netic resonance imaging (fMRI) studies (e.g., McCarthy,
Blamire, Rothman, Gruetter, & Shulman, 1993). These
neuroimaging studies have contributed to locating brain
areas involved in different aspects of processing words
and wordlike stimuli, but they do not reveal the time
course of the different types of brain activation. The
recording of the on-line electrophysiological manifesta-
tions of the different levels of visual word processing
may provide information about the time course of those
processes. Moreover, topographic analyses of the scalp
potentials and of the current densities may provide con-
verging information about brain regions activated at the
different processing levels.
Several main families of ERP components associated
with language processing have been described in
the electrophysiological literature. These families are
represented by the N200, the N400, and the P600
components. In the following brief review of the litera-
ture, we will only address the first two of the above
components, those elicited by the processing of single
words.
An N200 specific to orthographic stimuli was revealed
in a study in which ERPs were recorded using intra-
cranial implanted electrodes (Nobre, Allison, & McCarthy,
1994). In this study the authors compared the ERPs
elicited by strings of letters with those elicited by other
complex visual stimuli such as human faces. They found
that although all the visual stimuli elicited negative com-
ponents peaking around 200 msec from stimulus onset,
the intracranial distribution of the N200 elicited by letter
strings (pronounceable words and pseudowords, and
unpronounceable nonwords) was distinct from the dis-
tribution of N200 elicited by nonorthographic stimuli.
Both letter strings and faces elicited activity in the
posterior fusiform gyrus, but the regions activated by
the two types of stimuli never overlapped within a sub-
ject (Allison, McCarthy, Nobre, Puce, & Belger, 1994).
Furthermore, the potentials elicited by words were more
negative in the left than in the right hemisphere,
whereas those elicited by faces were either similar
across hemispheres or were more negative in the right
than in the left. The fact that the intracranial N200 did
not distinguish between pronounceable and nonpro-
nounceable letter strings indicates that this component
is elicited by a shallow-level process, one that is not
affected by phonology. On the other hand, the distinc-
tion between the N200 distribution elicited by letter
strings compared to that elicited by other visual com-
plex stimuli suggests that this component may be asso-
ciated with a mechanism of processing letters. Thus
there are data suggesting the existence of a visual mecha-
nism tuned to process orthographic stimuli whose activ-
ity is reflected by a negative component peaking around
200 msec.
Higher-level analysis of words seems to be associated
with negative potentials peaking later than 200 msec
(see reviews by Bentin, 1989; Hillyard & Kutas, 1983;
Kutas & Van Petten, 1988). Among those, the most exten-
sively investigated potential is the N400 component, first
described by Kutas and Hillyard (1980). Initially, the
N400 was linked with the processing of semantically
anomalous words placed in final sentence position
either in reading (Kutas & Hillyard, 1980) or in speech
perception (McCallum, Farmer, & Pocock, 1984). It was
found that its amplitude can be modulated by the degree
of expectancy (cloze probability) as well as by the
amount of overlap between the semantic characteristics
of the expected and the actually presented words (Kutas,
Lindamood, & Hillyard, 1984; see also Kutas & Hillyard,
1989). Therefore, it was assumed to reflect a postlexical
process of semantic integration and to be modulated by
the difficulty of integrating the word into its sentential
context (e.g., Rugg, 1990). Other studies, however, re-
vealed that the N400 can also be elicited by isolated
printed or spoken words and pseudowords presented in
sequential lists and modulated by semantic priming out-
side the sentential context (Bentin, Kutas, & Hillyard,
1993; Bentin, McCarthy, & Wood, 1985; Holcomb, 1986;
Holcomb & Neville, 1990). Consequently, the semantic
integration process that may modulate the N400 has
been extended to include semantic priming between
single words. It is unlikely, however, that simple lexical
activation is a major factor eliciting or modulating the
N400 because closed-class words, although represented
in the lexicon, neither elicit nor modulate this compo-
nent (Nobre & McCarthy, 1994). Furthermore, unlike the
letter-processing-specific N200, the N400 is not elicited
by letter strings that do not obey the rules of phonology
and cannot be pronounced (i.e., illegal nonwords).
This pattern of results suggests that the N400 is not
associated with a visual mechanism dedicated to proc-
essing of letters, but rather with a higher-level word-
processing system. In particular, the absence of an N400
in response to illegal nonwords suggests that it is
sensitive to the phonologic structure of the stimulus.
However, it is probably not elicited by phonological
processing per se because negative waveforms peaking
at about 400 msec were modulated by the immediate
repetition of unfamiliar faces (Bentin & McCarthy, 1994)
and other pictorial stimuli (Barrett & Rugg, 1989, 1990).
Hence, the currently existing evidence indicates that the
N400 is elicited only by stimuli that allow deep (se-
mantic) processing and that its amplitude is enhanced
by semantic incongruity and attenuated by semantic
priming and repetition. This pattern is consistent with
the assumption that the N400 reflects a link search
process between a stimulus and its semantic repre-
sentation. It is possible, however, that different aspects
of semantic activity in general, and language comprehen-
sion processes in particular, are associated with different
negativities elicited during the same time epoch. The
scalp distribution of the N400 may support this sugges-
tion.
The description of the N400 scalp distribution seems
to vary according to the task. Elicited by semantic incon-
gruities in sentences, the N400 is largest over the cen-
tro-parietal regions and slightly larger over the right
hemisphere than over the left (Kutas & Hillyard, 1982;
Kutas, Hillyard, & Gazzaniga, 1988). In contrast, when
elicited by single words, the N400 has a more anterior
distribution, with maxima over frontal or central sites
(Bentin, 1987; Bentin, McCarthy, & Wood, 1985; McCarthy
& Nobre, 1993) and a larger amplitude over the left than
over the right hemisphere (Nobre & McCarthy, 1994). In
a recent study, using intracranial ERP recordings,
McCarthy, Nobre, Bentin, and Spencer (1995) found large
medio- and antero-temporal distributions of the N400,
suggesting the existence of one, or several, deep neural
generators bilaterally distributed in the anterior medial
temporal lobe and associated with semantic processing.
The Current Study: Goals, Rationale, and Working Hypotheses
The above review suggests that different components of
scalp-recorded ERP, which are generated in different
brain structures, may be differently sensitive to the level
at which words are processed. Previous research of lev-
els-of-processing effects on ERPs focused primarily on
the N400, providing inconclusive results. On the one
hand, several studies reported that the N400 was not
elicited or not modulated under shallow-processing con-
ditions (Bentin, Kutas, & Hillyard, 1993; Chwilla, Brown,
& Hagoort, 1995; Deacon, Breton, Ritter, & Vaughan,
1991). Other studies, however, reported N400 priming
effects with shallow-processing tasks (Besson, Fischler,
Boaz, & Raney, 1992; Kutas & Hillyard, 1989). Therefore,
a more systematic, within-subject manipulation of levels
of processing is required, in which the task effects on
different ERP components are assessed. To the best of
our knowledge, no such studies have been published. A
major goal of the present study was to bridge this gap.
In particular we sought (1) to investigate the neurophysi-
ological manifestations of processing words at different
levels, (2) to assess the time course of processing within
each of those levels, and (3) to test the hypothesis that
lexical processes can be temporally and functionally
dissociated from semantic processes.
To achieve our goals we have asked participants to
perform several tasks, each designed to promote activity
at each of the levels of processing implied by the word-
recognition model we adopted. The activity associated
with the visual/orthographic analysis of the stimulus was
assessed comparing the ERPs elicited by letter strings to
those elicited by strings of alphanumeric symbols and
nonorthographic ASCII forms, in a font-size discrimina-
tion task. We hypothesized that orthographic analysis is
automatically induced by letter strings but not by nonor-
thographic stimuli. The possibility that phonological or
semantic activity would account for the differences be-
tween ERPs elicited by orthographic versus nonortho-
graphic stimuli was controlled by comparing words,
pseudowords, and unpronounceable strings of con-
sonants (hereafter labeled “nonwords”). Words and
pseudowords are distinct from nonwords by being
phonologically legal and differ from each other in their
semantic value. We assumed that the onset of ortho-
graphic processing would precede the onset of any
other activity related to the recognition of printed stim-
uli. The second and third levels would be phonologi-
cal/phonetic and phonological/lexical. We had no a
priori predictions regarding the relative timing of these
two levels. Phonetic processing was promoted by a
rhyme-detection task, whereas the lexical processing
was induced using a series of lexical decision tasks.
Phonology is probably involved in both rhyme detection
and lexical decision for letter strings. However, in the
former task it mediates the activation of phonetic struc-
tures that are necessary for detecting the rhyme,
whereas in the latter we presumed that phonetic struc-
tures are not needed and probably not generated. There-
fore, the phonology in the lexical decision task leads to
word recognition and may entail other linguistic proc-
esses than the “shallower” rhyme-detection task. Finally,
the fourth level of processing words was semantic. Se-
mantic processing was induced by asking the partici-
pants to distinguish abstract from concrete words. It is
important to realize that none of these tasks could sepa-
rately provide evidence for a particular level (or kind) of
processing. Obviously, words can be (and probably were)
processed at all levels, regardless of task. We hoped,
however, that the demand characteristics of each task
would intensify the activity at the respective levels and
that across-task comparisons in the timing and scalp
distribution of the ERPs might help disentangle one
process from another.
To avoid speeded response-related processes, we did
not measure RTs. Rather, we used an “oddball” paradigm
in which the distinction among the targets and the
distractors was based on processing the words at the
above described levels. Thus, in the font-size task the
subjects were instructed to keep a silent count of “tar-
gets” that were characterized as being twice the size of
the “distractors.” The type of stimulus (words, pseudo-
words, illegal nonwords, alphanumeric symbols, or
forms) was irrelevant to the task. In the rhyme task,
subjects were instructed to keep a silent count of stimuli
(words and pseudowords) that rhymed with a predesig-
nated French word, while disregarding other words,
pseudowords, and nonwords. In the lexical decision
tasks, subjects were instructed to keep a silent count of
words either presented among nonwords (a relatively
shallow discrimination) or among pseudowords (a
deeper discrimination). In a third lexical decision condi-
tion, the subjects were instructed to keep a silent count
of pseudowords interspersed among words. Finally, in
the semantic decision task, subjects were instructed to
keep a silent count of abstract words, disregarding con-
crete words, pseudowords, or nonwords that were pre-
sent in the same list. Table 1 describes the experimental
paradigm. Note that, both within and across lists, our
relevant comparisons were among the distractors. The
targets were expected to elicit a late positive component
(P300), whose latency and amplitude were presumed to
reflect the different levels of discrimination difficulty
between targets and distractors in each task.
RESULTS
ERPs Elicited by Nontarget Stimuli
As is common in ERP studies in which the electroen-
cephalogram (EEG) is recorded from more than a few
scalp sites, the entire data set was used to describe
spatial scalp distributions, whereas statistical analyses
were performed on selected sets of scalp sites. The
analyzed sites were chosen to cover the distribution of
each component as observed in the topographic maps,
as well as to cover an area sufficiently large to allow a
distribution-based distinction among components and
comparisons across tasks. With slight variations among
tasks (specified where relevant), the dependent variables
were (1) mean amplitudes calculated for time ranges
during which the ERPs elicited by different stimulus
types were distinct by visual observation, (2) mean am-
plitudes calculated for more restricted time ranges that
encompassed the relevant component in each task, (3)
the peak latency (defined as the latency of the most negative point within the same time range), and (4) the latency to the onset of these components. The onset was defined as the first latency at which the distinction between conditions was significant, determined by
point-by-point t tests. On the basis of the observed dis-
tributions, the statistical analysis of ERPs elicited in the
visual task were limited to posterior and posterior tem-
poral areas (OM1/2, O1/2, PO3/4, and T5/6), whereas in
all other tasks the sites of interest covered the middle
and anterior temporal lobes as well as lateral aspects of
the precentral and frontal areas (TP7/8, T3/4, C3/4,
FC1/2, FC5/6, F3/4, and F7/8).
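To make these dependent variables concrete, the following sketch shows one way they could be computed from baseline-corrected, stimulus-locked ERP arrays. It is an illustration only, not the authors' code; the array layout, names, and sampling rate are assumptions.

    import numpy as np
    from scipy import stats

    # Assumed layout: erp_a and erp_b hold the ERPs of two stimulus categories,
    # each with shape (n_subjects, n_channels, n_timepoints); sfreq is the
    # sampling rate in Hz, and time zero is stimulus onset.

    def mean_amplitude(erp, sfreq, t_start, t_end):
        """Mean voltage within a latency window (in seconds), per subject and channel."""
        i0, i1 = int(round(t_start * sfreq)), int(round(t_end * sfreq))
        return erp[:, :, i0:i1].mean(axis=-1)

    def peak_latency(erp, sfreq, t_start, t_end):
        """Latency (in seconds) of the most negative point within the window."""
        i0, i1 = int(round(t_start * sfreq)), int(round(t_end * sfreq))
        return (i0 + erp[:, :, i0:i1].argmin(axis=-1)) / sfreq

    def onset_latency(erp_a, erp_b, sfreq, alpha=0.01):
        """First latency (seconds, per channel) at which paired point-by-point
        t tests across subjects reach p < alpha; NaN if never reached."""
        _, p = stats.ttest_rel(erp_a, erp_b, axis=0)   # shape (n_channels, n_timepoints)
        sig = p < alpha
        onset = np.argmax(sig, axis=-1) / sfreq        # index of first significant sample
        onset[~sig.any(axis=-1)] = np.nan
        return onset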
Visual Processing (Size) Task
The ERPs elicited by the five stimulus types in the size
discrimination task revealed two distinct categories of
responses. One included all the three types of ortho-
graphic stimuli (words, pseudowords, and nonwords);
the other included the two types of nonorthographic
stimuli (symbols and forms) (Figure 1). This difference
began in the latency range of a negative wave peaking
at T5 and T6 around 170 msec (N170) and lasted for
about 600 msec, throughout the stimulus exposure time
(Figure 1A). The initial statistical evaluation of this pat-
tern compared the mean amplitude elicited between
140 and 600 msec by each Stimulus Type (words,
pseudowords, nonwords, symbols, forms) at four poste-
rior sites (OM1/2, O1/2, PO3/4, T5/6) on each Hemi-
sphere (left, right). The analysis of variance (ANOVA)
showed that the stimulus type and the site effects were
significant (F(4, 92) = 29.3, MSE = 6.6, p < 0.0001, GG
epsilon = 0.87, and F(3, 69) = 28.3, MSE = 6.2, p <
0.0001, GG epsilon = 0.53, respectively), whereas the
hemisphere effect was not (F(1, 23) < 1.00).
Table 1. Summary of the Experimental Design

Level of processing: Visual/orthographic processing
  Task: Size decision
  Nontarget stimulus types (N): Concrete words (84), Pseudowords (84), Illegal nonwords (84), Alphanumeric symbols (84), Forms (84)
  Target stimuli (N): Double-sized stimuli (16 of each type)

Level of processing: Phonetic processing
  Task: Rhyme decision
  Nontarget stimulus types (N): Concrete words (84), Pseudowords (84), Illegal nonwords (84)
  Target stimuli (N): Stimuli rhyming with vitrail: Concrete words (16), Pseudowords (16)

Level of processing: Lexical/phonological processing
  Task: Lexical decision (LD-1, LD-2, LD-3)
  Nontarget stimulus types (N): LD-1: Illegal nonwords (84); LD-2: Pseudowords (84); LD-3: Concrete words (84)
  Target stimuli (N): LD-1: Concrete words (16); LD-2: Concrete words (16); LD-3: Pseudowords (16)

Level of processing: Semantic processing
  Task: Semantic decision
  Nontarget stimulus types (N): Concrete words (84), Pseudowords (84), Illegal nonwords (84)
  Target stimuli (N): Abstract words (32)
Post hoc univariate ANOVAs revealed that the mean amplitudes elicited by words, pseudowords, and nonwords did not differ among themselves (F(2, 46) < 1.00), nor did the mean amplitude elicited by symbols differ from that elicited by forms (F(1, 23) < 1.00). The average mean amplitude of the three orthographic stimuli was significantly less positive than the average of the mean amplitude of the two nonorthographic stimuli (F(1, 23) = 81.1, MSE = 0.5, p < 0.0001) (Table 2). As revealed by significant interactions, the difference between orthographic and nonorthographic stimuli was larger at the left than the right hemisphere sites (2.01 and 1.65 μV, respectively) (F(4, 92) = 7.69, MSE = 0.37, p < 0.001, GG epsilon = 0.56) and larger at the PO (2.26 μV) and O (1.91 μV) sites than at the T (1.73 μV) and OM (1.43 μV) sites (F(12, 276) = 14.3, MSE = 0.21, p < 0.0001, GG epsilon = 0.37). No other interactions were significant.
Because no differences were found among the three
orthographic stimulus types or between the two nonor-
thographic stimulus types, for the subsequent statistical
analyses the responses to the five stimulus types were
grouped into two distinct categories: orthographic stim-
uli including words, pseudowords, and nonwords (252
stimuli) and nonorthographic
stimuli including strings
of alphanumeric symbols and strings of forms (168 stim-
uli) (Figure 1B).
A series of point-by-point t tests comparing the
waveforms elicited by orthographic and nonorthog-
raphic stimuli showed that the difference between
the two categories became significant (p < 0.01) at
140 msec at T5 (left hemisphere) and at 210 msec at
T6 (right hemisphere). Because the latency range
of the N170 wave (140 to 200 msec) was the earliest
time window where the responses to orthographic
and nonorthographic stimuli differed (Figure 1B), and
because previous studies suggested that the N170 is
the earliest informational-specific ERP component elicited by visual stimuli (Bentin, Allison, Puce, Perez, & McCarthy, 1996; George, Evans, Fiori, Davidoff, & Renault, 1996), we focused the analysis on the influence of stimulus-type category on N170 latency, amplitude, and topography.
Figure 1. ERPs in the visual/orthographic task. (A) ERPs elicited by nontarget stimuli (strings of forms, strings of alphanumeric symbols, words, pseudowords, and nonwords) at lateral posterior sites (T5, T6). (B) ERPs collapsed across orthographic stimuli (words, pseudowords, and nonwords) and nonorthographic stimuli (strings of alphanumeric symbols and strings of forms) in the visual/orthographic task. The N170 wave was largest at the lateral posterior sites T5 and T6 and peaked around 170-msec latency. The negative peak at about 600 msec is probably the “off” response of the stimuli (which lasted on the screen for 500 msec).
N170 Amplitude and Scalp Distribution
Figure 2A shows the scalp potential (SP) and scalp-cur-
rent-density (SCD) distributions of the responses to or-
thographic and nonorthographic stimuli 170 msec
poststimulus. For both stimulus categories, the N170
shows bilateral activation centered between PO3, T5, O1,
and OM1 over the left hemisphere and PO4, T6, O2, and
OM2 on the right hemisphere (Figure 2B).
The mean amplitude of the N170, calculated between 140 and 200 msec on the left and right hemispheres, was compared by a three-way ANOVA with the Stimulus Category (orthographic, nonorthographic), Site (OM, O, PO, T), and Hemisphere (left, right) as within-subject factors. This analysis showed no significant main effect for either Stimulus Category or Hemisphere (for both, F(1, 23) < 1.0) but a significant effect of Site (F(3, 69) = 15.6, MSE = 4.2, p < 0.0001, GG epsilon = 0.59). However, the interaction between Stimulus Category and Hemisphere was significant (F(1, 23) = 11.2, MSE = 1.1, p < 0.005), as was the interaction between Stimulus Category and Site (F(3, 69) = 2.9, MSE = 0.4, p < 0.05, GG epsilon = 0.45). Post hoc univariate ANOVAs showed that the mean amplitude of the N170 was larger at the temporal sites (-3.22 μV) and at OM (-3.13 μV) than at the parieto-occipital (-1.46 μV) and occipital (-2.25 μV) sites. The Stimulus Category × Hemisphere interaction was due to the fact that, at all sites, the N170 elicited by orthographic stimuli was larger over the left than over the right hemisphere sites, whereas the N170 elicited by nonorthographic stimuli was larger over the right than over the left hemisphere. However, the difference between the N170 elicited by orthographic and nonorthographic stimuli was not significant, except at T5 (left hemisphere), where the N170 elicited by orthographic stimuli (-3.53 μV) was significantly larger than that elicited by nonorthographic stimuli (-2.67 μV) (F(1, 23) = 6.86, MSE = 2.55, p < 0.02).
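For readers who wish to reproduce this kind of within-subject analysis, a minimal sketch of the repeated-measures ANOVA on N170 mean amplitudes is given below. It is illustrative only: the long-format table and its column names are assumptions, and the Greenhouse-Geisser correction reported above would have to be applied separately (AnovaRM returns uncorrected p values).

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Assumed long-format table: one row per subject x category x site x hemisphere,
    # with the 140-200 msec mean amplitude (in microvolts) in the column 'amp'.
    # Columns: 'subject', 'category' (orthographic/nonorthographic),
    #          'site' (OM/O/PO/T), 'hemisphere' (left/right), 'amp'.
    df = pd.read_csv("n170_mean_amplitudes.csv")  # hypothetical file name

    anova = AnovaRM(
        data=df,
        depvar="amp",
        subject="subject",
        within=["category", "site", "hemisphere"],
    ).fit()
    print(anova.anova_table)  # F, degrees of freedom, and uncorrected p values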
Similar analyses performed on the SCD waveforms led to similar results: There was no effect of Stimulus Category or Hemisphere on the mean current amplitude of the N170 (averaged across the four sites), but a significant interaction between the two factors (F(1, 23) = 11.96, p < 0.01). The mean current amplitude of N170 tended to be larger over the left occipito-temporal areas for orthographic stimuli (-0.73 μA/m³) than for nonorthographic stimuli (-0.61 μA/m³) (p < 0.10) and was larger over the right occipito-temporal areas for nonorthographic stimuli (-0.78 μA/m³) than for orthographic stimuli (-0.54 μA/m³) (p < 0.05).
These results thus show a double dissociation be-
tween the interhemispheric distribution of orthographic
and nonorthographic stimuli. Orthographic stimuli elic-
ited the largest N170 at posterior left hemisphere sites,
whereas the N170 elicited by nonorthographic stimuli
was largest at posterior right hemisphere sites. However,
only at the left posterior temporal site (T5) was the
difference between the N170 elicited by orthographic
and nonorthographic stimuli significant.
Although only 8 out of the 24 subjects were males, given the interhemispheric asymmetrical distribution of the N170 on the one hand, and the recent controversy regarding gender differences in the interhemispheric asymmetry for language processing (Pugh et al., 1996; Shaywitz et al., 1995; but see Frost et al., 1997) on the other, we compared the pattern of the interhemispheric asymmetry of the N170 amplitude between the male and the female participants. This comparison was based on a mixed-model ANOVA with Gender as the between-subject factor and Stimulus Category and Hemisphere as the within-subject factors. This analysis revealed that, although the pattern of interhemispheric asymmetry tended to be different for men and women,¹ neither the interaction between Gender and Hemisphere nor the interaction between all three factors was statistically significant (F(1, 22) < 1.0 for both interactions).
N170 Latency
A two-way ANOVA was performed on the N170 peak
latency measured between 140 and 200 msec at T5 and
T6 (where the N170 was most conspicuous), with Stimu-
lus Category (orthographic, nonorthographic) and Hemi-
sphere (left, right) as within-subject factors. This analysis
showed that the N170 latency was similar for ortho-
graphic and nonorthographic categories (F(1, 23) <
1.00) and significantly shorter at T5 (168 msec) than at T6 (175 msec) (F(1, 23) = 5.96, p < 0.025). The interaction between Stimulus Category and Hemisphere was not significant (F(1, 23) = 0.05).
Table 2. Mean Amplitudes (OM1, OM2, O1, O2, PO3, PO4, T5, and T6) between 140 and 600 msec, for Each Stimulus in the Size-Decision Task

                  Words   Pseudowords   Nonwords   Symbols   Forms
Mean amplitude     0.30       0.35         0.38      2.16     2.20
SEm*               0.37       0.37         0.42      0.40     0.40

* SEm = standard error of the mean.
Figure 2. Scalp distribution of the negative potentials elicited in each task. Pink-purple hues represent negative voltages; yellow-green hues represent positive voltages. (A) Back view of the scalp potential (first row) and current density (second row) distributions of the N170s to orthographic (left) and nonorthographic (right) stimuli in the size task. (B) Lateral view of the N170 scalp potential distributions to orthographic and nonorthographic stimuli in the size task. (C) Scalp potential distributions of the N320s to pronounceable and nonpronounceable stimuli in the rhyme task. (D) Scalp potential distributions of the N350s to phonologically legal and phonologically illegal stimuli in the lexical decision task. (E) Scalp potential distributions of the N450s to pseudowords and words in the semantic decision task.
Phonological/Phonetic (Rhyme) Task
The ERPs elicited by the three stimulus types in the
rhyme discrimination task displayed two distinct catego-
ries of responses. One included the pronounceable stim-
uli (words and pseudowords); the other included the
nonpronounceable stimuli (nonwords). This distinction
started at about 290 msec from stimulus onset and lasted
for about 330 msec (Figure 3). During that period, a
negative potential peaking at about 320 msec after
stimulus onset (N320) was most evident over the tem-
poral and temporo-parietal regions, particularly in the
ERP elicited by pronounceable stimuli; the ERPs elicited
by nonpronounceable stimuli during that period were
dominated by a positive potential that was interrupted by a “shoulder” in the region of the N200 (see Figures 2C and 3). An initial ANOVA comparing the mean amplitude of the ERPs elicited by each stimulus type at all seven lateral electrodes over each hemisphere between 270 and 500 msec supported the categorization between pronounceable and nonpronounceable stimuli. This analysis showed a significant main effect of stimulus type (F(2, 46) = 20.0, MSE = 0.44, p < 0.001, GG epsilon = 0.98) that, as revealed by post hoc univariate contrasts, reflected only the fact that the mean amplitude elicited by the nonwords (1.43 μV) was significantly more positive than that elicited by either words (0.45 μV) or pseudowords (0.33 μV), which did not differ one from another (F(1, 23) < 1.0). Therefore, for the subsequent analyses, the responses to the three stimulus types were grouped into two distinct categories: pronounceable stimuli, including words and pseudowords (168 stimuli), and nonpronounceable stimuli, which were the nonwords (84 stimuli). A series of point-by-point t test analyses revealed that the difference between these two categories was significant (p < 0.01), starting at 295 msec at T3 (left hemisphere) and at 305 msec at T4 (right hemisphere).
Because the latency range of the N320 wave (270 to
370 msec) was the earliest time window where pro-
nounceable and nonpronounceable stimuli elicited dif-
ferent ERPs, we focused the analysis of the influence of
stimulus category on the N320 latency, amplitude, and
scalp distribution.
N320 Amplitude and Scalp Distribution
Figure 2C shows the scalp potential distribution of the responses to pronounceable and nonpronounceable stimuli at 320 msec poststimulus onset on the left and right hemispheres. A wide positive field on the occipito-central areas characterized the responses to nonpronounceable stimuli. The potential distribution to pronounceable stimuli displayed two voltage patterns: a negative potential field over the temporal areas and a negative/positive pattern over the occipito-parietal region, slightly larger at the left than at the right hemisphere sites. The N320 shows a larger amplitude over the left temporal areas (around T3) than over the right temporal areas (around T4), and larger for pronounceable stimuli than for nonpronounceable ones.

Figure 3. ERPs to nontarget stimuli (words, pseudowords, and nonwords) at the sites of interest in the phonological/phonetic task. The N320 wave was largest at T3, on the left temporal hemisphere, and was much smaller for nonwords than for pseudowords and words.
The statistical analysis of these differences was based on a three-way ANOVA with Stimulus Category (pronounceable, nonpronounceable), Site (TP7/8, T3/4, C3/4, FC1/2, FC5/6, F3/4, F7/8), and Hemisphere (left, right) as within-subject factors. The dependent variable was the mean amplitude of the N320 between 270 and 370 msec from stimulus onset. This analysis revealed that the N320 was larger (i.e., more negative) for pronounceable (-0.18 μV) than for nonpronounceable stimuli (which in fact elicited a positive waveform in the same latency range, 0.99 μV) (F(1, 23) = 15.7, MSE = 14.66, p < 0.001) and at the left (0.04 μV) than at the right hemisphere sites (0.77 μV) (F(1, 23) = 19.92, MSE = 4.53, p < 0.001). The main effect of site was also significant (F(6, 138) = 16.92, MSE = 3.57, p < 0.001). The interaction between the Stimulus Category and the Site effects was significant, suggesting that the difference between the pronounceable and the nonpronounceable stimuli was larger at some sites than at others (F(6, 138) = 8.82, MSE = 0.37, p < 0.001). No other interactions were significant. The scalp distribution of the N320 was analyzed by a one-way ANOVA in which the dependent variable was the amplitude of N320 elicited by pronounceable stimuli averaged across hemispheres. This analysis showed that the N320 varied significantly with site (F(6, 138) = 13.4, p < 0.001, GG epsilon = 0.35). Post hoc univariate contrasts revealed that the amplitude of the N320 was significantly larger at T3/4 (-1.1 μV) than at any other location and that it was negative at F7/8, TP, and FC5/6 (-0.68, -0.65, and -0.53 μV, respectively) and positive at the more central and frontal electrodes, C, FC1/2, and F3/4 (0.59, 0.76, and 0.31 μV, respectively). This distribution statistically validates the lower midtemporal distribution of the N320. A post hoc analysis of the interaction between the stimulus category and the site showed that the difference between pronounceable and nonpronounceable stimuli was largest at F7/8 (1.85 μV).
The possible interaction of the hemispheric differences with gender was examined for the N320 as for the N170 potential. This analysis showed that neither the Gender × Hemisphere nor the Gender × Stimulus Type × Hemisphere interactions were significant (for both, F(1, 22) < 1).
N320 Latency
A Stimulus Category by Hemisphere ANOVA was performed on the N320 peak latency measured at T3 and T4 (where the amplitude of the N320 elicited by pronounceable stimuli was maximal). This analysis showed that N320 latency was significantly shorter for nonpronounceable stimuli (303 msec) than for pronounceable stimuli (326 msec) (F(1, 23) = 16.55, p < 0.001), without a significant main effect of hemisphere (F(1, 23) < 1.00). The interaction between Stimulus Category and Hemisphere, however, was significant (F(1, 23) = 5.68, p < 0.025), revealing that in response to pronounceable stimuli the N320 peaked earlier at T3 (321 msec) than at T4 (331 msec), whereas in response to nonpronounceable stimuli it peaked earlier at T4 (297 msec) than at T3 (309 msec). Hence, the left hemisphere responded faster to pronounceable than to nonpronounceable stimuli, whereas the opposite pattern was found for the right hemisphere.

Figure 4. ERPs elicited by nontarget stimuli (words, pseudowords, and nonwords) at the sites of interest in the lexical decision task. The N350 wave was largest at T3, on the left temporal hemisphere, and was elicited only by words and pseudowords. Unlike phonologically legal stimuli, nonwords elicited a large positive deflection.
Phonological/Lexical Task
As in the phonological/phonetic task, the ERPs elicited
by the three stimulus types in the lexical decision tasks
revealed two distinct categories of responses. One in-
cluded the words and the pseudowords and the other
included the nonwords. This distinction was evident
starting at about 270 msec from stimulus onset and
lasting for about 250 msec, an epoch that encompassed
a negative-positive deflection for the words and pseudowords, but a positive peak for the nonwords (Figure 4). An initial ANOVA compared the mean amplitude of the ERPs elicited between 270 and 500 msec by each Stimulus Type (words, pseudowords, nonwords) at the midtemporal and anterior-temporal Sites (T3/4, C3/4, FC5/6, F7/8; see Figure 2D) and over each Hemisphere (left, right). This ANOVA showed that the main effects of Stimulus Type and Site were significant (F(2, 46) = 47.1, p < 0.001, GG epsilon = 0.84 and F(3, 69) = 14.7, p < 0.001, GG epsilon = 0.58, respectively) and that, across all conditions and sites, the ERPs elicited at the left hemisphere sites were more negative (-0.33 μV) than at the right hemisphere sites, which were actually positive (0.17 μV) (F(1, 23) = 9.9, p < 0.005). The interaction between Stimulus Type and Site was also significant, suggesting that the effect of Stimulus Type was different at different scalp locations² (F(6, 138) = 14.5, p < 0.001, GG epsilon = 0.54). Post hoc univariate contrasts showed that the mean amplitude of the ERP elicited by words (-0.63 μV) and by pseudowords (-1.00 μV) during this period did not differ significantly (F(1, 23) = 3.2, p = 0.085), both being significantly more negative than the mean amplitude of the ERP elicited by the nonwords at this time (1.38 μV) (F(1, 23) = 62.6, p < 0.001 and F(1, 23) = 52.2, p < 0.001 for pseudowords and words, respectively). Consequently, the words and pseudowords (168 stimuli) were collapsed to form one category of phonologically legal stimuli to be compared with the nonwords, which were phonologically illegal stimuli (84 stimuli). A series of point-by-point t tests between the waveforms elicited by the pronounceable and the nonpronounceable stimuli showed that this difference became significant (p < 0.01) at 270 msec over the left hemisphere (T3) and at 300 msec over the right hemisphere (T4).
The difference between the two stimulus categories
was most evident about 350 msec from stimulus onset,
at the peak of the negative deection elicited primarily
by the phonologically legal stimuli (N350). Around that
latency, the increase in the positivity elicited by non-
words was interrupted by a “shoulder” (i.e., a decrease in the magnitude of the positive derivative of the waveform) and even a short-lasting change in its direction at some locations. Hence, it appears that phonologically legal and illegal stimuli were processed significantly differently, at least as these processes were reflected by ERPs. Because the present task was designed to examine the difference between the deeper, lexical processes that may be required to distinguish between words and pseudowords, and the more superficial processes that are probably sufficient to distinguish nonwords (Balota & Chumbley, 1984), our analyses focused on the N350,
which was most conspicuous at the temporal and
fronto-central sites.
N350 Amplitude and Scalp Distribution
Figure 2D shows the scalp-potential distribution of the
responses to phonologically legal and illegal stimuli, 350
msec poststimulus over the left and right hemispheres,
respectively. Although the scalp distribution of the N350
was apparently more anterior and central than that of
the N320, for the purpose of intertask comparisons we
have analyzed the same subset of scalp sites as in the
phonological/phonetic task. Hence, the mean amplitude
of the N350 was calculated between 300 and 400 msec
separately for legal and illegal stimuli at TP7, T3, C3, FC5,
FC1, F3, and F7 over the left hemisphere and the corre-
sponding sites over the right hemisphere. These data
were analyzed using a Stimulus Category × Site × Hemisphere within-subject ANOVA. The analysis showed that all three main effects were significant (F(1, 23) = 85.6, p < 0.001; F(6, 138) = 12.8, p < 0.001, GG epsilon = 0.41; and F(1, 23) = 7.1, p < 0.015 for the Stimulus Category, Site, and Hemisphere, respectively). The interaction between Stimulus Type and Site, that between Stimulus Type and Hemisphere, and the three-way interaction between Stimulus Type, Site, and Hemisphere were also significant (F(6, 138) = 16.5, p < 0.001, GG epsilon = 0.37; F(1, 23) = 8.5, p < 0.01; and F(6, 138) = 8.5, p < 0.025, GG epsilon = 0.38, respectively). The distribution of the N350 was examined with a Site × Hemisphere ANOVA. This analysis showed that the N350 was larger (i.e., more negative) at left (-1.58 μV) than at right (-0.91 μV) hemisphere sites (F(1, 23) = 15.6, p < 0.005). Across hemispheres, its amplitude varied significantly with site (F(6, 138) = 2.9, p < 0.01, GG epsilon = 0.4). Post hoc univariate contrasts revealed that, like the N320, the N350 was largest at T3/4 (-1.75 μV). However,
in contrast to the N320, its amplitude was not significantly smaller at FC5/6 (-1.6 μV) than at T3/4, and it was only slightly reduced at F7/8 (-1.34 μV). The difference between the amplitude of the N350 at these three sites was not significant. In contrast, the amplitude of the N350 at the TP5/6 sites (-1.24 μV), which were immediately posterior to the T3/4, was significantly smaller than at T3/4 (F(1, 23) = 12.9, p < 0.01). These results confirmed a midtemporal and dorsotemporal scalp distribution of the N350, with ramifications in the anterior temporal lobes. This distribution is different from that of the N320. Yet, given the spatial and temporal proximity of the N320 and the N350, we cannot exclude the possibility that the topography observed in Figure 2D was influenced by an overlap of N320 and N350. As previously, none of the interactions with the participant's gender were significant.
N350 Latency
The N350 latency for the phonologically legal category
did not differ signicantly between the T3 (340 msec)
and the T4 (345 msec) electrode sites (F(1, 23) = 0.22).
Semantic Task
In the semantic task, all the stimulus types elicited dis-
tinguishable ERPs. In particular, the semantic task dif-
fered from the phonological tasks in that the responses
to words and pseudowords were also distinct. However,
at most sites, the period during which the ERPs elicited
by words seem to be different from those elicited by
pseudowords began later and was shorter than the pe-
riod during which the ERPs elicited by nonwords were
distinct from the other two categories. Therefore, for the
initial analysis of the differences among stimulus types we divided the entire period during which differences were noticeable (270 to 600 msec) into two epochs. The first was from 270 to 350 msec and the second was from 350 to 600 msec. The differential activity was distributed at the fronto-central and anterior-temporal sites (Figure 2E). Consequently, the initial ANOVA compared the mean amplitude of the ERPs elicited by each stimulus type, during each epoch, at T3, FC5, FC1, F3, F7, and at the corresponding sites over the right hemisphere. This analysis showed significant main effects of Stimulus Type (F(2, 46) = 16.9, p, GG epsilon = 0.90), Site (F(4, 92) = 15.0, p < 0.001, GG epsilon = 0.50), and Hemisphere (F(1, 23) = 29.6, p < 0.001). There was no significant main effect of the epoch (F(1, 23) < 1.00). The interaction between the Stimulus Type effect and the epoch was significant (F(2, 46) = 15.7, p < 0.001, GG epsilon = 0.69). The source of this interaction was revealed by separate analyses for each epoch. These analyses, followed by univariate contrasts, revealed that during the first epoch the ERPs elicited by words and pseudowords were not significantly different (F(1, 23) = 2.35, p = 0.14), both being more negative than those elicited by nonwords (F(1, 23) = 5.0, p < 0.05, for words versus nonwords). During the second epoch, however, the three stimulus conditions differed significantly from one another (F(1, 23) = 4.9, p < 0.05 for pseudowords vs. words and F(1, 23) = 45.6 for words vs. nonwords).
The most conspicuous event that distinguished words
from pseudowords during the epoch of interest was a
negative potential that peaked at about 450 msec from
stimulus onset. At that time the nonwords elicited a
positive potential which resembled the potentials elic-
ited by nonwords in the phonological discrimination
tasks. Because no N450 was elicited by nonwords, and
assuming that a superficial analysis was sufficient to
decide that nonwords were not targets, we analyzed the
characteristics and the scalp distribution of the N450,
including only the ERPs elicited by pseudowords and
words.
N450 Amplitude and Scalp Distribution
Figure 2E shows the scalp potential distribution of the
responses to pseudowords and words at 450 msec post-
stimulus onset, at the left and right hemisphere sites.
Words elicited a well-circumscribed bilateral negativity
peaking at more anterior sites than that elicited by
words in the lexical decision task. Pseudowords display
two negative maxima, more evident over the left than
over the right hemisphere: one, centered between F7,
FC5, and F3 had a topography that contained the areas
activated by the N350 but also more anterior regions
(Figure 2D and 2E). The second, centered around FC1,
corresponds to the N450 shown in Figure 5 and was not
observed in the lexical decision task. As in the previous
experimental sessions, a negative activity was also ob-
served above the occipital areas.
The scalp potential distribution of the N450 was assessed by a Stimulus Type (words, pseudowords) × Site (TP, T, C, FC5/6, FC1/2, F3/4, and F7/8) × Hemisphere (left, right) ANOVA. This analysis showed that the N450 elicited by pseudowords (-1.0 μV) was significantly more negative than that elicited by words, which, across sites, was positive (0.15 μV) (F(1, 23) = 18.4, p < 0.001); it was significantly more negative over the left hemisphere (-0.60 μV) than over the right hemisphere (-0.22 μV) (F(1, 23) = 4.9, p < 0.05) and differed significantly among scalp sites (F(6, 138) = 9.5, p < 0.001, GG epsilon = 0.42). The interaction between Stimulus Type and Site effects was significant (F(6, 138) = 13.2, p < 0.001, GG epsilon = 0.33), revealing that the Stimulus Type effect was not significant at the most anterior electrode sites (F7 and F8), whereas it was significant at all other sites, which did not differ among themselves. No other interactions were significant. Post hoc contrasts examining the site effect revealed that, across words and pseudowords, the N450 was significantly larger (more negative) at F7/8 (-1.2 μV) than at all other sites (the difference between F7/8 and the second largest N450 at FC5/6 was significant, F(1, 23) = 8.1, p < 0.01), negative at the anterior supratemporal FC5/6 (-0.68 μV), F3/4 (-0.65 μV), FC1/2 (-0.22 μV), and midtemporal sites T3/4 (-0.36 μV) (which did not differ significantly among themselves), and positive at the centro-lateral C3/4 (0.06 μV) and posterior-temporal sites TP7/8 (0.20 μV). The difference between the N450 elicited at FC5/6 and F3/4 was significant (F(1, 23) = 5.7, p < 0.05). This distribution validates the anterior-temporal and anterior-supratemporal scalp distribution of the N450.³
N450 Latency
A Stimulus Type (words, pseudowords) by Hemisphere ANOVA performed on the N450 peak latency measured at FC1 and FC2 showed no significant main effects and no interaction (for the stimulus type effect F(1, 23) = 2.0, p = 0.17, and all other F values less than 1.00). The N450 latency for words and pseudowords over the left and right hemispheres was similar (448 msec).
Across-Task Comparisons
Because one of the aims of this study was to compare the processing of orthographic stimuli at different linguistic levels, we compared the ERPs elicited by words, pseudowords, and nonwords across tasks. In particular, we focused on the comparison between the phonological/phonetic, phonological/lexical, and semantic decisions, testing the hypothesis that these processes are functionally and, insofar as the scalp distributions of potentials and current densities reflect underlying brain mechanisms, neuroanatomically distinct. Overall, except for the N170, which was elicited at the posterior-temporal and occipital sites (Figure 6),⁴ the across-task comparison distinguished most clearly between the nonwords and the phonologically legal stimuli (words and pseudowords). Whereas the pattern of the ERP activity for words and pseudowords differed depending on whether the task required phonological/phonetic, lexical, or semantic analysis, the ERPs elicited by nonwords were about the same across tasks. Furthermore, the negative potentials elicited by phonologically legal stimuli in the phonological/phonetic, phonological/lexical, and semantic decision tasks were absent (or almost absent) in the ERPs elicited by nonwords (Figure 7).
Figure 5. ERPs elicited by nontarget stimuli (words, pseudowords, and nonwords) at sites of interest in the semantic decision task. The most salient event is the N450 wave, larger for pseudowords than for words at FC1, over the left fronto-central scalp, which was not elicited by nonwords.

Figure 6. Back-view scalp potential distributions of the ERPs elicited by words, pseudowords, and nonwords in the visual/orthographic (first row), phonetic (second row), lexical/phonological (third row), and semantic (fourth row) tasks, at 170-msec latency. The N170 potential is elicited by all orthographic stimuli regardless of processing level, slightly bigger over the left than over the right hemisphere. White hue represents negative voltages, and black hue, positive voltages. Half of the scale (in μV) is presented below each map.

Figure 7. ERPs elicited by nonwords at the sites of interest in the four processing levels: visual/orthographic, phonetic, lexical/phonological, and semantic tasks.

The similarity of the ERPs elicited by nonwords across the three linguistic tasks (the two phonological and the semantic) was verified by an ANOVA of the mean amplitude of the potentials elicited by nonwords at the fronto-central and parietal electrode sites (F3, Fz, F4, FC1, FC2, C3, Cz, C4, P3, Pz, P4), where the positive peak was maximal. The mean amplitude was calculated for the
epoch from 250 to 500 msec after stimulus onset, which includes the positive peak characteristic of the ERPs elicited by these stimuli. This analysis confirmed that the ERP elicited by nonwords was practically the same across the three tasks (0.23, 0.22, and 0.18 μV for the rhyme, lexical decision, and semantic decision tasks, respectively; F(2, 46) = 1.77, p = 0.18, GG epsilon = 0.99). The analysis of the positive peak latency (at Cz) similarly showed little difference across tasks (350, 359, and 388 msec for the phonological/phonetic, phonological/lexical, and semantic tasks, respectively; F(2, 46) = 3.37, p = 0.05, GG epsilon = 0.94).
A separate analysis of each task showed that words and pseudowords elicited negative potentials that differed from the ERPs elicited by nonwords. These potentials peaked at about 320 msec in the phonetic task, 350 msec in the lexical task, and 450 msec in the semantic task (Figures 3 to 5, and 8). To verify the statistical reliability of these differences we analyzed the peak latency of the negative potentials elicited in each task by words and pseudowords at the sites where they were maximal (T3 for the phonetic and lexical tasks and F7 for the semantic task). The ANOVA showed that the latency of the negative potentials was similar for words and pseudowords (F(1, 23) = 1.6, p = 0.22), whereas the main effect of task was highly significant (F(2, 46) = 192.7, p < 0.001). The interaction between the two factors was not significant (F(2, 46) < 1.00). Post hoc univariate comparisons revealed that the latency of the negative peak in the semantic task (448 msec) was significantly longer than in the phonological/lexical task (358 msec; F(1, 23) = 235.9, p < 0.001), which in turn was longer than in the phonological/phonetic task (340 msec; F(1, 23) = 10.9, p < 0.005).
The amplitudes of the negative peaks across the three tasks were compared using a Task (rhyme, lexical decision, semantic decision) × Stimulus Type (word, pseudoword) ANOVA. The dependent variable was the mean amplitude of each peak as measured for the separate analyses for the rhyme and the lexical decision tasks at T3 and for the semantic decision task at F7. This analysis revealed a significant difference between tasks (F(2, 46) = 16.4, p < 0.001), whereas no difference was found across tasks between the potentials elicited by words and pseudowords (F(1, 23) < 1.00). The most interesting result, however, was a significant Stimulus Type × Task interaction (F(2, 46) = 5.1, p < 0.01), suggesting that the difference between the responses to words and pseudowords varied across tasks. Post hoc univariate analyses revealed that the N320 was slightly larger for words (-1.42 μV) than for pseudowords (-1.02 μV) in the rhyme task (F(1, 23) = 3.15, p < 0.09), the two stimulus types elicited equally large N350 (-2.0 and -2.13 μV for words and pseudowords, respectively) in the lexical decision task (F(1, 23) < 1.00), whereas the N450 was larger for pseudowords (-1.3 μV) than for words (0.54 μV) in the semantic decision (F(1, 23) = 24.4, p < 0.001).
Figure 8. Left-hemisphere distribution of ERPs elicited by words and pseudowords at 170, 320, 350, and 450 msec, in each of the four experiments. White hue represents negative voltages, and black hue, positive voltages. Half of the scale (in μV) is presented below each map.
ERPs Elicited by Target Stimuli
In each task, the target stimuli elicited large P300 components that were maximally positive at the centro-parietal site (Pz). Although the P300 elicited by target stimuli was not the focus of this study, in the absence of any objective measure of task difficulty we analyzed the amplitude and peak latency of this component (Figure 9 and Table 3). Moreover, as will become clear in the general discussion, the comparison of the latency and the amplitude of the P300 across tasks enhanced our understanding of the cognitive processes involved in each task.
P300 Latency
The P300 latencies, measured as the most positive peak
in the 350 to 650 msec window at Pz (Table 1), were
signicantly different among the six tasks (“size,”
rhyme,” lexical decision-1 (LD-1), lexical decision-2 (LD-
2), lexical decision-3 (LD-3), and semantic (F(5, 115) =
21.29, p < 0.001, GG epsilon = 0.74). Post hoc Tukey-A
comparison tests revealed that P300 latency was sig-
nicantly shorter for the visual task (“size” decision)
than for all other tasks (p < 0.01), and shorter for the
LD-1 and rhyme tasks than for the semantic, LD-2, and
LD-3 tasks (p < 0.05). No other differences were signi-
cant.
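For readers who wish to see how such a measurement can be implemented, the following minimal sketch (Python with NumPy, not the software used in the original study) extracts the P300 latency and amplitude as the most positive point in the 350 to 650 msec window at Pz; the sampling rate, prestimulus interval, and variable names are illustrative assumptions.

    import numpy as np

    # Illustrative assumptions: a 1000-Hz average waveform at Pz spanning 1024 msec,
    # including a 100-msec prestimulus baseline (erp_pz is a placeholder array).
    fs = 1000
    prestim_ms = 100
    erp_pz = np.random.randn(1024)

    def p300_peak(erp, fs=1000, prestim_ms=100, window_ms=(350, 650)):
        """Latency (msec) and amplitude of the most positive point in the window."""
        start = (window_ms[0] + prestim_ms) * fs // 1000
        stop = (window_ms[1] + prestim_ms) * fs // 1000
        segment = erp[start:stop]
        idx = int(np.argmax(segment))   # most positive sample within the window
        return window_ms[0] + idx * 1000 // fs, segment[idx]

    latency_ms, amplitude = p300_peak(erp_pz)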
P300 Amplitude
The P300 peak amplitudes, measured at Pz (Table 3), seem to be gradually reduced from the visual task to the semantic task (F(5, 115) = 48.23, p < 0.001, GG epsilon = 0.76). Post hoc Tukey-A analyses revealed, however, that the P300 amplitude was about the same in the two shallowest tasks (size and LD-1) and significantly larger in these two tasks than in all other tasks (p < 0.01). The P300 amplitude in the deepest (semantic) task was significantly smaller than in all other tasks (p < 0.01). Post hoc comparisons also showed that the differences between the rhyme, LD-2, and LD-3 tasks were statistically significant (p < 0.01).
Hence, the P300 data suggest that in the present study,
as in other studies in which the level of processing has
been manipulated, shallower tasks were performed
faster than deeper tasks. Furthermore, assuming that the
amplitude of the P300 is influenced by the amount of
effort invested in the performance (e.g., Donchin, 1981)
and the variance in the latency of the response in indi-
vidual trials (jitter), the P300 amplitudes elicited by the
targets suggest that the responses in the deeper tasks
required more mental effort and were more variable
than those in shallower tasks.
DISCUSSION
The present study was designed to explore the time
course of processing visually presented words, as
reected by the neural electrical activity elicited while
reading words at different task-induced levels of process-
ing. An oddball paradigm was used in which the distinc-
tion between targets and nontargets was based on either
visual, phonologic, or semantic processes. In addition we
introduced a rhyme task in which we assumed the need
for phonetic processing in task performance. We focused
mainly on the ERPs elicited by nontargets for which the
negative waveforms were relatively "unmasked" by the robust P300 that is typically observed in response to targets.

Figure 9. Across-subjects average of P300 to target stimuli in the six tasks (size decision, lexical decision-1 (LD-1), rhyme decision, lexical decision-2 (LD-2), lexical decision-3 (LD-3), and semantic decision).

Four negative potentials distinct in latency and
scalp distribution were discerned, each associated with
a different level of processing: (1) one peaked at a
latency around 170 msec (N170) over the occipito-tem-
poral areas and distinguished between orthographic and
nonorthographic stimuli in the size-detection task; (2)
the second peaked at a latency around 320 msec (N320)
over midtemporal areas, was larger at left than at right
hemisphere sites, and distinguished between pronounce-
able and nonpronounceable letter strings in the rhyme
detection task; (3) the third peaked around 350 msec
(N350) over left fronto-temporal regions and distin-
guished between phonologically legal and phonologi-
cally illegal orthographic patterns in a lexical decision
task; (4) the fourth peaked around 450-msec latency
(N450) over left fronto-central regions and distinguished
between meaningful and meaningless phonologically le-
gal orthographic patterns in a semantic decision task. A
detailed examination of each of these negative potentials
and their interpretation will be deferred until after dis-
cussing the late positive potential elicited by the target
stimuli.
Modulation of P300
As is typical in oddball paradigms, all target stimuli in the
present study elicited a late positive potential that was
identied as P300 on the basis of task characteristics.
Several studies have emphasized the distinction between
a fronto-central (“P3a”) and a parietal (“P3b”) component
of the P300. The P3a is believed to reect the activation
of brain reactions to unexpected events (“processing of
surprise) and P3b appears to be associated with the
task-relevant categorization of oddball stimuli (Donchin,
1981; Verleger, Jaskowski, & Wauschkuhn, 1994). In the
present study, the centro-parietal distribution of the
P300 identied it as a P3b component. Many studies
suggested that the P3b peak latency may be used as a
temporal metric for stimulus evaluation (e.g., McCarthy
& Donchin, 1981) and that it is sensitive to categorical
decision strategies as well as the difculty of discriminat-
ing targets from nontargets (e.g., Kutas, McCarthy, &
Donchin, 1977). Its amplitude is determined by the task
difculty and the variance in the response latency in
single trials, the amount of attention resources invested
in the task, and design parameters such as the relative
frequency of the target or its physical salience (Ducan-
Johnson & Donchin, 1982). With this background in
mind, we will examine the characteristics of our four
tasks as reected by the latency and amplitude of the
P300 elicited by each target type.
Latency of P300: An Index of Task Complexity?
The P300 latency was significantly longer in the LD-1 and in the rhyme tasks than in the size-decision task and was longest for targets in the LD-2, LD-3, and semantic tasks. The latency did not differ significantly between the LD-1 and the rhyme tasks or among the LD-2, LD-3, and semantic tasks. Although, in general, the order in which the P300 in the different tasks peaked was congruent with the a priori determined level of processing, the correlation was not perfect. The significantly shorter latency to word-targets in LD-1 (where the nontargets were illegal nonwords) than in LD-2 (where the nontargets were pseudowords) supports Balota and Chumbley's (1984) suggestion that the rejection of illegal nonwords (as well as the acceptance of high-frequency words) is based on their orthographic familiarity rather than a deeper process of lexical search. Yet, the difference between the latency to word-targets in LD-1 and targets in the size task suggests that although the execution of both tasks was based on a shallow visual analysis, distinguishing words from illegal nonwords was more demanding than distinguishing targets on the basis of their size. Hence it appears that the P300 latency, like RTs, does not reflect the level of processing required to recognize the target but rather the complexity of the process and the decision time. This may also account for the absence of a significant difference between the P300 obtained in the semantic and the lexical decision tasks, which required distinguishing words from pseudowords (LD-2 and LD-3), that is, it could not be based on familiarity or pure phonological grounds, as was possible in the LD-1 and the rhyme tasks. The level of processing seems to be better reflected in the amplitude of P300, to which we now turn.

Table 3. Mean P300 Latencies and Amplitudes (±SEm) Elicited by the Targets in the Different Tasks. P300 was measured as the largest positive potential value at Pz between 350 and 650 msec (SEm = standard error of the mean).

Task       Target stimuli                                 Latency (msec)   Amplitude (μV)
Size       Large-sized stimuli                            429 ± 30         17.85 ± 4.20
LD-1       Words among nonwords                           485 ± 50         17.17 ± 5.92
Rhyme      Words and pseudowords rhyming with vitrail     499 ± 46         13.29 ± 4.16
LD-2       Words among pseudowords                        554 ± 56         12.15 ± 4.86
LD-3       Pseudowords among words                        548 ± 69         8.79 ± 4.06
Semantic   Abstract words                                 530 ± 70         6.36 ± 3.48
The Amplitude of P300: An Index of Levels of
Processing?
The P300 amplitude was equally high in the LD-1 and
the size tasks, signicantly higher than in all other tasks.
Furthermore, it gradually decreased from the rhyme to
the LD-2, LD-3, and semantic decision tasks. This variation
in amplitude cannot be accounted for by the probability
of the target because it was similar across tasks. It also
cannot be explained by the nature of the target stimuli
because the order of the amplitudes did not seem to
reect such factors. For example, although the frequency
of the abstract-word targets was higher than that of
concrete-word targets and of pseudowords, they elicited
a lower P300 amplitude. Moreover, the amplitude of the
P300 elicited by the physically outstanding targets in the
size task (which were twice as large as all other stimuli)
was equal to that in LD-1 where all the targets were
words, equal in size to the nontarget stimuli. This
suggests that the amplitude of the P300 may have cap-
tured the similarly shallow processes required to distin-
guish words from illegal nonwords or target stimuli that
were physically larger than the nontargets. It may also
have captured the increasingly deeper processes in-
duced by the different tasks from the rhyme to the
semantic decisions. Although this interpretation is tempt-
ing, it is obviously not the only one possible. A different
factor that may account for the variation in the ampli-
tude of the P300 in the different tasks is differential jitter
in the latency of single trials. It is possible that for simple
visual discriminations the decision time was about the
same across the single trials. On the other hand, it is
conceivable that in more difficult tasks the time required for discriminating between targets and nontargets varied
across words. Consequently, the average decision-related
ERP should have a lower amplitude (and a longer duration)
in the deeper than in the shallower tasks. For example,
as is evident in Figure 9, the P300 was considerably
broader in the semantic task than in the size or LD-1
tasks. This possibility is supported by the larger variance
across subjects in the P300 latency for the LD-3 and
semantic tasks than for the size and LD-1 tasks. Hence,
the alternative interpretation is that the amplitude of the
P300 in different tasks, like its latency, is (inversely)
correlated with their complexity.
Whether the P300 variation across tasks reflected only task complexity or also, at least indirectly, the level of processing induced in each task, its pattern of variation supports our a priori distinction between the tasks. Consequently, we can now analyze the ERPs elicited by nontarget stimuli which, "unmasked" by the P300,⁵ may have better reflected the neural activity associated with each type of process.
Visual/Orthographic Processing
The most important outcome of the analysis of the ERPs
elicited in the size-decision task was that orthographic
and nonorthographic stimuli elicited significantly differ-
ent responses without further distinction within each
category. This pattern is similar to the results obtained
intracranially by Nobre and colleagues (1994), suggesting
that early in the course of visual processing, before
phonological analysis occurs, the brain may distinguish
between orthographic and nonorthographic visual infor-
mation. Unlike the intracranial ERPs, however, in which
the distinction between the two categories was limited
to the N200, on the scalp the distinction between
categories at the peak of N170 was followed by a longer
lasting epoch during which the ERPs elicited by ortho-
graphic and nonorthographic stimuli were distinct.
Furthermore, whereas intracranially orthographic and
nonorthographic stimuli elicit N200 potentials in adja-
cent but not overlapping regions of the middle fusiform
gyrus, this pattern may have been reected at T5 and T6
as an interaction between the stimulus category and the
hemispheric asymmetry: The N170 elicited by ortho-
graphic stimuli was larger than that elicited by non-
orthographic stimuli in the left posterior-temporal/
occipital regions of the scalp (T5) and smaller in the
right posterior-temporal/occipital regions of the scalp
(T6). Moreover, the difference between the two catego-
ries began considerably earlier at T5 (140 msec) than at T6 (210 msec). This difference suggests that al-
though both hemispheres probably respond to both
orthographic and nonorthographic visual information,
the well-documented superiority of the left hemisphere
for processing language-related stimuli may affect early
visual processing. In fact, the response of the right hemi-
sphere may have been initiated by activity starting first
on the left. Such a system could account, for example,
for pure alexia resulting from lesions in the left occipital
cortex that also include the splenium of the corpus
callosum (e.g., Benton, 1975; Campbell & Regard, 1986;
Damasio & Damasio, 1983; Henderson, 1986).
Assuming that, at least for orthography, processing
specicity cannot be innate, the early distinction in the
visual system between orthographic and nonorthog-
raphic information (as well as the demonstrated spe-
cicity of adjacent regions for human faces, Bentin et al.,
1996; George et al., 1996) suggests that different parts of
the visual system can learn to tune themselves to re-
spond selectively to specic (probably ecologically im-
portant) visual information.
Although far-eld recorded, in conjunction with in-
tracranial recordings and neuroimaging data, the ortho-
graphic specicity observed in the present ERP results
may also provide a better understanding of the func-
tional neuroanatomy of the orthographic lexicon. They
suggest the existence of a functionally specialized stream
within the ventral visual pathway, specically involved in
252 Journal of Cognitive Neuroscience Volume 11, Number 3
the perceptual processing of orthographic stimuli. More-
over, consistent with PET ndings, the present results
suggest that this process is particularly conspicuous in
the left hemisphere. PET studies led some researchers to
suggest that written word forms are processed (or at
least initiated) in the occipital lobes (Petersen et al.,
1989; Petersen et al., 1990; Posner & Petersen, 1990;
Posner, Petersen, Fox, & Raichle, 1988; Posner & Raichle,
1994). Other researchers suggest that the extrastriate
cortex responds to any complex visual stimulus whereas
the specicity for visual word forms starts only in the
midtemporal regions (Beauregard et al., 1997; Bookhei-
mer, Zefro, Blaxton, Gaillard, & Theodore, 1995; Chert-
kow, Bub, Beauregard, Hosein, & Evans, in press; Howard
et al., 1992; Price et al., 1994). The lateral-occipital scalp
distribution of the ERPs and the SCD calculated on the
basis of the ERPs elicited in the orthographic task sup-
ports a suggestion, based on intracranial recordings, that
regions in the extrastriate cortex respond preferentially
to orthographic information, and this process may be the
first step toward the formation of a word visual pattern
(e.g., Allison et al., 1994; Nobre et al., 1994). However,
these regions do not distinguish between legal and ille-
gal word forms and therefore cannot be the sole mecha-
nism that subserves the orthographic lexicon. We will
return to this issue when discussing the pattern of the
ERPs elicited in the lexical decision stages of the present
study.
Phonological/Phonetic Processing
Unlike decisions regarding stimulus size, which can be
made just as well on orthographic and nonorthographic
stimuli, rhyme decisions based on written stimuli usually
require the transformation of orthographic patterns into
phonological patterns from which phonetic codes can
be discerned.⁶ Consequently, in the rhyme task we used
only orthographic stimuli that, as expected, elicited an
N170 evident particularly at the posterior temporal and
occipital sites (see Figure 5, TP7 and TP8). As anticipated
on the basis of the results in the size task, the ERPs
elicited by the three orthographic stimulus types were
not distinguishable at the level of the N170. One hun-
dred milliseconds later, however, two categories of stim-
uli were evidently processed differently. One included
the words and the pseudowords for which the formation
of a phonological pattern was possible and on the basis
of which the phonetic decision could have been made.
The second category comprised the nonwords that
could not be transformed into a coherent phonological
structure and consequently allowed a negative decision
based on shallow orthographic analysis. The difference
between the ERPs elicited by pronounceable and non-
pronounceable stimuli was probably associated with the
difference in processing the two stimulus categories, as
well as to a difference in decision-making strategies.
Whereas, following the N170, the ERPs elicited by non-
words were dominated by a positive-going potential
(possibly a P300 associated with the fast and easy reach-
ing of a negative decision), the ERPs elicited by pro-
nounceable stimuli consisted first of a negative
potential (N320), at the resolution of which the P300-
like potential was observed.
Because the cognitive and linguistic processes re-
quired for making rhyme decisions are not evident in
performance, it is impossible to unequivocally link the
N320 to a particular cognitive event. For example, al-
though the full activation of the lexicon is not necessary
for generating phonetic codes,⁷ as revealed by the cor-
rect decisions made for pseudowords, phonology prob-
ably mediates between orthography and phonetics.
Furthermore, it may be possible to decide whether two
orthographic patterns rhyme on the basis of matching
their abstract phonological realizations (i.e., without
converting the phonemes to phones). Therefore, we can-
not discard the possibility that the activity reflected by
the ERPs in the rhyme-decision task was only phonologi-
cal. Indeed, although the spatial distributions of the
N320 and N350 did not completely overlap, both poten-
tials were maximal at T3. Moreover, the onset of the
difference between phonologically legal and illegal stim-
uli began slightly sooner in the lexical decision task (270
msec at T3) than in the phonetic decision task (295 msec
at T3). This difference, however, was not significant, a fact that is hardly surprising if we assume that phonological processes were involved in both tasks. Yet, com-
pared with the N320, the distribution of the N350 is
slightly more anterior in the left temporal lobe and
clearly broader in circumference, including parietal and
fronto-parietal areas that were not activated in the pho-
netic task. The difference in latency and scalp distribu-
tion between the N320 and the N350 that was observed
in the lexical decision tasks suggests that the cognitive
processes involved in these two tasks did not entirely
overlap. It is possible that the N320 is associated with
the phonetic transformation performed on pronounce-
able orthographic patterns, a process that began not
earlier than 270 msec from stimulus onset, following the
initiation of the orthographic analysis.
Assuming that ERPs are, at least partially, associated
with cognitive events and reflect their time course and,
to some extent, their underlying neural basis, the prece-
dence of the N320 over the N350 and the partial overlap
in scalp distribution suggests that while phonological
units were activated in both tasks, the formation of
phonetic codes is faster (concluded sooner) than the
additional lexical (or postlexical) processes required to
reach a lexical decision. The scalp distribution of the
N320 was very different from that of the N170, being
particularly conspicuous at midtemporal-parietal sites,
predominantly over the left hemisphere. This pattern is
inconsistent with the findings of Rugg (1984) (see also
Praamstra, Meyer, & Levelt, 1994; Rugg & Barrett, 1987),
who reported a right-hemisphere dominant N450 in a
rhyme-matching task. Yet, both the scalp distribution and
the considerably shorter latency of the N320 relative to
the N450 suggest that two different cognitive phenom-
ena were tapped in the two studies. The N450 may be
associated with a relatively late, postlexical phonological
process, whereas the N320 could represent an early
lexical or prelexical process of grapheme-to-phoneme-to-
phone translation. This suggestion is supported by the
distribution of the N320, which roughly corresponds to
Wernicke’s area. This distribution is consistent with the
data reported in several PET studies in which temporo-
parietal activation was found when subjects performed
rhyme-detection tasks on visually or auditory presented
words (Petersen & Fiez, 1993; Petersen et al., 1989).
Interestingly, these temporal/temporo-parietal regions
were not activated by simple auditory stimuli, including
tones, clicks, or rapidly presented synthetic syllables
(Lauter, Herscovitch, Formby, & Raichle, 1985; Mazziotta,
Phelps, Carson, & Kuhl, 1982). Moreover, clinical neuro-
psychological literature reported that lesions surround-
ing the left sylvian fissure (Wernicke's area, insular cortex, supramarginal gyrus) may cause a deficit in
sound categorization and an inability to arrange sounds
into coherent speech (Marshall, 1986). Hence, the ERP
data in the present study concur with previous neuroi-
maging and neuropsychological evidence regarding the
neuroanatomical distribution of areas associated with
phonetic processing, suggesting that the phonetic analy-
sis of written words starts at about 270 msec from
stimulus onset, about 150 msec after the onset of ortho-
graphic analysis.
Lexical and Semantic Processing
Lexical decisions do not imply the processing of the
word meaning; phonological patterns can be correctly
recognized as words even if their meaning is not known.
Yet, evidence for semantic priming at a short SOA fol-
lowing task-induced letter-level processing of the prime
suggests that the access to the semantic network and the
processing of the word’s meaning is the default action
of the word perception mechanism (see Smith, Bentin,
& Spalek, submitted, for a comprehensive discussion).
Consequently, it is very difficult to disentangle lexical/phonological and semantic processes in single-word recognition when only performance measures are used. It is not surprising, therefore, that influential models tend to devalue the role
of phonological processing in word recognition, suggest-
ing that following the orthographic analysis (on the basis
of which, for example, a logogen is activated, Morton,
1969), the activation of the word’s meaning in the cog-
nitive/semantic system is a “direct” next step. One of the
aims of the present study was to explore the possibility
of distinguishing between lexical/phonological and se-
mantic processing by taking advantage of the time con-
tinuous measure provided by ERPs. A comparison
between the ERPs elicited in the rhyme, lexical decision,
and semantic decision experiments suggests that
phonological and semantic processes are indeed distinct
in time course and possibly also in their functional
neuroanatomy.
Recall that lexical decision processes were examined
in the present study in three separate oddball experi-
ments that differed in the characteristics of the distinc-
tion between targets and nontarget stimuli. We assumed
that the cognitive processes required for each distinc-
tion modulated the ERPs elicited by the nontargets in
each experiment. As in the rhyme task, the ERPs in the
lexical decision tasks distinguished mostly between the
nonwords (which required only a shallow, orthographic
process to be categorized) and the phonologically legal
stimuli. Although the ERPs elicited by words and
pseudowords were apparently distinguished better in
the lexical decision task than in the rhyme task, this
difference failed to reach statistical significance (p = 0.085). In contrast, a significant distinction was found
between words and pseudowords in the semantic deci-
sion task. The difference between the effect of stimulus
type in the lexical and semantic decision tasks might be
explained by assuming that different cognitive processes
were necessary for making each kind of decision. For
example, whereas lexical decisions may be based primar-
ily on activating phonological units in the lexicon, se-
mantic decisions probably require a more extensive and
deeper elaboration of the word’s meaning. Consequently,
although the activation of word meaning may start in
parallel with phonological matching and may even help
the lexical decision process, semantic decisions elicit
cortical activation that should usually last longer. Indeed,
in the present study, the onset of the difference between
the ERPs elicited by each stimulus type in the lexical and
semantic decision tasks were not very far apart, whereas
the epoch during which different ERPs were elicited by
each stimulus type was longer in the semantic than in
the lexical decision task. Differences between the functional neuroanatomy of the semantic and lexical activity are suggested by the significantly different distribution of
the N350 and N450, the two most prominent negative
potentials that were elicited in the lexical decision and
semantic tasks, respectively. Whereas the N350 was larg-
est at T3 and was distributed over the midtemporal and
supratemporal regions, the semantic decision seemed to
involve, in addition, more anterior and superior areas of
the temporal lobes and adjacent regions in the left fron-
tal lobe. This distribution (particularly its left-hemi-
sphere-dominant asymmetry) is different from that
usually found for N400 potentials in sentences (Kutas &
Hillyard, 1982) or lexical decision tasks (e.g., Holcomb,
1993). It is, however, consistent with PET findings in
tasks that require semantic activity (e.g., Demonet et al.,
1992) and fMRI studies of word generation (McCarthy
et al., 1994). At the very least, this distribution supports
a dissociation between pure phonological and semantic
activity, consistent with neurological studies that have
described a double dissociation between dyslexic pa-
tients who can read words without understanding their
meaning (e.g., Schwartz, Saffran, & Marin, 1980), and
patients who understand the meaning of spoken words
but are unable to read them (for a recent review see Ellis
& Young, 1996).
Comparing phonologically legal and illegal ortho-
graphic patterns across all tasks suggests that the linguis-
tic-related ERP activity (in single-word processing)
was reected in negativities whose peak latency
preceded the P3b. This nding is congruent with ample
evidence that has been published since the discovery
of the N400 (Kutas & Hillyard, 1980). In the present
lexical decision task, the most prominent negativity
peaked at 350 msec. As mentioned above, words
and pseudowords elicited similar ERPs at this latency.
This finding seems to contradict the well-established
RT difference between words and pseudowords in lexi-
cal decision tasks. Our data, however, were derived
from a lexical decision paradigm different from
the ordinary word/nonword decision tasks. First, it re-
quired no speeded RTs, and therefore some of the
factors influencing the RTs in lexical decision tasks were
inconsequential in the present paradigm. Second,
and more important, the ERPs measured for the present
comparisons were not elicited by the target stimuli.
Both the words and the pseudowords were equally ir-
relevant for the subject's task and were therefore mem-
bers of the same task-related response category. Indeed,
the amplitude of the P300 elicited by the words was
signicantly higher than that elicited by the
pseudowords. In conclusion, we suggest that the N350
may be associated with the phonological analysis of the
orthographic pattern applied to both words and
pseudowords.
In the semantic task, the difference between words
and pseudowords was apparently divided into two
distinct epochs. The first ended at about 350 msec from
stimulus onset (the peak latency of the negative poten-
tial in the lexical decision tasks). During this epoch the
ERPs elicited by words and pseudowords did not sig-
nicantly differ one from another. Therefore we suggest
that the ERP activity elicited during this period is asso-
ciated with phonological processes that are similar
in the lexical decision and the semantic tasks. During
the second epoch, words and pseudowords were clearly
different. This difference started at about 350 msec
and culminated at the peak of the N450, which is
not seen in the lexical decision task (Figures 2E and 5).
Surprisingly, the N450 elicited by pseudowords was
signicantly larger than that elicited by words. In general,
we (as well as others) assume that larger negativities
reect more extensive processing that, in this experi-
ment, was semantic (cf. the modulation of the N400
by semantic priming, Bentin et al., 1985, or by repetition,
Rugg, 1985, in lexical decision). Because the task was
to distinguish between abstract and concrete words,
one approach could have been to perform a lexical
decision first and then continue the semantic processing
only for words. Such an approach should have resulted
in a larger N450 for words than for pseudowords. A
second approach was also possible, however. In this
approach the reader would attempt to decide directly
whether a phonologically legal orthographic pattern is an abstract word or not (i.e., without making a word/pseudoword distinction first). If this approach
is taken, deciding that a (known) concrete word is not
abstract may be easier (and faster) than deciding that a
pseudoword is not an (infrequent) abstract word. Appar-
ently our subjects chose the second decision strategy.
Admittedly, this interpretation is post hoc. It is, however,
consistent with the larger P300 observed for words
(which might have been the source of the
word/pseudoword difference) and not (a priori) implau-
sible.
An Overview
The interpretation of the present results and their impli-
cations for the psycholinguistic and neural mechanisms
involved in processing individual words are valid to the
extent that (1) our tasks implicated, indeed, the pre-
sumed perceptual and linguistic processes and (2) the
scalp-recorded ERPs were modulated by these processes.
Although none of the above caveats can be easily over-
ridden, we accepted both assumptions as working hy-
potheses. With these caveats in mind, we can continue
our discussion and suggest some interpretations.
The ERPs elicited by the different stimuli across tasks
displayed several important patterns. First, regardless of
task and phonological values, orthographic patterns
elicit fairly similar activity at the occipital and occipito-
temporal scalp regions, predominantly in the left hemi-
sphere (Figure 6). This pattern suggests that letters
automatically activate visual modules that are tuned to
detect orthographic material prior to any deeper
linguistic process. Orthographic stimuli that allow phon-
ological and/or phonetic processing activate language-
processing-specic areas in the midtemporal and
supratemporal regions, predominantly in the left hemi-
sphere (Figure 8). These areas are probably involved in
phonological and phonetic processing. In addition, se-
mantic activity elicits ERPs that are distributed over the
anterior-temporal and fronto-central scalp areas. In the pre-
sent study we used only orthographic patterns. Other
studies, however, showed similar ERP distribution in re-
sponse to visually presented objects (Barrett & Rugg,
1989) and even nonlinguistic stimuli such as unfamiliar
human faces (Barrett & Rugg, 1989; Bentin & McCarthy,
1994). Hence, the fronto-central areas activated in the
semantic decision task in the present study may be part
of a conceptual semantic memory system that may in-
clude, but does not necessarily totally overlap with, the
words’ meaning network. Interestingly, there seems to
be a correlation between the site of activity on the
anterior-posterior dimension, on the one hand, and the
depth of processing in general and linguistic processing
in particular, on the other (Figure 2B and 2E). Apparently
deeper processing of the orthographic patterns is
associated with activity in more anterior regions of the
temporal lobe. A similar conclusion has been reached
by McCarthy and his colleagues using intracranial re-
cordings (McCarthy et al., 1995; Nobre & McCarthy,
1995), and it is congruent with the functional organiza-
tion of the "ventral pathway" of the visual system de-
scribed by several authors (e.g., Felleman & Van Essen,
1991; Maunsell & Newsome, 1987; Van Essen & DeYoe,
1995).
The scalp distribution of the ERP activity in the differ-
ent tasks and their onset and time course are incongruent
with either a unied brain mechanism for word percep-
tion or a serial model of processing. The scalp distribu-
tion of the negative peaks, although overlapping to
some extent, was sufficiently distinct (across peaks)
to suggest that different neural networks may be in-
volved in each type of process. Overall, such a pattern
may support a word-recognition mechanism based on
a network of interrelated neural modules working in
synchrony, each of which is responsible for a particular
aspect of the word-recognition process. The peaks of the
negative components associated with each level of proc-
essing were different, later in deeper processing tasks
than in more shallow ones. Yet, the epochs during which
the ERPs were modulated by each task overlapped
in time to a great extent. Although the duration of an ERP
does not necessarily equal the processing time, the
two are probably connected. Therefore, the overlap
between the ERPs elicited in different tasks suggests the
onset of deeper levels of processing does not wait
for the shallower process to conclude. Such a pattern
should be more congruent with a cascade (McClelland,
1979) than with a serial-processing model of word rec-
ognition.
Although suggestive, this research is obviously not
conclusive. It opens the door, however, for the investiga-
tion of the existence of separate "functional modules"
involved in word recognition by providing converging
evidence for their functional neuroanatomical dissocia-
tion and describing their relative time course of activa-
tion.
METHODS
Subjects
Twenty-four right-handed volunteers (eight males), aged
19 to 30 years, were paid for their participation in the
experiment. They were all native French speakers with
normal or corrected-to-normal vision and without any
neurological or neuropsychological disorder.
Stimuli
The stimuli were 1368 words or wordlike four- to eight-
character strings (mean = 5.8). The stimuli were divided
into five types: (1) words in the French lexicon, (2) pseudowords, which were orthographic patterns that followed the rules of French phonology and orthography (e.g., "lartuble"), (3) orthographically illegal nonwords that were unpronounceable consonant letter strings (e.g., "rtgdfs"), (4) strings of alphanumeric symbols such as "&@$£," and (5) strings of forms such as "y[." The pseudowords were constructed by substituting
two letters in the selected words. Among the 432 words,
400 were concrete (e.g., placard) and 32 abstract (e.g.,
amour). The mean frequency of the concrete words was
1250, 1280, 1083, and 1720 (per 10 million, Imbs, 1971)
for the size, rhyme, lexical decision, and semantic deci-
sion tasks, respectively. A one-way ANOVA showed that
the difference between the frequencies of these groups
was not signicant (F(3, 336) < 1.00). The mean fre-
quency of the abstract words was 6228 per 10 million,
higher than the mean frequency of the concrete words
(1343 per 10 million). This difference, however, was
irrelevant to the comparisons made in the present study
because abstract words were only used as targets in the
oddball task and never compared with other word types.
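As a purely illustrative sketch of the construction rule described above (substituting two letters of a real word), the following Python fragment shows the basic idea; the original pseudowords were additionally required to respect French orthography and phonology, a constraint this toy function does not enforce, and the function name is our own.

    import random

    def make_pseudoword(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
        # Substitute two randomly chosen letter positions with different letters.
        # Note: legality with respect to French orthography/phonology is not checked here.
        letters = list(word)
        for pos in random.sample(range(len(letters)), 2):
            letters[pos] = random.choice([c for c in alphabet if c != letters[pos]])
        return "".join(letters)

    print(make_pseudoword("placard"))  # e.g., a string like "plucarl" (output is random)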
Tasks
The entire study was divided into four tasks, each of
them inducing a different level of processing: visual/orthographic (task 1), phonological/phonetic (task 2),
phonological/lexical (task 3), and semantic (task 4). In
each task, the experimental paradigm was a mental odd-
ball task in which subjects had to mentally count the
number of target stimuli delivered randomly among non-
target stimuli. In task 1 (“size” task), the targets were
large-sized stimuli presented among standard-sized stim-
uli. The stimuli were words, pseudowords, nonwords,
strings of alphanumeric symbols, and strings of forms. In
task 2 (“rhyme” task), the targets were words or
pseudowords rhyming with the word vitrail, with or-
thographically possible endings being "aille," "ail," "aye," or "aï." Nontarget stimuli were words, pseudowords, and
nonwords. Task 3 included three lexical decision types:
in LD-1 the targets were words interspersed among ille-
gal nonwords; in LD-2 the targets were words inter-
spersed among pseudowords; in LD-3 the targets were
pseudowords interspersed among words. In task 4 (the
semantic-decision task), subjects had to count abstract
words interspersed among concrete words, pseudo-
words, and nonwords.
Procedure
Subjects sat on a reclining chair in an electrically and
acoustically shielded room facing a computer monitor.
The screen was at a distance of approximately 100 cm from the subject's eyes. A rectangular blue window (11 × 3 cm) was always present at the center of the
screen. Stimuli were foveally presented in this window
for 500 msec, at a rate of one every 1250 msec (SOA).
The subjects were instructed to avoid blinking while the
stimuli were exposed. They were given one practice
block before each of the four tasks, which were per-
formed within one session lasting about 1.5 h (not in-
cluding electrode placement procedures—see below).
The four experiments were presented in fixed order:
size task, rhyme task, lexical tasks, and semantic decision.
The fixed order was necessary to reduce the possible interference of a deeper-level process with a more superficial level. However, the order of the three lexical
decision tasks was counterbalanced (using a Latin square
design) between subjects. In each task, the stimuli were
delivered randomly in blocks of 50 items each. The first
task included 10 blocks (500 stimuli), and the second
task included 6 blocks (284 stimuli); each of the three
lexical decision types in the third task was composed of
2 blocks (100 stimuli), and the fourth task was com-
posed of 6 blocks (284 stimuli) (Table 1). Subjects re-
ported the number of target stimuli detected after each
block.
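The trial timing described above (500-msec exposure within a constant blue window, 1250-msec SOA, blocks of 50 items) can be illustrated with a short PsychoPy sketch. PsychoPy postdates the original study, so this is only a hypothetical re-implementation of the described procedure, with window size, colors, and item lists chosen arbitrarily.

    from psychopy import core, visual

    win = visual.Window(color="black", units="pix")
    frame = visual.Rect(win, width=440, height=120, fillColor="blue")  # stand-in for the 11 x 3 cm blue window
    text = visual.TextStim(win, text="", color="white")

    def run_block(items):
        # One oddball block: each item is shown for 500 msec within a 1250-msec SOA.
        for item in items:
            text.text = item
            frame.draw(); text.draw(); win.flip()
            core.wait(0.5)                 # stimulus exposure
            frame.draw(); win.flip()
            core.wait(0.75)                # blank interval completing the SOA
        # Subjects would then report the number of targets counted in the block.

    run_block(["placard", "lartuble", "rtgdfs"])  # toy block; real blocks contained 50 items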
ERP Recording
EEG was recorded by 32 Ag/AgCl scalp electrodes re-
ferred to the nose and positioned over symmetrical posi-
tions on the two hemispheres as illustrated in Figure 10.
The montage was guided by a special-purpose computer-
controlled system (Pastel) based on a three-dimensional
digitization of the head (Echallier, Perrin, & Pernier,
1992). During recording the electrode impedance was
kept below 2 kΩ.
Eye movement artifacts were controlled off-line by the
two prefrontal electrodes (FP1 and FP2) and an elec-
trode placed at the outer canthus of the right eye (YH).
Trials in which the potential measured in any of those channels exceeded 150 μV were rejected. Artifacts induced by amplifier blocking were avoided by excluding trials in which amplitudes above 250 μV were measured in any of the channels.
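A minimal sketch of these rejection criteria, assuming epochs stored as a NumPy array of shape (trials, channels, samples) in microvolts and an illustrative list of ocular-channel indices (the data layout and names are ours, not from the original analysis software):

    import numpy as np

    def keep_trial(epoch, eog_idx, eog_limit=150.0, blocking_limit=250.0):
        # Reject the trial if any ocular channel exceeds 150 microvolts or if any
        # channel exceeds 250 microvolts (amplifier blocking).
        eog_ok = np.all(np.abs(epoch[eog_idx]) <= eog_limit)
        amp_ok = np.all(np.abs(epoch) <= blocking_limit)
        return bool(eog_ok and amp_ok)

    def reject_artifacts(epochs, eog_idx):
        mask = np.array([keep_trial(ep, eog_idx) for ep in epochs])
        return epochs[mask], mask

    epochs = np.random.randn(200, 32, 1024) * 30.0               # placeholder data in microvolts
    clean, kept = reject_artifacts(epochs, eog_idx=[0, 1, 31])   # e.g., FP1, FP2, YH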
The EEG and electrooculogram (EOG) were amplified
with a bandpass of 0.03 to 320 Hz (sampling rate 1000
Hz) and stored on a computer disk for off-line analysis.
The ERPs were averaged separately for each stimulus
type in each experimental session over an analysis pe-
riod of 1024 msec, including 100-msec prestimulus. After
averaging, frequencies lower than 0.8 Hz and higher
than 16 Hz (3 dB) were digitally filtered out.
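The averaging and post-averaging filtering can be sketched as follows (NumPy/SciPy; a zero-phase Butterworth band-pass stands in for the digital filter actually used, whose exact design is not specified here, and the variable names are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def average_and_filter(epochs, fs=1000, band=(0.8, 16.0)):
        # epochs: (n_trials, n_samples) for one stimulus type at one electrode,
        # each 1024 msec long including the 100-msec prestimulus interval.
        erp = epochs.mean(axis=0)                        # average across trials
        b, a = butter(2, band, btype="bandpass", fs=fs)  # 0.8-16 Hz pass band
        return filtfilt(b, a, erp)                       # zero-phase filtering of the average

    epochs_words = np.random.randn(80, 1024)             # placeholder single-trial epochs
    erp_words = average_and_filter(epochs_words)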
Data Analysis
Scalp potential and current density topographic maps
were generated on a color graphics terminal using a
two-dimensional spherical spline interpolation (Perrin,
Pernier, Bertrand, & Echallier, 1989; Perrin, Pernier,
Bertrand, & Giard, 1987) and a radial projection from
Oz (back views) or from T3 or T4 (lateral views),
along the length of the meridian arcs. The topogra-
phies were color coded and were normalized to
the peak voltage value (positive or negative) of the
recording montage. As described in detail at the
beginning of the Results section, the electrophysiologi-
cal manifestations at the different levels of word processing were assessed by first calculating the statistical validity
of the difference between the mean amplitude of
the ERP elicited by each stimulus type in each decision
task. The means were calculated for a visually deter-
mined epoch during which the waveforms seemed to
be modulated by each task and for electrodes symmetri-
cally located across the right and the left hemispheres.
The onset of this difference was statistically determined
as the rst latency at which the difference between
waveforms was signicant using a series of point-by-
point t tests. In addition, a series of negative potentials
was associated with the different stimulus conditions
in each task. The mean amplitude of each component
was calculated for an epoch comprising 24 points (98 msec), 12 before and 12 after its visually determined
peak. To allow the comparison of the scalp distributions
of each component (and hence help distinguishing
among them), these values were calculated at the same
14 electrodes that covered the temporal and superior
temporal areas, symmetrically located over each hemi-
sphere. Finally, the latency of each peak was defined as the latency of the most negative point during the relevant epoch.

Figure 10. Thirty-two-channel electrode montage including all standard sites in the 10–20 system. The ground was located on the forehead between FP1 and FP2 and the nose was used as the reference.
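The component measures described in this Data Analysis section (mean amplitude over 24 points around a visually determined peak, peak latency as the most negative point in that window, and onset defined by point-by-point t tests) can be sketched as follows, assuming one sample per millisecond and a 100-msec prestimulus baseline; the original analysis presumably imposed additional constraints (e.g., runs of consecutive significant points) that are omitted here.

    import numpy as np
    from scipy.stats import ttest_rel

    PRESTIM = 100  # msec of prestimulus baseline included in each waveform

    def component_measures(erp, peak_guess_ms, half_win_ms=49):
        # Mean amplitude in a window centered on the visually determined peak,
        # and peak latency defined as the most negative point in that window.
        i0 = PRESTIM + peak_guess_ms - half_win_ms
        i1 = PRESTIM + peak_guess_ms + half_win_ms
        window = erp[i0:i1]
        return window.mean(), peak_guess_ms - half_win_ms + int(np.argmin(window))

    def onset_latency(cond_a, cond_b, alpha=0.05):
        # First post-stimulus latency at which point-by-point paired t tests
        # between two conditions (subjects x samples arrays) are significant.
        for t in range(PRESTIM, cond_a.shape[1]):
            if ttest_rel(cond_a[:, t], cond_b[:, t]).pvalue < alpha:
                return t - PRESTIM
        return None

    erps = np.random.randn(24, 1024)   # placeholder: 24 subjects x samples at one electrode
    amp, lat = component_measures(erps.mean(axis=0), peak_guess_ms=450)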
Statistical differences among the ERP components for
different stimulus types in each experimental session
(visual, phonetic, lexical, and semantic sessions) were
tested with repeated-measures ANOVAs. For all repeated
measures with more than 1 degree of freedom, the more conservative Greenhouse-Geisser adjusted df-values were
used. ANOVAs were followed by post hoc Tukey-A tests
or univariate F contrasts (Greenhouse & Geisser, 1959).
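For completeness, the Greenhouse-Geisser epsilon that underlies the adjusted degrees of freedom can be computed from the condition-by-condition covariance matrix; the sketch below (NumPy, with an illustrative subjects-by-conditions data layout) is not the original analysis code.

    import numpy as np

    def greenhouse_geisser_epsilon(data):
        # data: (n_subjects, k_conditions) array of the dependent measure.
        n, k = data.shape
        cov = np.cov(data, rowvar=False)                 # k x k covariance of conditions
        center = np.eye(k) - np.ones((k, k)) / k
        s = center @ cov @ center                        # double-centered covariance matrix
        return (np.trace(s) ** 2) / ((k - 1) * np.sum(s ** 2))

    data = np.random.randn(24, 3)                        # e.g., 24 subjects x 3 stimulus types
    eps = greenhouse_geisser_epsilon(data)
    df1, df2 = eps * (3 - 1), eps * (3 - 1) * (24 - 1)   # adjusted degrees of freedom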
Acknowledgments
This study was performed while Shlomo Bentin was a visiting
scientist at INSERM Unit 280 in Lyon and was supported by a
stipend from the French Ministry of Science and Technology.
The study was supported in part by NICHD #01994 to Haskins
Laboratories in New Haven, Connecticut and by grant #94-
0052/1 from the U.S.-Israel Binational Foundation.
Reprint requests should be sent either to Shlomo Bentin, De-
partment of Psychology, Hebrew University, Jerusalem, 91905
Israel, or to Marie-Helene Giard, INSERM U280, 151, Albert
Thomas, 69424 Lyon, Cedex 03, France.
Notes
1. Men were more asymmetric than women for orthographic stimuli (0.31 μV vs. 0.18 μV, and larger in the left than in the right occipito-parietal sites), whereas women were more asymmetric than men for nonorthographic stimuli (0.62 μV vs. 0.24 μV, and larger in the right than in the left occipito-parietal sites).
2. We will analyze this interaction in more detail in our analysis
of the distribution of the N350.
3. The signicantly larger N450 at F7 and F8 may, however,
reect the absence of the stimulus type effect at these sites.
Nonetheless, for pseudowords as well as for words, the largest
N450 was found at F7 (- 1.58 and 1.19 m V for pseudowords
and words, respectively).
4. An ANOVA comparing the N170 elicited at T5 and T6 in each task showed that there was indeed no significant main effect of task (F(3, 69) = 1.9, p = 0.15). A significant interaction between stimulus type and task, however, suggested that the difference between words, pseudowords, and nonwords was not the same across tasks. Separate analyses revealed that whereas the difference between words and pseudowords was not significant in any task, the nonwords elicited significantly larger N170 than the words in the lexical decision (F(1, 23) = 59.3, p < 0.001) and in the semantic decision (F(1, 23) = 56.3, p < 0.001) tasks. This interaction suggests that deeper tasks may have a top-down influence on the N170 recorded at the posterior-temporal lobes.
5. The positive amplitude elicited by nonwords in the
phonological and semantic tasks may, however, be a P300
reecting the faster decisions associated with these stimuli.
6. We chose a target that can rhyme with words ending in
different spellings. Hence, subjects could not have performed
this task by simply matching the orthographic patterns (see
Methods).
7. Some models of word recognition, however, suggest that
the phonological structures of pseudowords are derived by
activating lexical entries of analogous real words (e.g., Glushko,
1979).
REFERENCES
Allison, T., McCarthy, G., Nobre, A. C., Puce, A., & Belger, A.
(1994). Human extrastriate visual cortex and the percep-
tion of faces, words, numbers, and colors.
Cerebral Cortex,
5,
544–554.
Balota, D., & Chumbley, J. I. (1984). Are lexical decisions a
good measure of lexical access? The role of word fre-
quency in the neglected decision stage.
Journal of Experi-
mental Psychology: Human Perception and Performance,
10,
340–357.
Barrett, S. E., & Rugg, M. D. (1989). Event-related potentials
and the semantic matching of faces.
Neuropsychologia,
27,
913–922.
Barrett, S. E., & Rugg, M. D. (1990). Event-related potentials
and the phonological matching of pictures.
Brain and
Cognition,
14,
201–212.
Beauregard, M., Chertkow, H., Bub, D., Murtha, S., Dixon, R.,
& Evans, A. (1997). The neural substrate for concrete, ab-
stract, and emotional word lexica: A positron emission to-
mography study.
Journal of Cognitive Neuroscience,
9,
441–461.
Bentin, S. (1987). Event-related potentials, semantic proc-
esses, and expectancy factors in word recognition.
Brain
and Language,
31,
308–327.
Bentin, S. (1989). Electrophysiological studies of visual word
perception, lexical organization, and semantic processing:
A tutorial review.
Language and Speech,
32,
205–220.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G.
(1996). Electrophysiological studies of face perception in
humans.
Journal of Cognitive Neuroscience,
8,
551–565.
Bentin, S., Kutas, M., & Hillyard, S. A. (1993). Electrophysiologi-
cal evidence for task effects on semantic priming in audi-
tory word processing.
Psychophysiology,
30,
161–169.
Bentin, S., & McCarthy, G. (1994). The effect of immediate
stimulus repetition on reaction time and event-related po-
tentials in tasks of different complexity.
Journal of Experi-
mental Psychology: Learning, Memory, and Cognition,
20,
130–149.
Bentin, S., McCarthy, G., & Wood, C. C. (1985). Event-related
potentials, lexical decision and semantic priming.
Elec-
troencephalography and Clinical Neurophysiology,
60,
343–355.
Benton, A. L. (1975). Developmental dyslexia: Neurological as-
pects.
Advances in Neurology,
7,
2–41.
Besson, M., Fischler, I., Boaz, T., & Raney, G. (1992). Effects of
automatic associative activation on explicit and implicit
memory tests.
Journal of Experimental Psychology: Learn-
ing, Memory, and Cognition, 18,
89–105.
Bookheimer, S. Y., Zeffiro, T. A., Blaxton, T., Gaillard, W., & Theodore, W. (1995). Regional cerebral blood flow during
object naming and word reading.
Human Brain Mapping,
3,
93–106.
Campbell, R., Landis, T., & Regard, M. (1986). Face recognition and
lipreading: A neurological dissociation.
Brain,
109,
509–
521.
Carr, T. H., & Pollatsek, A. (1985). Recognizing printed words:
A look at current models. In D. Besner, T. G. Waller, & G. E.
MacKinnon (Eds.),
Reading research: Advances in theory
and practice 5
(pp. 1–82). New York: Academic Press.
Chertkow, H., Bub, D., Beauregard, M., Hosein, C., & Evans, A.
(in press). Visual and orthographic components of single
word processing: A positron tomography study.
Brain.
Chwilla, D. J., Brown, C. M., & Hagoort, P. (1995). The N400
as a function of the level of processing.
Psychophysiology,
32,
274–285.
Coltheart, M. (1985). Cognitive neuropsychology and the
study of reading. In O. S. M. Marin & M. I. Posner (Eds.),
Attention and performance X.
Hillsdale, NJ: Erlbaum.
Coltheart, M., Patterson, K. E., & Marshall, J. C. (1980).
Deep
dyslexia.
London: Routledge and Kegan Paul.
Damasio, A. R., & Damasio, H. (1983). The anatomic basis of
pure alexia.
Neurology,
33,
1573–1583.
Deacon, D., Breton, F., Ritter, W., & Vaughan, H. G., Jr. (1991).
The relationship between the N2 and the N400: Scalp dis-
tribution, stimulus probability, and task relevance.
Psycho-
physiology, 28,
185–200.
Demonet, J. F., Chollet, F., Ramsay, S., Cardebat, D., Nespou-
lous, J. L., Wise, R., Rascol, A., & Frackowiak, R. S. J. (1992).
The anatomy of phonological and semantic processing in
normal subjects.
Brain,
115,
1753–1768.
Donchin, E. . Surprise! . . . Surprise?
Psychophysiology,
18,
493–513.
Ducan-Johnson, C. C., & Donchin, E. (1982). The P300 com-
ponent of the event-related brain potentials as an index of
information processing.
Biological Psychology, 14,
1–52.
Echallier, J. F., Perrin, F., & Pernier, J. (1992). Computer-as-
sisted placement of electrodes on the human head.
Elec-
troencephalography and Clinical Neurophysiology,
82,
160–163.
Ellis A. W., Flude, B. M., & Young, A. W. (1987). Neglect dys-
lexia” and the early visual processing of letters in words.
Cognitive Neuropsychology, 4,
439–464.
Ellis, A. W., & Young, A. W. (1996).
Human cognitive neuro-
psychology: A textbook with readings.
Hove & London:
Erlbaum Psychology Press.
Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierar-
chical processing in primate visual cortex.
Cerebral Cor-
tex, 1,
1–47.
Frith, C. D., Friston, K. J., Liddle, P. F., & Frackowiak, R. S. J.
(1991). A PET study of word nding.
Neuropsychologia,
29,
1137–1148.
Frith, C. D., Kapur, K. J., Friston, P. F., Liddle, P. F., & Frack-
owiak, R. S. J. (1995). Regional cerebral activity associated
with the incidental processing of pseudo-words.
Human
Brain Mapping,
3,
153–160.
Frost, J. A., Springer, J. A., Binder, J. R., Hammeke, T. A., Bell-
gowan, P. S. F., Rao, S. M., & Cox, R. W. (1997). Sex does
not determine functional lateralization of semantic process-
ing: Evidence from fMRI.
Proceedings of the Third Interna-
tional Conference of the Society for Human Brain
Mapping.
Copenhagen, April.
George, N., Evans, J., Fiori, N., Davidoff, J., & Renault, B.
(1996). Brain events related to normal and moderately
scrambled faces.
Cognitive Brain Research,
4,
65–76.
Greenhouse, S. W., & Geisser, S. (1959). On methods in the
analysis of prole data.
Psychometrika,
24,
95–112.
Glushko, R. J. (1979). The organization and activation of or-
thographical knowledge in reading aloud.
Journal of Ex-
perimental Psychology: Human Perception and
Performance, 5,
674–691.
Henderson, V. W. (1986). Anatomy of posterior pathways in
reading: A reassessment.
Brain and Language,
29,
199–
233.
Henik, A., Friedrich, F. J., & Kellogg, W. A. (1983). The depend-
ence of semantic relatedness effects upon prime process-
ing.
Memory and Cognition,
11,
366–373.
Henik, A., Friedrich, F. J., Tzelgov, J., & Tramer, S. (1994). Capac-
ity demands of automatic processes in semantic priming.
Memory and Cognition,
22,
157–168.
Hillyard, S. A., & Kutas, M. (1983). Electrophysiology of cogni-
tive processing.
Annual Review of Psychology,
34,
33–61.
Hinton, G. E., & Shallice, T. (1991). Lesioning an attractor net-
work: Investigations of acquired dyslexia.
Psychological Re-
view,
98,
74–95.
Holcomb, P. J. (1986). ERP correlates of semantic facilitation.
In W. C. McCallum, R. Zappoli, & F. Denoth (Eds.),
Elec-
troencephalography and clinical neurophysiology supple-
ment 38. Cerebral psychophysiology: Studies in
event-related potentials.
Amsterdam: Elsevier.
Holcomb, P. J. (1993). Semantic priming and stimulus degrada-
tion: Implications for the role of the N400 in language
processing.
Psychophysiology,
30,
47–61.
Holcomb, P. J., & Neville, H. J. (1990). Auditory and visual se-
mantic priming in lexical decision: A comparison using
event-related brain potentials.
Language and Cognitive
Processes,
5,
281–312.
Howard, D., Patterson, K., Wise, R., Brown, W. D., Friston, K.,
Weiller, C., & Frackowiak, R. (1992). The cortical localiza-
tion of the lexicons. Positron emission tomography evi-
dence.
Brain,
115,
1769–1782.
Imbs, P. (1971).
Dictionnaire des fréquences: Vocabulaire lit-
téraire des XIX
°
et XX
°
siècles.
Nancy, France: CNRS.
Jared, D., & Seidenberg, M. S. (1991). Does word identica-
tion proceed from spelling to sound to meaning?
Journal
of Experimental Psychology: General,
120,
358–394.
Kutas, M., & Hillyard, S. A. (1980). Event-brain potentials to se-
mantically inappropriate and surprisingly large words.
Bio-
logical Psychology,
11,
99–116.
Kutas, M., & Hillyard, S. A. (1982). The lateral distribution of
event-related potentials during sentence processing.
Neuropsychologia,
20,
579–590.
Kutas, M., & Hillyard, S. A. (1989). An electrophysiological
probe of incidental semantic association.
Journal of Cogni-
tive Neuroscience,
1,
38–49.
Kutas, M., Hillyard, S. A., & Gazzaniga, M. S. (1988). Process-
ing of semantic anomaly by right and left hemispheres of
commissurotomy patients.
Brain,
111,
553–576.
Kutas, M., Lindamood, T. E., & Hillyard, S. A. (1984). Word ex-
pectancy and event-related potentials during sentence
processing. In S. Kornblum & J. Requin (Eds.),
Preparatory
states and processes
(pp. 217234). Hillsdale, NJ: Erlbaum.
Kutas, M., McCarthy,G., & Donchin, E. (1977). Augmenting
mental chronometry: The P300 as a measure of stimulus
evaluation time.
Science,
197,
792–795.
Kutas, M., & Van Petten, C. (1988). Event-related brain poten-
tial studies of language.
Advances in Psychophysiology,
3,
139–187.
Lauter, J., Herscovitch, P., Formby, C., & Raichle, M. E. (1985).
Tonotopic organization in human auditory cortex revealed
by positron emission tomography.
Hearing Research,
20,
199–205.
Marshall, J. C. (1986). The description and interpretation of
aphasic language disorder.
Neuropsychologia,
24,
5–24.
Maunsell, J. H. R., & Newsome, W. T. (1987). Visual processing
in monkey extrastriate cortex.
Annual Review of Neuro-
science 10,
395–367.
Mazziotta, J. C., Phelps, M. E., Carson, R. E., & Kuhl, D. E.
(1982). Tomographic mapping of human cerebral metabo-
lism: Auditory stimulation.
Neurology,
32,
921–937.
McCallum, W. C., Farmer, S. F., & Pocock, P. V. (1984). The ef-
fects of physical and semantic incongruities on auditory
event-related potentials.
Electroencephalography and
Clinical Neurophysiology,
59,
477–488.
McCarthy, G., Blamire, A. M., Puce, A., Nobre, A. C., Bloch, G.,
Hyder, F., Goldman-Rakic, P., & Shulman, R. G. (1994). Func-
tional magnetic resonance imaging of human prefrontal
Bentin et al. 259
cortex activation during a spatial working memory task.
Proceedings of the National Academy of Science U. S. A.,
91,
8690–8694.
McCarthy, G., Blamire, A. M., Rothman, D. L., Gruetter, R., &
Shulman, R. G. (1993). Echo-planar magnetic resonance im-
aging studies of frontal cortex activation during word gen-
eration in humans.
Proceedings of the National Academy
of Science U. S. A.,
90,
4952–4956.
McCarthy, G., & Donchin, E. (1981). A metric of thought: A
comparison of P300 latency and reaction time.
Science,
211,
77–80.
McCarthy, G., & Nobre, A. C. (1993). Modulation of seman-
tic processing by spatial selective attention.
Electro-
encephalography and Clinical Neurophysiology,
88,
210–219.
McCarthy, G., Nobre, A. C., Bentin, S., & Spencer, D. D. (1995).
Language-related eld potentials in the anterior-medial
temporal lobe: 1. Intracranial distribution and neural
generators.
Journal of Neuroscience,
15,
1080–
1089.
McClelland, J. L. (1979). On the time-relations of mental proc-
esses: An examination of systems of processes in cascade.
Psychological Review,
86,
287–330.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive ac-
tivation model of context effects in letter perception: 1 An
account of basic ndings.
Psychological Review,
88,
375–
407.
Morton, J. (1969). Interaction of information in word recogni-
tion.
Psychological Review,
88,
375–407.
Nobre, A. C., Allison, T., & McCarthy, G. (1994). Word recogni-
tion in the human inferior temporal lobe.
Nature,
372,
260–263.
Nobre, A. C., & McCarthy, G. (1994). Language-related ERPs:
Modulation by word type and semantic priming.
Journal
of Cognitive Neuroscience,
6,
233–255.
Nobre, A. C., & McCarthy, G. (1995). Language-related eld po-
tentials in the anterior-medial temporal lobe: 2. Effects of
word type and semantic priming.
Journal of Neurosci-
ence,
15,
1090–1098.
Patterson, K. E., & Kay, J. (1982). Letter-by-letter reading:
Psychological descriptions of a neurological syndrome.
Quarterly Journal of Experimental Psychology,
34A,
411–
441.
Patterson, K. E., Marshall, J. C., & Coltheart, M. (1985).
Sur-
face dyslexia.
London: Erlbaum.
Perrin, F., Pernier, J., Bertrand, O., & Echallier, J. F. (1989).
Spherical splines for scalp potential and current density
mapping.
Electroencephalography and Clinical Neuro-
physiology,
72,
184–187.
Perrin, F., Pernier, J., Bertrand, O., & Giard, M. H. (1987). Map-
ping of scalp potentials by surface plane interpolation.
Electroencephalography and Clinical Neurophysiology,
66,
75–81.
Petersen, S. E., & Fiez, J. A. (1993). The processing of single
words studied with positron emission tomography.
An-
nual Review of Neuroscience,
16,
509–530.
Petersen, S. E., Fox, P. T., Posner, M. I., Mintun, M., & Raichle,
M. E. (1989). Positron emission tomographic studies of the
processing of single words.
Journal of Cognitive Neurosci-
ence,
1,
153–170.
Petersen, S. E., Fox, P. T., Snyder, A. Z., & Raichle, M. E. (1990).
Activation of extrastriate and frontal cortical areas by
visual words and word-like stimuli.
Science,
249,
1041–
1044.
Posner, M. I., & Petersen, S. E. (1990). The attention system
of the human brain.
Annual Review of Neuroscience,
13,
25–42.
Posner, M. I., Petersen, S. E., Fox, P. T., & Raichle, M. E.
(1988). Localization of cognitive operations in the human
brain.
Science,
240,
1627–1631.
Posner, M. I., & Raichle, M. E. (1994).
Images of mind.
New
York: Freeman.
Praamstra, P., Meyer, A. S., & Levelt, W. J. M. (1994). Neuro-
physiological manifestations of phonological processing: La-
tency variations of a negative ERP component time-locked
to phonological mismatch.
Journal of Cognitive Neurosci-
ence, 6,
204–219.
Price, C. J., Wise, R. J. S., Watson, J. D. G., Patterson, K.,
Howard, D., & Frackowiak, R. S. J. (1994). Brain activity dur-
ing reading: The effects of exposure duration and task.
Brain,
117,
1255–1269.
Pugh, K. R., Shaywitz, B. A., Constable, R. T., Shaywitz, S. E.,
Skudlarski, P., Fulbright, R. K., Bronen, R. A., Shankweiler,
D. P., Katz, L., Fletcher, J. M., & Gore, J. C. (1996). Cerebral
organization of component processes in reading.
Brain,
119,
1221–1238.
Rugg, M. D. (1984). Event-related potentials and the
phonological processing of words and nonwords.
Neuro-
psychologia,
22,
435–443.
Rugg, M. D. (1985). The effects of handedness on event-
related potentials in a rhyme-matching task.
Neuropsy-
chologia,
23,
765–775.
Rugg, M. D. (1990). Event-related potentials dissociate repeti-
tion effects of high- and low-frequency words.
Memory
and Cognition, 18,
367–379.
Rugg, M. D., & Barrett, S. E. (1987). Event-related potentials
and the interaction between orthographic and phonologi-
cal information in a rhyme-judgment task.
Brain and Lan-
guage,
32,
336–361.
Schwartz, M. F., Saffran, E. M., & Marin, O. S. M. (1980). Frac-
tionating the reading process in dementia: Evidence for
word-specic point-to-sound associations. In M. Coltheart,
K. E. Patterson, & J. C. Marshall (Eds.),
Deep dyslexia
(pp. 259–269). London: Routledge and Kegan Paul.
Shallice, T., & Warrington, E. K. (1977). The possible role of se-
lective attention in acquired dyslexia.
Neuropsychologia,
15,
31–41.
Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, de-
velopmental model of word and naming.
Psychological Re-
view,
96,
528–568.
Shaywitz, B. A., Shaywitz, S. E., Pugh, K. R., Constable, R. T.,
Shaywitz, S. E., Skudlarski, P., Fulbright, R. K., Bronen, R. A.,
Fletcher, J. M., Shankweiler, D. P., Katz, L., & Gore, J. C.
(1995). Sex differences in the functional organization of
the brain for language.
Nature,
373,
607–609.
Smith, M. C., Bentin, S., & Spalek, T. (submitted).
On the auto-
maticity of semantic priming at short SOAs
.
Smith, M. C., Theodor, L., & Franklin, P. E. (1983). The relation-
ship between contextual facilitation and depth of process-
ing.
Journal of Experimental Psychology: Learning,
Memory, and Cognition,
9,
697–712.
Van Essen, D. C., & DeYoe, E. A. (1995). Concurrent process-
ing in the primate visual cortex. In M. S. Gazzaniga (Ed.),
The cognitive neurosciences
(pp. 383–400). Cambridge,
MA & London: MIT Press.
Verleger, R., Jaskowski, P., & Wauschkuhn, B. (1994). Suspense
and surprise: On the relationship between expectancies
and P3.
Psychophysiology,
31,
359–369.
Wise, R. J., Chollet, F., Hadar, U., Friston, K., Hoffner, E., &
Frackowiak, R. (1991). Distribution of cortical neural net-
works involved in word comprehension and word re-
trieval.
Brain,
114,
1803–1817.
Zatorre, R. J., Meyer, E., Gjedde, A., & Evans, A. C. (1996). PET
studies of phonetic processing of speech: Review, replica-
tion, and reanalysis.
Cerebral Cortex,
6,
21–30.