Bilingualism: Language and Cognition 15 (2), 2012, 288–303. © Cambridge University Press 2011. doi:10.1017/S1366728911000319
Cross-language effects in written word recognition: The case of bilingual deaf children

ELLEN ORMEL
Centre for Language Studies, Radboud University Nijmegen; Behavioural Science Institute, Radboud University Nijmegen

DAAN HERMANS
Pontem, Royal Dutch Kentalis, Sint-Michielsgestel; Behavioural Science Institute, Radboud University Nijmegen

HARRY KNOORS
Behavioural Science Institute, Radboud University Nijmegen; Royal Dutch Kentalis, Sint-Michielsgestel

LUDO VERHOEVEN
Behavioural Science Institute, Radboud University Nijmegen
(Received: March 16, 2009; final revision received: December 22, 2010; accepted: December 22, 2010; first published online September 28, 2011)
* We would like to thank Marchien Hoffer for her contributions to this study. We also thank the anonymous reviewers for their valuable comments. Parts of the data for the present study have been presented at the 5th International Symposium on Bilingualism, 2005, and the 20th Annual CUNY Conference on Human Sentence Processing, 2007. This research was supported by Royal Dutch Kentalis.

Address for correspondence:
Ellen Ormel, Department of Linguistics, Radboud University Nijmegen, P.O. Box 9103, 6500 HD Nijmegen, The Netherlands
E.Ormel@let.ru.nl
In recent years, multiple studies have shown that the languages of a bilingual interact during processing. We investigated sign
activation as deaf children read words. In a word–picture verification task, we manipulated the underlying sign equivalents.
We presented children with word–picture pairs for which the sign translation equivalents varied with respect to sign
phonology overlap (i.e., handshape, movement, hand-palm orientation, and location) and sign iconicity (i.e., transparent
depiction of meaning or not). For the deaf children, non-matching word–picture pairs with sign translation equivalents that
had highly similar elements (i.e., strong sign phonological relations) showed relatively longer response latencies and more
errors than non-matching word–picture pairs without sign phonological relations (inhibitory effects). In contrast, matching
word–picture pairs with strongly iconic sign translation equivalents showed relatively shorter response latencies and fewer
errors than pairs with weakly iconic translation equivalents (facilitatory effects). No such activation effects were found in the
word–picture verification task for the hearing children. The results provide evidence for interactive cross-language
processing in deaf children.
Keywords: deaf children, cross-language interactions
A much-debated topic in bilingual research is whether the
two languages in the mental lexicon of bilinguals operate
as separate systems (Gerard & Scarborough, 1989) or in
an interactive manner (Dijkstra, Grainger & van Heuven,
1999; Dijkstra, van Heuven & Grainger, 1998; de Groot,
Delmaar & Lupker, 2000; Kroll, Bobb & Wodniecka,
2006; Kroll & Stewart, 1994). The evidence points in the
direction of the latter (van Hell & Dijkstra, 2002; Marian
& Spivey, 2003). Recognition of words in one language
is affected by lexical knowledge in the other (Dijkstra,
van Jaarsveld & ten Brinke, 1998; Gollan, Forster &
Frost, 1997). The languages used to study cross-language
interaction have been languages with varying degrees of
orthographic and phonological overlap (Bijeljac-Babic,
Biardeau & Grainger, 1997; de Groot et al., 2000; van Hell
& Dijkstra, 2002; van Heuven, Dijkstra & Grainger, 1998;
Kandil & Jiang, 2004; Lemhöfer, Dijkstra & Michel,
2004; Wang, Koda & Perfetti, 2003; van Wijnendaele &
Brysbaert, 2002). Indeed, Thierry and Wu (2004; also
Wu & Thierry, 2010) report cross-language interaction
in two languages (Chinese and English) with minimal
overlap. The present study examines interaction where the
languages in question are produced in different modalities
without any sharing of orthography and only minimal
sharing of phonology. We asked: Does cross-language
interaction occur for deaf children who use a sign language
and who can read Dutch?
Sign languages are the natural languages of deaf people. Dutch and Sign Language of the Netherlands (NGT), for example, are largely independent of each other and have fundamentally different grammars. In contrast to most spoken languages,
sign languages also do not have an accompanying written
system (Evans, 2004; Padden & Ramsey, 1998). This
means that there is no orthographic overlap between
a signed language and the written form of a spoken
language with the exception of a set of finger-spelled loan
words and initialized handshapes. An initialized handshape uses a finger-spelled letter as the handshape for a sign; this letter represents the first letter of the sign’s word translation equivalent. Furthermore, only
minimal phonological overlap between signed and spoken
languages is possible because signed languages rely upon
visual and spatial information (although a number of signs
are accompanied by specific mouth movements that relate
to a spoken word). Hearing and deaf people who know a sign language and who can read therefore provide an excellent test case for the study of interactive cross-language activation. Recently, in several studies with bimodal deaf
adults, cross-language written word-recognition effects
have been observed (Grote & Linz, 2003; Morford,
Wilkinson, Villwock, Piñar & Kroll, 2011). The main
question in the present study is whether or not signs
and sign elements are activated during the written word
recognition of deaf children, whose proficiency in either
language is still developing. The answer to this question
is important given the major implications this may have
for deaf children’s literacy development.
Interactive bilingual language access in deaf individuals
Studies of possible cross-language effects during the
written word recognition of bimodal bilinguals are limited
(Grote & Linz, 2003; Hanson & Feldman, 1989, 1991;
Morford et al., 2011; Treiman & Hirsh-Pasek, 1983).
Treiman and Hirsh-Pasek (1983) assessed the activation
of signs during the reading of English sentences by native
deaf adult signers. They found some activation of signs during sentence processing but no activation of spoken-language phonology or finger spelling.
Two decades later, Grote and Linz (2003) investigated
the effects of sign iconicity. They used a task in which
deaf and hearing signers judged the semantic relatedness
of stimulus pairs (a semantic relatedness judgment task).
Pairs of pictures and written words and pairs of pictures
and signs were presented on a computer screen, and
participants were instructed to decide whether or not the
items in the pairs were semantically related. Picture–sign
pairs were presented to deaf participants and hearing
participants who knew sign language (i.e., bimodal
participants). Picture–word pairs were presented to non-
signing hearing participants and to bimodal hearing
participants. Most interestingly, not only within-language
effects in the picture–sign task but also cross-language
effects in the picture–word task were found for the bimodal
hearing participants. Responses were faster when the
iconic features of the sign translation equivalent of the
word were visible in the picture (for example, the word
EAGLE and a picture of the beak of an eagle, where the
sign for eagle in German Sign Language (DGS) shows
the shape of a beak of an eagle) in comparison to the
same word EAGLE in combination with a picture that
does not show the iconic features of the sign translation
(for example, the word EAGLE and a picture of a wing of
an eagle).
Recently, Morford et al. (2011) examined cross-
modal bilingual effects during written word recognition
in a group of deaf bilingual adults. In a semantic
relatedness judgment task, pairs of written words
were presented sequentially on a computer screen and
participants decided whether or not the words were
semantically related. For semantically similar word
pairs and semantically dissimilar word pairs, half of
the items had phonologically related American Sign
Language (ASL) translation equivalents and half had
phonologically unrelated ASL translation equivalents.
For the semantically related pairs of words, those with
phonologically related ASL translation equivalents were
responded to faster than those with no such ASL
phonological relation. For the semantically unrelated
pairs of words, those with a phonologically related ASL
translation equivalent were responded to slower than
those with no such phonological relation. The results
thus showed that deaf adults activated ASL translation
equivalents when asked to process pairs of written English
words (but see Hanson & Feldman, 1989, 1991, who
found no evidence of cross-language activation of sign
translation equivalents during the reading of English word
primes followed by English target words by deaf adults).
In sum, the above findings provide evidence for cross-
language influences of signs during the comprehension of
written words. Treiman and Hirsch-Pasek (1983) provided
evidence with regard to the written sentence processing
of deaf adults, Grote and Linz (2003) provided evidence
for cross-language activation (sign iconicity) with regard
to written word processing by hearing adult signers,
and Morford et al. (2011) provided evidence for cross-
language activation (sign phonology) with regard to
written word processing by deaf adults.
The present study concerning deaf children is built
upon the results obtained in a study by Ormel, Hermans,
Knoors and Verhoeven (2009) on sign recognition by deaf
children. In that study, we found that at least two elements
of signs, namely sign iconicity and sign phonology,
affected the recognition of sign language. In the present
work, we examine whether the same two elements may
affect the written word recognition of bilingual deaf
children in a cross-language interactive manner, in line
with the findings for deaf and hearing adults (Grote &
Linz, 2003; Morford et al., 2011).
In all sign languages, the individual signs are composed
of parameters, which thus constitute the sign phonology
Figure 1. Examples of sign phonological relatedness: a strong sign phonological relation between two signs (DOG and CHAIR) and no sign phonological relation (DOG and COMB).
(e.g., Klima & Bellugi, 1979; Stokoe, 1980). The most
relevant parameters include handshape, movement of
the hands and arms, location of the hands relative
to the body, and hand-palm orientation. In Figure 1,
two sign pairs in NGT are presented. The first pair
shows extensive phonological overlap. The second pair
shows no phonological overlap. In the first pair (DOG–CHAIR),¹ there is overlap for location (neutral), overlap for hand-palm orientation (downward), and large overlap in movement (a straight line downward, although the movement in DOG is repeated) but a distinct handshape (open B-hand versus closed S-hand). In the second pair (DOG–COMB), there is a distinct location (neutral versus head), a distinct hand-palm orientation (downward versus sideways), a distinct handshape (open B-hand versus closed “money-hand”) and a partly distinct movement (a straight line downward, repeated only in DOG).

¹ It is standard practice to capitalize the written glosses for signs. When capitals are used, the reader can thus assume that the author is referring to a sign and not to a written or spoken word or the literal translation for the sign. In this article, no capitals are used when referring to words and pictures; instead, these are presented in italics and a larger font.
The parameters of handshape, hand-palm orientation,
movement and location thus help distinguish signs in a
manner similar to how the English phonemes /b/ and /w/
distinguish the spoken words ball and wall. Many models
of spoken and visual word recognition assume that the
presentation of a spoken or written word (e.g., dog) leads
to the activation of its sublexical (i.e., phonological or
orthographic) features (/d/, /o/ and /g/). These sublexical
features may automatically activate related words (e.g.,
dog, doll), and these words can compete for selection
during the word recognition process. In other words, it
is assumed in many models of spoken and written word
recognition, including the Cohort model of lexical access,
that word recognition can be construed as a process
of competition in which a cohort of phonologically
related words compete for selection as the spoken word
information becomes available (e.g., Gaskell & Marslen-
Wilson, 2002; Zwitserlood, 1996). Similarly, for sign
language, the recognition of a sign may start with the
activation of various elements (i.e., handshape, hand-
palm orientation, movement and location) and several
competing lexical candidates (i.e., neighbour signs) may
be activated as a result (see also Clark & Grosjean, 1982;
Emmorey & Corina, 1990). This is how we explained
part of our results in the Ormel et al. (2009) study.
Participants in the Ormel et al. study were shown signs
and pictures that had overlapping phonological elements
(i.e., neighbour status) but did not refer to the same
concept (e.g., DOG–CHAIR in NGT). The participants
then had to reject the sign–picture pair as not referring
to the same concept (i.e., provide a “no” response) in a
sign–picture verification task. Extended sign phonology
overlap was expected to create an interference effect, as
once the target sign has been activated, its sublexical
sign phonology also becomes activated (i.e., movement,
handshape, location and orientation information). Such
sublexical sign information can simultaneously activate
a cohort of sign neighbours (i.e., signs with overlapping
sublexical elements). The sign for DOG, for example,
can correctly activate DOG but also CHAIR – amongst
other candidates – because the signs for DOG and CHAIR
share a number of NGT elements but not the same
handshape. As expected, those sign–picture pairs with
considerable phonological overlap underlying them led
to slower and less accurate responses on the part of the
deaf children. Thus, the manipulation of sublexical sign
phonology showed that lexical competition occurs during
sign recognition, which is initiated by a match with active
sublexical phonological units.
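To make this competition account concrete, the following sketch (not part of the original study) encodes each sign as a bundle of the four phonological parameters and treats any other lexical entry that shares at least two parameters with the input as an activated competitor. The parameter values are simplified stand-ins for the DOG, CHAIR and COMB examples above; movement repetition is ignored for brevity.

```python
# Illustrative sketch: lexical competition driven by sublexical overlap.
# Signs are represented by four phonological parameters; candidates are
# activated in proportion to how many parameters they share with the input.

PARAMETERS = ("handshape", "movement", "location", "orientation")

# Hypothetical parameter values, loosely based on the DOG/CHAIR/COMB example.
LEXICON = {
    "DOG":   {"handshape": "open-B", "movement": "down",
              "location": "neutral", "orientation": "down"},
    "CHAIR": {"handshape": "closed-S", "movement": "down",
              "location": "neutral", "orientation": "down"},
    "COMB":  {"handshape": "money-hand", "movement": "down",
              "location": "head", "orientation": "sideways"},
}

def overlap(form_a: dict, form_b: dict) -> int:
    """Number of shared phonological parameters (0-4)."""
    return sum(form_a[p] == form_b[p] for p in PARAMETERS)

def cohort(input_sign: str, min_shared: int = 2) -> list:
    """Competitors activated by sublexical overlap with the input sign."""
    target = LEXICON[input_sign]
    return [(gloss, overlap(LEXICON[gloss], target))
            for gloss in LEXICON
            if gloss != input_sign and overlap(LEXICON[gloss], target) >= min_shared]

print(cohort("DOG"))  # [('CHAIR', 3)] -- CHAIR competes with DOG; COMB does not
```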
A second critical element for the recognition of sign
language is sign iconicity, which refers to the mapping between the form of a sign and its meaning, that is, the sign's level of transparency (see Ormel et al., 2009, for
effects of sign iconicity during sign recognition by deaf
children). In spoken languages, onomatopoeia is a classic
example of iconicity (Pietrandrea, 2002). Just as in a
spoken language, most of the relations between the form
of a sign and its meaning are arbitrary for sign language.
There are, however, signs that allow their meaning to be
retrieved more or less directly from their form (van der
Kooij, 2002), for example the NGT sign for “house”,
which shows the prototypical shape of the roof of a
house.
Sign–picture pairs were also used in the aforemen-
tioned study by Ormel et al. (2009) to examine whether
sign iconicity plays a role in the overt processing of
signs by deaf elementary school children. Extended sign
iconicity was expected to create a facilitation effect and
thus make it easier to respond “yes” to strongly iconic pairs
of items. The results in the 2009 study showed strongly
iconic signs to indeed elicit faster and more accurate
responses than weakly iconic signs and thus a facilitatory
effect of sign iconicity for the deaf children.
Bilingual deaf children present a unique opportunity
to study bilingual language processing in children (see
e.g., Bialystok, Luk and Kwan, 2005, for bilingual
processing in hearing children). Given Ormel et al.’s
(2009) finding that sign phonology and sign iconicity
affect sign recognition among deaf children, the question
we ask here is whether these aspects of lexical signs
are also activated cross-linguistically during written word
recognition.
The present study
In the Netherlands, deaf children usually grow up in
a joint sign language/sign supported spoken Dutch
language environment with mostly hearing parents. In
the vast majority of cases, NGT is the more natural
and accessible language for the children (Klatter-Folmer,
van Hout, Kolen & Verhoeven, 2006; Knoors, 2007).
The question is whether or not such knowledge of NGT
possibly affects their processing of written Dutch. As
mentioned before, the critical assumptions for the current
study are based on the Sign–Picture Verification Task
administered to deaf children by Ormel et al. (2009).
In the present study, the question is whether these same
sign elements (sign phonology and sign iconicity) are
activated in a Word–Picture Verification Task administered
to deaf children. Both deaf and hearing elementary school
children were asked to make “yes/no” decisions with
regard to whether the items in printed word–picture
pairs presented to them on a computer screen referred
to the same concept or not. Extended overlap between
the underlying NGT translation equivalents of the items
is expected to create an interference effect for (the
rejection of) conceptually unrelated pairs. That is, the
“no”-responses to simultaneously presented conceptually
unrelated word–picture pairs, for which the underlying
NGT (translation) equivalents are phonologically similar,
could be expected to be slower and less accurate than the
responses to simultaneously presented word–picture pairs
for which the underlying NGT (translation) equivalents
are not phonologically similar. Similarly, highly iconic
NGT translation equivalents are expected to produce
a facilitation effect (in the form of faster and more
accurate responses) for (the acceptance of) conceptually
identical pairs when compared to weakly iconic translation
equivalents.
Method
Participants
A total of ninety-eight hearing children and forty deaf
children participated in the present study. In the lower
grades, fifty hearing and twenty deaf children were
included. In the higher grades, forty-eight hearing and twenty deaf children were included (see Table 1).
Table 1. Participant characteristics.

Group                    Group size   Grades   Mean age     Gender
Deaf; lower grades       20           3/4      122 months   45% girls, 55% boys
Deaf; higher grades      20           5/6      144 months   30% girls, 70% boys
Hearing; lower grades    50           3        105 months   54% girls, 46% boys
Hearing; higher grades   48           5        132 months   50% girls, 50% boys

Note: Given that group numbers were small in the schools for the deaf children, the deaf children in grades 3 and 4 as well as in grades 5 and 6 were combined. The hearing children were included only to verify the non-existence of sign effects. Matching for age was for that reason not crucial.
The hearing children were in grade 3 (mean age = 8;9 [years;months], SD = .5) and grade 5 (mean age = 11;0, SD = .4). The
hearing children attended one of two elementary schools
in the Netherlands, and none of the hearing children were
familiar with NGT. The age of the deaf children ranged
from 9;1 to 13;1. The younger deaf children were in either grade 3 or 4² and between 9;1 and 10;11 (mean = 10;2, SD = .6). The older deaf children were in grade 5 or 6 and between 11;0 and 13;1 (mean = 12;0, SD = .7). All deaf children had a hearing
loss of more than 80 dB in the better ear. The deaf
children attended one of three schools for deaf education
in the Netherlands. All of these schools provided bilingual
deaf education with a curriculum that consisted of a
combination of NGT and Sign Supported Dutch (SSD). In
SSD, Dutch word order is used with the support of signs.
The deaf children had been taught NGT from the age of
four years at school. Prior to that age, many of the children
had already attended a preschool for deaf children and thus
interacted with caregivers who used sign language from
the age of two or three. Formal exposure to written Dutch
started at the age of four years.
² The age range within a class at a school for the deaf in the Netherlands can be larger than that within a mainstream elementary education class. One of the younger participants in Experiment 1 was already participating in the upper grades of the elementary school, for example.
Stimuli
The pictures originated from the Dutch Leesladder
[Reading Ladder] (Irausquin & Mommers, 2001), which
is a computer program for children with reading
disabilities. The pictures were coloured 6 cm × 6 cm line drawings representing nouns, presented on the right side of the computer screen; the words were presented on the left side of the computer screen.
Stimulus selection. The experimental stimuli were
established on the basis of two instrument design
studies. The first study involved a Phonological Similarity
Judgment Task in which signs had to be judged for
their degree of phonological similarity. The second study
involved an Iconicity Judgment Task in which signs had to
be judged for their degree of sign iconicity. The signs were
all part of the standard NGT lexicon. The (translation)
equivalents of the signs were used in the Word–Picture
Verification Task.
The stimuli for the Phonological Similarity Judgment Task used in the first instrument design study were
121 pairs of signs with different degrees of formational
parameter overlap (i.e., overlap in location, movement
and/ or handshape). The Dutch translation equivalents
for the sign pairs were largely unrelated with respect
to orthography, phonology and meaning. The signs
were presented on a computer screen to three deaf
and six hearing bimodal bilingual adults. Three of the
six hearing participants were employed as teachers of
NGT. The other three were in their final year of training to become sign language interpreters in the Netherlands. The participants were asked to judge, as rapidly as possible, the extent to which two consecutively presented signs were similar in form on a 7-point rating scale. The sign items were
presented randomly and varied with regard to the degree
of overlap in their form (i.e., phonology). Based upon
the similarity judgments of the nine participants, twenty-
four pairs of items with a large degree of phonological
overlap (minimal pairs overlapping for at least two of
the formational parameters) were selected for inclusion
in the Word–Picture Verification Task. In the second
instrument design study, which involved the Iconicity
Judgment Task, we demonstrated that hearing children
without any knowledge of sign language could indeed
recognize the meanings of strongly iconic NGT signs
(Condition 1) better than the meanings of weakly iconic
NGT signs (Condition 2). Thirty-one hearing elementary school children from grades 4, 5 and 6 in the Netherlands, aged ten to twelve years, were administered the Iconicity Judgment Task. Twenty-four strongly iconic
signs and twenty-four arbitrary signs (judged by the
experimenters) were presented on a computer screen. The
iconic properties of the signs resembled form features
of the referent (e.g., signs for MOUNTAIN [berg],
Figure 2. Level of sign iconicity. Condition 1: translation equivalent of a sign with strong sign iconicity; Condition 2: translation equivalent of a sign with weak sign iconicity. The participants were shown a word (e.g., the word “house” or “fruit”) and a picture (e.g., of a house or of fruit).
MOON [maan] and HOUSE [huis] have strongly iconic
properties, whereas signs for FRUIT [ fruit], MEAT
[vlees] and CLOCK [klok] do not have any strong
iconic properties). One second after the sign offset,
four words appeared on the screen simultaneously. The
participants were to determine which of the four words
referred to the presented NGT sign. Accuracy rates
were computed separately for each participant for both
test conditions. The expectation was that, if the iconic signs were indeed iconic, non-signers would derive the meanings of those signs more easily than the meanings of arbitrary signs. This was confirmed by the hearing
children. A significant effect was obtained for sign iconicity (t(46) = 4.76, p < .001; strong iconicity condition, M = 76.34%; weak iconicity condition, M = 44.13%). Based upon the outcomes of the Iconicity
Judgment Task, twenty-four translation equivalents for
strongly iconic signs were accepted for inclusion in the
Word–Picture Verification Task. In the present study,
“sign iconicity” refers to the iconicity of the sign
translation equivalent for the Dutch word presented; “sign
phonology” refers to the degree of phonological similarity
between the sign translation equivalent for the Dutch word
presented and the sign for the picture presented in each
pair.
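The item-level comparison reported above (t(46) = 4.76) has the form of an independent-samples t-test over the 24 + 24 per-sign accuracy scores. A minimal sketch, with fabricated accuracy arrays standing in for the norming data:

```python
# Sketch of the reported comparison: independent-samples t-test over
# per-item accuracies for 24 strongly iconic and 24 weakly iconic signs
# (df = 24 + 24 - 2 = 46). The arrays below are placeholders, not the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strong_iconic = rng.normal(0.76, 0.15, 24)  # accuracy per strongly iconic sign
weak_iconic = rng.normal(0.44, 0.15, 24)    # accuracy per weakly iconic sign

t, p = stats.ttest_ind(strong_iconic, weak_iconic)
print(f"t(46) = {t:.2f}, p = {p:.4g}")
```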
Stimulus conditions. In the Word–Picture Verification
Task, 192 word–picture pairs were presented: 50% of
the word–picture pairs were conceptual matches and
thus required a “yes” response while 50% of the
pairs were conceptual mismatches and thus required
a “no” response. The stimulus items were distributed
across six conditions reflecting the strengths of three
factors: Sign Iconicity (Condition 1: Strong sign iconicity,
conceptual match items; Condition 2: Weak sign iconicity,
conceptual match items; see Figure 2), Sign Phonology
(Condition 3: Strong sign phonological relation between
two underlying signs, conceptual mismatch items;
Condition 4: No sign phonological relation between
two underlying signs, conceptual mismatch items, see
Figure 3), and Semantics (Condition 5: Strong semantic
relation, conceptual mismatch items; Condition 6: No
semantic relation, conceptual mismatch items, see
Figure 4). The Semantics conditions were included only
to verify that the children were able to perform the word–picture verification task, by showing the expected semantic relatedness effects. Semantic relatedness was
expected to create an interference effect (Damian &
Bowers, 2003). The presentation of a spoken or written
word (e.g., dog) leads to the activation of semantic
features, and these features may automatically activate
Figure 3. Sign phonological relatedness. Condition 3: translation equivalents of signs with a strong sign phonological relation; Condition 4: translation equivalents of signs with no sign phonological relation. Participants were shown a word (e.g., the word “dog”) and a picture (e.g., of a chair or a comb).
semantically related words (e.g., cat, horse). The latter
words can then compete for selection during the word
recognition process. In other words, deaf children are
expected to be slower and less accurate in the rejection
of semantically related as opposed to unrelated Dutch
word–picture pairs. Strongly semantically related items were pairs within the same semantic category, such as “spoon” (word) and “fork” (picture) or “bean” (word) and “carrot” (picture).
Each of the six conditions contained twenty-four
unique word–picture combinations (see Table 2). In each
of the three factors (Sign Iconicity, Sign Phonology,
Semantics) different stimuli were included.
In the Sign Iconicity conditions, the pictures and
therefore the twenty-four unique pairs were repeated
once, which resulted in forty-eight items for Condition
1 and forty-eight items for Condition 2, or ninety-six
pairs referring to the same concept (e.g., the word for
“house” and a picture of a “house”). The rationale behind
the repetition of the twenty-four items in Conditions 1 and 2 was based upon the design of Conditions 3 through 6, which also involved repetition of the words and pictures.
Table 2. Experimental design of the Word–Picture Verification Task.

Factor / Condition                  Response   Items × presentations
Sign Iconicity
  Condition 1: Strong iconicity     Yes        24 × 2
  Condition 2: Weak iconicity       Yes        24 × 2
  Sub-total                                    96
Sign Phonology
  Condition 3: Strong overlap       No         24 × 1
  Condition 4: No overlap           No         24 × 1
  Sub-total                                    48
Semantics
  Condition 5: Strong overlap       No         24 × 1
  Condition 6: No overlap           No         24 × 1
  Sub-total                                    48
Total                                          192
Figure 4. Semantic relatedness. Condition 5: word with a strong semantic relation to the picture; Condition 6: word with no semantic relation to the picture. The participants were shown a word (e.g., the word “spoon”) and a picture (e.g., of a fork or a purse).
The Sign Phonology and Semantics
conditions 3 through 6 each involved twenty-four pairs
that referred to different concepts (e.g., the word for “dog”
and a picture of a “chair”). For example, Condition 3
contained mismatch pairs for which the underlying signs
showed strong sign phonological relations. Condition 4
was constructed by recombining the words and pictures
from Condition 3 in such a manner that the underlying
NGT translation equivalents would be phonologically
unrelated. In Condition 3, for example, the Dutch word
for “dog” was combined with a picture of a “chair” (the
sign equivalents were phonologically related); similarly,
the Dutch word for “uncle” was combined with a picture
of a “comb” (the sign equivalents were phonologically
related). In Condition 4, sign phonologically unrelated
pairs were created via the combination of the word for
“dog” with a picture of a “comb”, for example, or the
combination of the word “uncle” with a picture of a
“chair”.
The same procedure of recombining words and pictures
was used for Condition 5 and Condition 6. The total
number of word–picture pairs that referred to different
concepts was ninety-six.
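The recombination that turns Condition 3 into Condition 4 (and Condition 5 into Condition 6) amounts to re-pairing the same words and pictures. A minimal sketch, assuming a simple cyclic shift of the pictures; in the actual stimulus construction, each new pair would additionally be checked for sign-phonological (or semantic) unrelatedness:

```python
# Sketch of the recombination logic: keep the same words and pictures but
# re-pair them so the underlying sign translation equivalents are unrelated.
# The pair list is illustrative (Dutch word + pictured object).

condition3 = [("hond", "stoel"),  # "dog" word, "chair" picture (related signs)
              ("oom", "kam")]     # "uncle" word, "comb" picture (related signs)

def recombine(pairs):
    """Shift the pictures by one position so every word gets a new picture."""
    words = [word for word, _ in pairs]
    pictures = [picture for _, picture in pairs]
    shifted = pictures[1:] + pictures[:1]
    return list(zip(words, shifted))

condition4 = recombine(condition3)
print(condition4)  # [('hond', 'kam'), ('oom', 'stoel')] -- dog + comb, uncle + chair
```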
Set of experimental items. The set of experimental
pairs presented to the children consisted of four sets
of stimulus pairs preceded by a practice set of ten
pairs that were different from the stimulus pairs. Each
of the four experimental sets contained four additional
practice pairs, followed by twelve pairs from Condition
1 (strong sign iconicity: conceptual match) and twelve
pairs from Condition 2 (weak sign iconicity: conceptual
match) or twenty-four pairs requiring a “yes” response;
six pairs from Condition 3 (strong sign phonological
relation: conceptual mismatch), six pairs from Condition
4 (no sign phonological relation: conceptual mismatch),
six pairs from Condition 5 (strong semantic relation:
conceptual mismatch), and six pairs from Condition 6
(no semantic relation: conceptual mismatch) or twenty-
four pairs requiring a “no” response. A total of four
practice trials and forty-eight unique word–picture pairs
thus constituted each experimental set. We programmed
the task in such a manner that the same pictures or
words did not occur within the same set of fifty-two
pairs.
While sign frequency measures are not yet available for
NGT, the Dutch word orthography frequency measures
could be determined on the basis of CELEX counts
(Baayen, Piepenbrock & van Rijn, 1993) and the child
database by Schrooten and Vermeer (1994). We did not
compare performance in the Sign Phonology condition
directly to Semantics or Sign Iconicity conditions, which
meant that word properties could differ for the three
factors.
The words corresponding to the pictures in Condition
1 had an average of 1.59 log frequency per million based
upon CELEX measures and an average frequency of
22.2 per 15,000 according to Schrooten and Vermeer.
The mean length was 4.66 letters. In Condition 2, the
words corresponding to the pictures had an average
of 1.52 log frequency per million based upon CELEX
measures and an average frequency of 22.7 per 15,000
according to Schrooten and Vermeer. The mean length
was 5.21 letters. For both the word frequencies and word lengths, the differences were not significant: for the log frequencies, F < 1; for the lengths, F(1,48) = 1.6, p > .1.
The recombination of the words and pictures in
Conditions 3 and 5 to create unrelated pairs in Conditions
4 and 6, respectively, produced identical word orthography
frequencies and word lengths for Conditions 3 and 4,
and Conditions 5 and 6, respectively. In Conditions 3
and 4, the words corresponding to the pictures had a
combined average of 1.52 log frequency per million
based upon CELEX measures and an average frequency
of 18.9 per 15,000 according to the children’s corpora
of Schrooten and Vermeer.³ The mean length was 5.46 letters.
Table 3. Word characteristics: mean word length (number of letters), frequencies based on the CELEX database, and frequencies based on the Schrooten and Vermeer database.

              Mean length   CELEX¹   Schrooten & Vermeer²
Condition 1   4.66          1.59     22.2
Condition 2   5.21          1.52     22.7
Condition 3   5.46          1.52     18.9
Condition 4   5.46          1.52     18.9
Condition 5   4.31          1.52     38.8
Condition 6   4.31          1.52     38.8

Notes: ¹ Log frequency per million. ² Frequency per 15,000 words.
In Conditions 5 and 6, the words corresponding to
the pictures had an average of 1.53 log frequency per
million based upon CELEX measures and an average
frequency of 38.8 per 15,000 according to Schrooten and
Vermeer. The mean length was 4.31 letters (see Table 3).
The words were presented using an Arial size 36 font.
³ In the present study, we assumed the word frequencies to be similar for hearing and deaf people. Whether word frequencies for hearing people also apply to deaf people is not known. Nevertheless, we are reasonably confident that the item control measures hold for the deaf children. More research is definitely needed to gain insight into word frequencies for deaf participants. In addition to group norms, we recommend individual frequency ratings.
Design
The order of presentation was constructed using a Latin
Square design. Within each of the four stimulus sets, the
forty-eight pairs were presented in a random order for
each child.
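A sketch of how such an ordering scheme might be generated is given below; the cyclic Latin square and the per-child random seed are assumptions, since the article does not specify the exact construction:

```python
# Sketch: a cyclic Latin square determines the order of the four stimulus
# sets across children; the pairs within each set are shuffled per child.
import random

def latin_square(n: int):
    """n x n cyclic Latin square: row i is (i, i+1, ..., i+n-1) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def presentation_order(child_index: int, sets):
    row = latin_square(len(sets))[child_index % len(sets)]
    rng = random.Random(child_index)      # per-child randomization (assumed)
    trials = []
    for set_index in row:
        pairs = list(sets[set_index])
        rng.shuffle(pairs)                # random order within each set
        trials.extend(pairs)
    return trials
```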
Apparatus
In this experiment, a Dell Latitude 640 laptop was used.
The test was constructed using the commercially available
software program E-Prime, version 1.0 (Schneider,
Eschman & Zuccolotto, 2002).
Procedure
The children’s teachers provided the instructions for the
task on a class basis, in NGT for the deaf children and in spoken Dutch for the hearing children. After instruction,
questions could be asked about the procedure. The
experiment was then conducted with groups of six children in a separate, well-lit room with two experimenters present. The distance between the children
and the laptop screen was approximately 40 cm. During
instruction, the participants were informed that a fixation
point would appear on the screen for one second, followed
by a word on the left side of the screen and a picture on the
right side of the screen. The picture and the word appeared
simultaneously and remained visible for the same amount
of time; both stimuli disappeared after the participant
responded or after a period of 10 seconds and the next
item followed. When both the word and picture referred
to the same concept, a match response was required and
the respondent had to press the “Enter” button, which
had a green mark on it. When the word and picture did
not refer to the same concept, a mismatch response was
required and the respondent had to press the “Caps Lock”
button, which had a red mark on it. The deaf participants
were instructed similarly to the hearing children, namely
to provide a match response when the word and picture
referred to the same concept, and without reference to
sign translation. After the instructions were provided and
understood by the participants, a practice set of ten items
was presented. The Dutch word for “break” then appeared
on the screen to indicate the start of a self-paced break;
the participants could continue by pressing one of the
response buttons. After every fifty-two items (i.e., four
practice pairs plus forty-eight stimulus pairs), a self-paced
break again occurred. Upon completion of the four sets of
stimuli, the Dutch word for “End” appeared.
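The trial logic just described (one-second fixation, simultaneous word and picture, a match/mismatch key press, a ten-second timeout) can be summarized in pseudo-implementation form. The experiment itself ran in E-Prime; the sketch below merely mirrors the timing rules, and `display` and `get_response` are hypothetical helpers:

```python
# Illustrative re-creation of one trial's timing logic (the real task was
# implemented in E-Prime 1.0). `display` and `get_response` are assumed,
# hypothetical interface functions.
import time

FIXATION_S = 1.0   # fixation point shown for one second
TIMEOUT_S = 10.0   # stimuli disappear after 10 s without a response

def run_trial(word, picture, display, get_response):
    display(fixation=True)
    time.sleep(FIXATION_S)
    display(word=word, picture=picture)   # word on the left, picture on the right
    start = time.monotonic()
    response = None
    while time.monotonic() - start < TIMEOUT_S:
        response = get_response()         # "Enter" = match, "Caps Lock" = mismatch
        if response is not None:
            break
    rt_ms = (time.monotonic() - start) * 1000
    display(blank=True)                   # both stimuli disappear
    return response, rt_ms
```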
Results
The deaf children were our main interest, but an overall
analysis of the deaf and hearing children’s responses was
preferred in order to show that deaf and hearing children
indeed performed differently on Sign Phonology and
Sign Iconicity conditions. The Levene’s test of Equality
of Error, which tests the null hypothesis that the error
variance of the dependent variable is equal across groups,
showed significantly different standard deviations for
the deaf versus hearing children. Both Sign Iconicity
conditions showed significant differences in the error
variance for the number of errors produced by the deaf
versus hearing children (Condition 1: F(1,136) =7.80,
p=.006; Condition 2: F(1,136) =14.04, p=.000).
Both Sign Phonology conditions also showed significant
differences in the error variance for the number of errors
produced by the deaf versus hearing children (Condition
3: F(1,136) =5.88, p=.017; Condition 4: F(1,136) =
9.56, p=.002).
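For readers who wish to reproduce this kind of homogeneity check, Levene’s test is available in standard statistics libraries. A minimal sketch with placeholder per-child error counts (not the study’s data):

```python
# Sketch of the homogeneity-of-variance check: Levene's test on per-child
# error counts for the deaf versus hearing groups. Arrays are placeholders.
from scipy import stats

errors_deaf = [5, 9, 2, 11, 7, 14, 3, 8]   # hypothetical per-child errors
errors_hearing = [4, 5, 6, 5, 4, 6, 5, 5]

W, p = stats.levene(errors_deaf, errors_hearing, center="mean")
print(f"Levene W = {W:.2f}, p = {p:.3f}")
```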
Keeping in mind the Levene’s test results, the
interactions between Hearing Status (deaf vs. hearing)
and Sign Iconicity, Sign Phonology and Semantics,
respectively, were nevertheless analyzed at this stage in the
study in order to gain insight into whether the responses
of the deaf versus hearing children differed. Significant
interactions were indeed found between Hearing Status
and all of the conditions. The results of an overall ANOVA (GLM) analysis of the response times showed a significant interaction between Hearing Status and Sign Iconicity (“yes” response) (F1(1,136) = 11.62, p < .01, ηp² = .08; F2(1,46) = 24.34, p < .001, ηp² = .35), between Hearing Status and Sign Phonology (“no” response) (F1(1,136) = 4.71, p < .05, ηp² = .03; F2 < 1), and between Hearing Status and Semantics (“no” response) (F1(1,136) = 6.06, p < .05, ηp² = .04).
The results of an overall ANOVA analysis of the error data again showed a significant interaction between Hearing Status and Sign Iconicity (F1(1,136) = 10.33, p < .01, ηp² = .07; F2(1,23) = 17.29, p < .001, ηp² = .27) and between Hearing Status and Sign Phonology (F1(1,136) = 5.28, p < .05, ηp² = .04; F2(1,23) = 1.29, p > .1, ηp² = .05). No significant interaction occurred between Hearing Status and Semantics for the error data, however (F1 < 1; F2 < 1).
The responses of the deaf and the hearing children were
further examined. Hearing children, who are not familiar
with sign language, should not show any sign effects in
the Word–Picture Verification Task.
The responses to the 192 word–picture pairs were
analyzed in an ANOVA. The results for the hearing
children are summarized in Table 4, and the results
for the deaf children are summarized in Table 5.
For each participant, the mean response time to the
correct responses (RT) and error scores were computed
for the three factors: Sign Iconicity (“yes” response),
Sign Phonology (“no” response) and Semantics (“no”
response). For the RT measures, inaccurate responses
and RTs that were more than two standard deviations
from the participant and item mean were excluded from
further analysis. For the younger hearing participants,
1.72% of the data was excluded. For the older hearing
participants, 1.92% of the data was excluded. For the
younger deaf participants, 0.4% of the data was excluded.
For the older deaf participants, 0.76% of the data was
excluded.⁴
⁴ Only two of the 40 deaf children were known to come from deaf families and have NGT as their native language. We looked at the individual data from these children to get an impression of whether their results somehow differed from the overall results. In the reaction time data for the sign iconicity conditions, both children showed precisely the pattern seen in the overall analyses: faster reaction times for strongly iconic items relative to weakly iconic items. For the error results, child 1 also showed the expected pattern, with more items correct in the strongly iconic condition than in the weakly iconic condition; child 2 showed no difference between the two sign iconicity conditions. In the sign phonology conditions, the reaction times were also precisely as expected and in keeping with the overall results: slower reaction times for the strongly phonologically related sign items than for the phonologically unrelated sign items. Child 1 showed more errors for the strongly phonologically related sign items than for the unrelated sign items; child 2 showed equal performance in the two conditions.
Table 4. Mean response times (correct responses only, in milliseconds) and proportions of correct responses on the word–picture verification task for hearing children in third or fifth grade (standard deviations in parentheses).

                                               RT           Correct responses
Sign Iconicity
  3rd Grade   Condition 1: Strong iconicity    1660 (478)   .93 (.06)
              Condition 2: Weak iconicity      1670 (424)   .93 (.07)
  5th Grade   Condition 1: Strong iconicity    1269 (299)   .93 (.04)
              Condition 2: Weak iconicity      1298 (278)   .94 (.04)
Sign Phonology
  3rd Grade   Condition 3: Strong overlap      2224 (708)   .91 (.07)
              Condition 4: No overlap          2244 (702)   .91 (.07)
  5th Grade   Condition 3: Strong overlap      1666 (421)   .93 (.06)
              Condition 4: No overlap          1655 (514)   .93 (.06)
Semantics
  3rd Grade   Condition 5: Strong overlap      1983 (487)   .90 (.07)
              Condition 6: No overlap          2001 (596)   .93 (.07)
  5th Grade   Condition 5: Strong overlap      1578 (355)   .90 (.07)
              Condition 6: No overlap          1543 (395)   .94 (.06)
Table 5. Mean response times (correct responses only, in milliseconds) and proportions of correct responses on the word–picture verification task for deaf children in third/fourth or fifth/sixth grade (standard deviations in parentheses).

                                                   RT           Correct responses
Sign Iconicity
  3rd/4th Grade   Condition 1: Strong iconicity    1465 (465)   .93 (.06)
                  Condition 2: Weak iconicity      1596 (455)   .91 (.08)
  5th/6th Grade   Condition 1: Strong iconicity    1459 (334)   .94 (.06)
                  Condition 2: Weak iconicity      1582 (371)   .91 (.08)
Sign Phonology
  3rd/4th Grade   Condition 3: Strong overlap      2123 (700)   .87 (.11)
                  Condition 4: No overlap          2006 (629)   .89 (.14)
  5th/6th Grade   Condition 3: Strong overlap      2058 (619)   .89 (.07)
                  Condition 4: No overlap          1981 (584)   .94 (.08)
Semantics
  3rd/4th Grade   Condition 5: Strong overlap      1988 (648)   .88 (.15)
                  Condition 6: No overlap          1876 (593)   .93 (.09)
  5th/6th Grade   Condition 5: Strong overlap      1887 (520)   .91 (.08)
                  Condition 6: No overlap          1722 (420)   .95 (.07)
In separate analyses for the deaf vs. the hearing children, Sign Iconicity was treated as a within-subjects
and between-items factor; Sign Phonology was treated as
a within-subjects and within-items factor; and Semantics
was treated as a within-subjects and within-items factor;
those three factors were the independent variables. Again,
RT and error were the dependent variables. Grade was
treated as a between-subjects factor and within-items
factor, and main effects of Grade as well as interactions
between Grade and Sign Iconicity, Grade and Sign
Phonology, and Grade and Semantics were calculated.
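The exclusion rule described above (drop incorrect trials, then trim RTs lying more than two standard deviations from the participant mean and from the item mean) might be expressed as follows; the column names are assumptions about how the trial-level data could be laid out:

```python
# Sketch of the RT trimming procedure. Assumes a trial-level table with
# columns "participant", "item", "rt" (ms) and "correct" (0/1).
import pandas as pd

def within_2sd(trials: pd.DataFrame, key: str) -> pd.Series:
    """True for rows whose RT is within 2 SD of their group's mean RT."""
    grouped = trials.groupby(key)["rt"]
    deviation = (trials["rt"] - grouped.transform("mean")).abs()
    return deviation <= 2 * grouped.transform("std")

def trim(trials: pd.DataFrame) -> pd.DataFrame:
    correct = trials[trials["correct"] == 1]
    keep = within_2sd(correct, "participant") & within_2sd(correct, "item")
    return correct[keep]
```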
Response time data for Sign Iconicity (“yes” responses)
Hearing children. For the hearing children, no main effect of Sign Iconicity was obtained; word–picture pairs with strongly versus weakly iconic sign (translation) equivalents did not influence the response times of the hearing children (F1(1,96) = 2.07, p > .1, ηp² = .02; F2 < 1). A main effect of Grade was detected (F1(1,96) = 25.37, p < .001, ηp² = .21; F2(1,46) = 612.93, p < .001, ηp² = .93), with the fifth graders responding faster than the third graders (see Table 4). There was no interaction between Grade and Sign Iconicity (F1 < 1; F2 < 1).
Deaf children. For the deaf children, a significant main effect of Sign Iconicity was found (F1(1,38) = 24.97, p < .001, ηp² = .40; F2(1,46) = 10.84, p < .01, ηp² = .19). Word–picture pairs with strongly iconic sign (translation) equivalents were responded to faster than word–picture pairs with weakly iconic sign (translation) equivalents. Neither a main effect of Grade (F1 < 1; F2(1,46) = 1.15, p > .1, ηp² = .02) nor an interaction between Grade and Sign Iconicity (F1 < 1; F2 < 1) was found for the deaf children (see Table 5).
Response time data for Sign Phonology (“no” responses)

Hearing children. For the hearing children, no main effect of Sign Phonology was found; word–picture pairs with strongly overlapping versus non-overlapping sign (translation) equivalents did not affect the response times (F1 < 1; F2 < 1). A significant main effect of Grade was found, with the fifth grade children responding faster than the third grade children (F1(1,96) = 22.99, p < .001, ηp² = .19; F2(1,23) = 147.12, p < .001, ηp² = .87). No significant interaction between Grade and Sign Phonology was found for the hearing children (F1 < 1; F2(1,23) = 1.04, p > .1, ηp² = .04).
Deaf children. A significant main effect of Sign Phonology was found for the deaf children (F1(1,38) = 6.97, p < .05, ηp² = .16; F2(1,23) = 1.26, p > .1, ηp² = .05). The deaf children were slower to respond to conceptually unrelated Dutch word–picture pairs when the NGT signs underlying the word and the picture overlapped strongly in phonology as opposed to when they did not overlap. Once again, neither a main effect of Grade (F1 < 1; F2 < 1) nor an interaction between Grade and Sign Phonology (F1 < 1; F2 < 1) was found for the deaf children.
Response time data for Semantics (“no” responses)

Hearing children. A main effect of Semantics on the hearing children’s response times was not found (F1 < 1; F2 < 1). However, a significant main effect of Grade was found, with the fifth graders responding faster to both semantically related and semantically unrelated pairs than the third graders (F1(1,96) = 21.87, p < .001, ηp² = .19; F2(1,23) = 307.85, p < .001, ηp² = .93). No significant interaction between Grade and Semantics was found for the hearing children (F1(1,96) = 1.48, p > .1, ηp² = .02; F2(1,23) = 1.52, p > .1, ηp² = .06).
Deaf children. An inhibitory main effect of Semantics on the response times of the deaf children was found in the participant analyses (F1(1,38) = 12.03, p < .001, ηp² = .24; F2 < 1): semantically related word–picture pairs were responded to more slowly on average than semantically unrelated word–picture pairs. A marginally significant main effect of Grade was observed in the item analyses (F1 < 1; F2(1,23) = 3.52, p < .1, ηp² = .13). No significant interaction between Grade and Semantics was found (F1(1,38) = 1.24, p > .1, ηp² = .03; F2 < 1).
Error data for Sign Iconicity (“yes” responses)

Hearing children. A main effect of Sign Iconicity was not found in the error analyses for the hearing children (F1(1,96) = 1.70, p > .1, ηp² = .02; F2 < 1). The error analyses also showed no main effect of Grade for the hearing children (F1 < 1; F2 < 1). There was also no significant interaction between Grade and Sign Iconicity (F1 < 1; F2 < 1).

Deaf children. For the deaf children, Sign Iconicity produced a significant main effect in the analyses of the error rates (F1(1,38) = 6.35, p < .05, ηp² = .14; F2(1,46) = 18.46, p < .001, ηp² = .29). The deaf children made fewer errors on those word–picture pairs with underlying strong sign iconicity. There was no main effect of Grade for the deaf children (F1 < 1; F2 < 1). No significant interaction between Grade and Sign Iconicity was found (F1 < 1; F2 < 1).
Error data for Sign Phonology (“no” responses)

Hearing children. There was no main effect of Sign Phonology (F1 < 1; F2 < 1). A marginal effect of Grade was found for the error data, but only in the item analyses (F1(1,96) = 2.03, p > .1, ηp² = .02; F2(1,23) = 3.06, p < .1, ηp² = .12). No significant interaction between Grade and Sign Phonology was found for the error data from the hearing children (F1 < 1; F2 < 1).

Deaf children. The deaf children made more errors in response to conceptually unrelated Dutch word–picture pairs when the NGT signs underlying the word and the picture were strongly overlapping as opposed to not overlapping (F1(1,38) = 9.28, p < .01, ηp² = .19; F2(1,23) = 2.79, p < .1, ηp² = .11). No main effect of
Grade was found for the deaf children in the participant analyses (F1 < 1). However, the item analyses showed a significant effect (F2(1,23) = 5.38, p < .05, ηp² = .19). No significant interaction between Grade and Sign Phonology was found in the participant analyses (F1 < 1), but the item analyses did show a significant interaction for the deaf children’s error data (F2(1,23) = 4.86, p < .05, ηp² = .18). Sign Phonology uniformly affected the deaf children’s error data irrespective of grade in the participant analyses (F1(1,38) = 6.35, p < .05, ηp² = .14). In the item analysis, however, sign phonology only affected the error data of the older deaf children (F2(1,23) = 6.34, p < .05, ηp² = .22), not the younger deaf children (F2 < 1).
Error data for Semantics (“no” responses)

Hearing children. A main effect of Semantics was found (F1(1,96) = 20.82, p < .01, ηp² = .18; F2(1,23) = 6.69, p < .01, ηp² = .23). More errors were made on word–picture pairs with a strong semantic relation than on pairs with no semantic relation. A significant main effect of Grade was not found for the error data from the hearing children, which means that the two grades showed similar error levels (F1 < 1; F2 < 1). No significant interaction between Grade and Semantics was found either (F1 < 1; F2 < 1), which means that the errors produced by the two grades were similarly affected by Semantics.

Deaf children. For the deaf children, a significant main effect of Semantics showed more errors to be made on semantically related word–picture pairs than on semantically unrelated pairs (F1(1,38) = 9.44, p < .01, ηp² = .20; F2(1,23) = 3.78, p < .1, ηp² = .14). In the item analyses, a marginally significant main effect of Grade was detected (F1 < 1; F2(1,23) = 3.23, p < .1, ηp² = .12), with the older deaf children appearing to make fewer errors than the younger children on the Semantics items. No significant interaction was found between Grade and Semantics for the error data from the deaf children (F1 < 1; F2 < 1).
Discussion and conclusion
In this study, evidence was found for bilingual cross-
language processing on the part of deaf children. During
the word–picture verification task, the non-target (i.e.,
sign) language underwent activation as indicated by the
sign phonology inhibition effects and the sign iconicity
facilitation effects. When the phonology of the sign
(translation) equivalents underlying the mismatching
word–picture pairs overlapped partly, inhibition occurred;
that is, the activation of the phonologically related signs
during the word–picture verification task resulted in
slower and less accurate responses on the part of the
deaf children. Conversely, when the underlying sign
(translation) equivalents for the target word–picture pairs
were highly iconic (i.e., the meaning of the sign resembled
the form of the sign to a considerable extent), facilitation
occurred; that is, matching pairs with strongly iconic sign
translation equivalents were responded to more quickly
and accurately than those words with only weakly iconic
sign translation equivalents. We can thus conclude that as-
pects of both the sign phonology and sign iconicity of the
sign (translation) equivalents underlying the word–picture
pairs presented to the deaf children were activated during
their processing of the stimuli. Significant semantic effects
were also found and thus reflect a general disposition to
activate semantically related items when reading single
words. This condition was added to verify that the children
were actually performing the critical experimental tasks
(Sign Phonology and Sign Iconicity) appropriately.
For the sake of thoroughness, it was further
demonstrated that sign effects do not occur for hearing
children, which means that the inhibitory and facilitatory
effects found to occur for the deaf children during the
word–picture verification task can only be attributed to
the bilingual activation of their sign language knowledge.
As expected, Semantics produced a main effect in the
error data, which showed the hearing children to activate
semantically related words when reading single Dutch
words. Significant grade effects in the response times
showed that the older hearing children were faster and, in
some cases, made fewer errors than the younger children
in all conditions.
The word–picture verification results reported here for
bilingual deaf children are in line with the results of an
increasing number of studies of bilingual people (e.g.,
Wang et al., 2003; van Wijnendaele & Brysbaert, 2002).
Bilingual language processing has been demonstrated
not only for languages with similar scripts but also
for languages with highly different scripts, and for
languages with no script overlap. This last is the case
for studies of cross-language activation for English and
Chinese (see, e.g., Thierry & Wu, 2004; Wu & Thierry,
2010), and also for a signed language and a written
language. One innovative aspect of the present study
is that the two languages are produced in two different
modalities without sharing orthography and only minimal
phonology. The results of the present study extend the
findings of cross-language activation for a signed and
a spoken/written language in deaf adults and hearing
adults by showing that cross-language effects already
influence lexical processing before deaf children have
attained full proficiency in either language. None of the
studies showing cross-language activation looked into the
time course of the activation.
In Figure 5 (see also Ormel, 2008), the processes that may be involved in the written word recognition of deaf children, in line with the present results, are presented schematically (see the BIA+ model of Dijkstra & van Heuven, 2002, for the source of the current model).

Figure 5. Sign activation during written word recognition in bilingual deaf children: the Deaf Bilingual Interactive Activation model. The model comprises, from bottom to top, a letter string; sub-lexical orthography and sub-lexical sign (3); lexical orthography (1) and lexical sign (2, 4); and semantics.
According to this view, once lexical orthography has been
activated for the purposes of written word recognition of
the word in a word–picture pair (1), the sign translation
equivalent for the target word becomes activated (2). Sign
phonology (i.e., sublexical sign elements) is also activated
in the form of sign movement information, handshape,
location of the sign and orientation of the hands (3).
When this occurs, not only the correct combination
of sublexical sign elements is activated but also other
combinations involving one or more of the four sign
elements. For example, in the DOG–CHAIR pair, the
word dog may activate the sign for DOG as well as its
phonological neighbour CHAIR (Figure 1). Arguably, the
picture in this pair also activates the sign for CHAIR.
This overlapping activation creates a conflict for the
required “no” answer to this conceptually non-matching
pair. The inhibition effect of sign phonology thus occurs as
a result of underlying competing lexical items that contain
overlapping combinations of sublexical sign elements (see
also Thierry & Wu, 2004, for similar results with bilingual
Chinese–English hearing students).
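To make the competition account concrete, the following sketch implements a toy version of sub-lexical overlap and activation spreading. It is not the authors' computational model, and the NGT-like parameter values for DOG, CHAIR, and HOUSE are invented for illustration.

```python
# A minimal sketch (not the authors' implementation) of the competition
# account: signs are bundles of four sub-lexical elements, and a written
# word's sign translation co-activates signs that share those elements.
# The sign forms below are invented for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class SignForm:
    handshape: str
    movement: str
    orientation: str
    location: str

    def overlap(self, other: "SignForm") -> int:
        """Count shared sub-lexical elements (0-4)."""
        return sum(
            a == b
            for a, b in zip(
                (self.handshape, self.movement, self.orientation, self.location),
                (other.handshape, other.movement, other.orientation, other.location),
            )
        )

# Hypothetical forms: DOG and CHAIR share three of four elements,
# making them sign-phonological neighbours; HOUSE shares none.
LEXICON = {
    "DOG":   SignForm("B", "tap", "down", "leg"),
    "CHAIR": SignForm("B", "tap", "down", "torso"),
    "HOUSE": SignForm("5", "arc", "in", "neutral"),
}

def sign_activation(target: str) -> dict:
    """Activation spreading from a word's sign translation to every
    lexical sign, proportional to sub-lexical overlap."""
    source = LEXICON[target]
    return {sign: form.overlap(source) / 4.0 for sign, form in LEXICON.items()}

# Reading "dog" fully activates DOG, but CHAIR is co-activated (0.75);
# if the picture shows a chair, this residual activation conflicts with
# the required "no" response, yielding slower and less accurate answers.
print(sign_activation("DOG"))  # {'DOG': 1.0, 'CHAIR': 0.75, 'HOUSE': 0.0}
```

On this toy account, predicted interference grows with the number of shared sub-lexical elements, in line with the inhibition observed for pairs with strong sign phonological relations.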
Whether sign phonology or, for that matter, sign iconicity actually mediates the retrieval of the meanings of written words (pre-conceptual activation) or is instead activated only after access to meaning (post-conceptual activation) remains an open question for further investigation. One way to explore the exact locus of this contact is a study in which words and pictures (or word pairs) are presented sequentially, with the stimulus onset asynchrony (SOA) of the stimuli varied. Such a paradigm would provide further insight into the time course of bilingual lexical activation.
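A minimal sketch of how such an SOA design could be assembled follows; the SOA levels, condition labels, item names, and the build_trials helper are illustrative assumptions rather than actual materials.

```python
# A sketch of the proposed follow-up design: word and picture presented
# sequentially with a varying stimulus onset asynchrony (SOA). All SOA
# values, condition labels, and items are hypothetical placeholders.

import itertools
import random

SOAS_MS = [0, 150, 300, 600]  # hypothetical SOA levels (milliseconds)
CONDITIONS = ["sign phonology", "sign iconicity", "semantic", "control"]
ITEMS = [("dog", "chair.png"), ("house", "house.png")]  # (word, picture)

def build_trials(seed: int = 1) -> list:
    """Fully cross items, conditions, and SOAs; randomize presentation order."""
    trials = [
        {"word": word, "picture": pic, "condition": cond, "soa_ms": soa}
        for (word, pic), cond, soa in itertools.product(ITEMS, CONDITIONS, SOAS_MS)
    ]
    random.Random(seed).shuffle(trials)
    return trials

# On each trial the word is shown first; after trial["soa_ms"] the picture
# appears and the verification response and its latency are recorded.
# Plotting the sign-phonology effect as a function of SOA would indicate
# whether the cross-language activation is pre- or post-conceptual.
for trial in build_trials()[:3]:
    print(trial)
```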
The present results showed sign iconicity to play a
facilitating role in the written word recognition of deaf
elementary school children (i.e., their provision of “yes”
responses in a word–picture verification task). Past studies
of the role of sign iconicity in early sign language
acquisition have produced mixed results (e.g., Markham & Justice, 2004; Meier, 2002; Orlansky & Bonvillian, 1984; Tolar, Lederberg, Gokhale & Tomasello, 2007).
The present effects for sign iconicity can be explained
by an on-line processing advantage. The iconic features
of the word’s sign translation equivalent may facilitate
recognition of the printed word, leading to faster and more
accurate judgments on the picture verification task (see also Thompson, Vinson & Vigliocco, 2009, for an on-line sign–picture study with deaf adults). However, it is
possible that the isomorphism between the picture and the
underlying iconic sign is what is actually producing the
faster responses.
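One hedged way to state this facilitation account is as a head start for "yes" decisions when the underlying sign is strongly iconic. The sketch below does exactly that; the baseline latency and the gain parameter are arbitrary assumptions.

```python
# A toy statement of the facilitation account: on matching trials, the
# iconicity of the underlying sign gives the "yes" decision a head start.
# The baseline latency and the gain below are arbitrary assumptions.

BASE_RT_MS = 900        # hypothetical baseline verification latency
ICONICITY_GAIN_MS = 60  # hypothetical facilitation per unit of iconicity

def predicted_rt(matching: bool, iconicity: float) -> float:
    """Predicted response latency; iconicity is a 0-1 rating.

    Only matching (word == picture) trials benefit, mirroring the
    facilitation found for "yes" responses in the present study.
    """
    return BASE_RT_MS - (ICONICITY_GAIN_MS * iconicity if matching else 0.0)

print(predicted_rt(matching=True, iconicity=0.9))  # 846.0, strongly iconic
print(predicted_rt(matching=True, iconicity=0.1))  # 894.0, weakly iconic
```

Note that the sketch is neutral between the two explanations above: the head start could arise during recognition of the printed word or from the isomorphism between picture and sign.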
Directions for further research
The present findings provide much-needed insight into
the word reading of young bilingual deaf children and
thereby help us better understand the initial stages in
bilingual language architecture. As described by Kroll and Stewart (1994), more proficient users of a second language access the meaning of second-language words directly, whereas less proficient users access it indirectly, via the first language. Perhaps deaf children can be compared to less proficient users of their second language.
Research with deaf adults who are fluent or near fluent
readers could provide insight into the more proficient
stages of bilingual processing of a signed language in
combination with a written language. Some initial insight
into the role of reading fluency has been provided by
the research of Morford et al. (2011) who studied highly
proficient deaf readers and by Grote and Linz (2003)
who studied highly proficient hearing readers who had
deaf parents (so-called CODAs: Children of Deaf Adults).
Future research is required to gain further insight into the
distinction between bilingual signers who have different
levels of proficiency in their (first and) second language.
In this regard, it would be informative to obtain word
and sign frequency norms for deaf children and adults.
Deaf signers' lexical knowledge may differ from that of hearing controls. Related to the issue of proficiency, it would, moreover, be valuable to assess bilingual deaf and hearing children and adults for language dominance. In contrast to most of the studies using similar (alphabetic) scripts, priming effects in past studies using different scripts occurred for only the dominant language and not the other language (i.e., the dominant language primed recognition of words in the non-dominant language but not vice versa; see, for example, Keatley, Spinks & de Gelder, 1994, and Thierry & Wu, 2004). Given the
seemingly large language differences for bilinguals who
know a sign language and a spoken/written language, the
same unidirectional (priming) effects can be expected,
whereby the dominant language can be used during
processing of the non-dominant language, but not vice
versa.
References
Baayen, R., Piepenbrock, R., & van Rijn, H. (1993). The CELEX Lexical Database. Technical report. Philadelphia, PA: Linguistic Data Consortium, University of Pennsylvania.
Bialystok, E., Luk, G., & Kwan, E. (2005). Bilingualism, biliteracy, and learning to read: Interactions among languages and writing systems. Scientific Studies of Reading, 9 (1), 43–61.
Bijeljac-Babic, R., Biardeau, A., & Grainger, J. (1997). Masked orthographic priming in bilingual word recognition. Memory & Cognition, 25, 447–457.
Clark, L. E., & Grosjean, F. (1982). Sign recognition processes in
American Sign Language: The effect of context. Language
and Speech, 25 (4), 325–340.
Damian, M. F., & Bowers, J. S. (2003). Locus of semantic interference in picture–word interference tasks. Psychonomic Bulletin & Review, 10 (1), 111–117.
Dijkstra, T., Grainger, J., & van Heuven, W. J. B. (1999).
Recognition of cognates and interlingual homographs:
The neglected role of phonology. Journal of Memory and
Language, 41, 496–518.
Dijkstra, T., & van Heuven, W. J. B. (2002). The architecture of
the bilingual word recognition system: From identification
to decision. Bilingualism: Language and Cognition, 5 (3),
175–197.
Dijkstra, T., van Heuven, W. J. B., & Grainger, J. (1998). Sim-
ulating cross-language competition with the bilingual in-
teractive activation model. Psychologica Belgica, 38 (3/4),
177–196.
Dijkstra, T., van Jaarsveld, H., & ten Brinke, S.
(1998). Interlingual homograph recognition: Effects of
task demands and language intermixing. Bilingualism:
Language and Cognition, 1, 51–66.
Emmorey, K., & Corina, D. (1990). Lexical recognition in sign
language: Effects of phonetic structure and morphology.
Perceptual and Motor Skills, 71, 1227–1252.
Evans, C. (2004). Literacy development in deaf students: Case
studies in bilingual teaching and learning. American Annals
of the Deaf, 149 (1), 17–27.
Gaskell, M. G., & Marslen-Wilson, W. D. (2002). Representation
and competition in the perception of spoken words.
Cognitive Psychology, 45, 220–266.
Gerard, L. D., & Scarborough, D. L. (1989). Language-specific lexical access of homographs by bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 305–313.
Gollan, T. H., Forster, K. I., & Frost, R. (1997). Translation priming with different scripts: Masked priming with cognates and noncognates in Hebrew–English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23 (5), 1122–1139.
Groot, A. M. B. de, Delmaar, P., & Lupker, S. J. (2000).
The processing of interlexical homographs in translation
recognition and lexical decision: Support for nonselective
access to bilingual memory. Quarterly Journal of
Experimental Psychology, 53A, 397–428.
Grote, K., & Linz, E. (2003). The influence of sign language iconicity on semantic conceptualization. In W. G. Müller & O. Fischer (eds.), From sign to signing: Iconicity in language and literature 3, pp. 23–40. Amsterdam: John Benjamins.
Hanson, V. L., & Feldman, L. B. (1989). Language specificity in
lexical organization: Evidence from deaf signers’ lexical
organization of American Sign Language and English.
Memory & Cognition, 17 (3), 292–301.
Hanson, V. L., & Feldman, L. B. (1991). What makes signs
related? Sign Language Studies, 70, 35–46.
Hell, J. G. van, & Dijkstra, T. (2002). Foreign language knowledge can influence native language performance in exclusively native contexts. Psychonomic Bulletin & Review, 9 (4), 780–789.
Heuven, W. J. B. van, Dijkstra, T., & Grainger, J. (1998).
Orthographic neighborhood effects in bilingual word
recognition. Journal of Memory and Language, 39, 458–
483.
Irausquin, R., & Mommers, C. (2001). Leesladder. Een programma voor kinderen met leesmoeilijkheden [Reading ladder: A program for children with reading difficulties]. Tilburg: Zwijsen.
Kandil, M. A., & Jiang, N. (2004). The role of scripts in
bilingual lexical organization: Evidence from switching
cost. Georgia State Working Papers in Applied Linguistics,
1, 1–14.
Keatley, C. W., Spinks, J. A., & de Gelder, B. (1994).
Asymmetrical cross-language priming effects. Memory &
Cognition, 22 (1), 70–84.
Klatter-Folmer, J., van Hout, R., Kolen, E., & Verhoeven,
L. (2006). Language development in deaf children’s
interactions with deaf and hearing adults: A longitudinal
study. Journal of Deaf Studies and Deaf Education, 11 (2),
238–251.
Klima, E. S., & Bellugi, U. (1979). The signs of language.
Cambridge, MA: Harvard University Press.
Knoors, H. (2007). Educational responses to varying objectives
of deaf parents of deaf children: A Dutch perspective.
Journal of Deaf Studies and Deaf Education, 12 (2), 243–
253.
Kooij, E. van der (2002). Phonological categories in Sign
Language of The Netherlands: The role of phonetic
implementation and iconicity. Utrecht: LOT.
Kroll, J. F., Bobb, S. C., & Wodniecka, Z. (2006). Language
selectivity is the exception, not the rule: Arguments against
a fixed locus of language selection in bilingual speech.
Bilingualism: Language and Cognition, 9, 119–135.
Kroll, J. F., & Stewart, E. (1994). Category interference in
translation and picture naming: Evidence for asymmetric
connections between bilingual memory representations.
Journal of Memory and Language, 33, 149–174.
Lemhöfer, K., Dijkstra, T., & Michel, M. (2004). Three
languages, one ECHO: Cognate effects in trilingual word
recognition. Language and Cognitive Processes, 19 (5),
585–611.
Marian, V., & Spivey, M. (2003). Competing activation in
bilingual language processing: Within- and between-
language competition. Bilingualism: Language and
Cognition, 6 (2), 97–115.
Markham, P. T., & Justice, E. M. (2004). Sign language iconicity
and its influence on the ability to describe the function of
objects. Journal of Communication Disorders, 37 (6), 535–
546.
Meier, R. (2002). Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech. In R. Meier, K. Cormier & D. G. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, pp. 1–26. Cambridge: Cambridge University Press.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F.
(2011). When deaf signers read English: Do written words
activate their sign translations? Cognition, 118, 286–292.
Ormel, E. (2008). Visual word recognition in bilingual
deaf children. Ph.D. dissertation, Radboud University
Nijmegen.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2009).
The role of sign phonology and iconicity during sign
processing: The case of deaf children. Journal of Deaf
Studies and Deaf Education, 14, 436–448.
Orlansky, M. D., & Bonvillian, J. D. (1984). The role of iconicity
in early sign language acquisition. Journal of Speech and
Hearing Disorders, 49, 287–292.
Padden, C., & Ramsey, C. (1998). Reading ability in signing deaf
children. Topics in Language Disorders, 18 (4), 30–46.
Pietrandrea, P. (2002). Iconicity and arbitrariness in Italian Sign
Language. Sign Language Studies, 2 (3), 296–321.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime. Pittsburgh, PA: Psychology Software Tools, Learning Research and Development Center, University of Pittsburgh.
Schrooten, W., & Vermeer, A. (1994). Woorden in het basisonderwijs. 15.000 woorden aangeboden aan leerlingen [Words in primary education: 15,000 words offered to pupils] (Studies in Meertaligheid 6). Tilburg: Tilburg University Press.
Stokoe, W. C. (1980). Sign language structure. Annual Review
of Anthropology, 9, 365–390.
Thierry, G., & Wu, Y. J. (2004). Electrophysiological evidence for language interference in late bilinguals. NeuroReport, 15 (10), 1555–1558.
Thompson, R. L., Vinson, D. P., & Vigliocco, G. (2009). The link between form and meaning in American Sign Language: Lexical processing effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35 (2), 550–557.
Tolar, T. D., Lederberg, A. R., Gokhale, S., & Tomasello, M.
(2007). The development of the ability to recognize the
meaning of iconic signs. Journal of Deaf Studies and Deaf
Education, 13 (2), 225–240.
Treiman, R., & Hirsh-Pasek, K. (1983). Silent reading:
Insights from second-generation deaf readers. Cognitive
Psychology, 15, 39–65.
Wang, M., Koda, K., & Perfetti, C. A. (2003). Alphabetic and
nonalphabetic L1 effects in English word identification: A
comparison of Korean and Chinese English L2 learners.
Cognition, 87, 129–149.
Wijnendaele, I. van, & Brysbaert, M. (2002). Visual word recognition in bilinguals: Phonological priming from the second to the first language. Journal of Experimental Psychology: Human Perception and Performance, 28 (3), 619–627.
Wu, Y. J., & Thierry, G. (2010). Chinese–English bilinguals
reading English hear Chinese. The Journal of Neuroscience,
30 (22), 7646–7651.
Zwitserlood, P. (1996). Form priming. Language and Cognitive
Processes, 11 (6), 589–596.