Classification: BIOLOGICAL SCIENCES
Title: Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture
Short Title: Sign language and gesture in signers and non-signers
Aaron J. Newman (a), Ted Supalla (b), Nina Fernandez (c), Elissa L. Newport (b), Daphne Bavelier (c,d)
a. Departments of Psychology & Neuroscience, Psychiatry, Surgery, and Pediatrics (Division of Neurology), Dalhousie University, Halifax, NS, B3H 4R2 Canada. ORCID ID orcid.org/0000-0001-5290-8342
b. Department of Neurology, Georgetown University, Washington, DC, 20007 USA
c. Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14620, USA
d. Department of Psychology, University of Geneva, Geneva, Switzerland
CORRESPONDING AUTHOR:
Aaron Newman
Department of Psychology & Neuroscience
Box 15000
Dalhousie University
Halifax, NS B3H 4R2
Canada
Aaron.Newman@dal.ca
+1 (902) 488-1973
KEYWORDS: brain | American Sign Language | fMRI | deafness | neuroplasticity
Abstract
Sign languages used by Deaf communities around the world possess the same structural and
organizational properties as spoken languages: in particular, they are richly expressive and also
tightly grammatically constrained. They therefore offer the opportunity to investigate the extent
to which the neural organization for language is modality-independent, as well as to identify
ways in which modality influences this organization. The fact that sign languages share the
visual-manual modality with a nonlinguistic symbolic communicative system — gesture —
further allows us to investigate where the boundaries lie between language and symbolic
communication more generally. In the present study we had three goals: to investigate the neural
processing of linguistic structure in American Sign Language (using verbs of motion classifier
constructions, which may lie at the boundary between language and gesture); to determine
whether we could dissociate the brain systems involved in deriving meaning from symbolic
communication (including both language and gesture) from those specifically engaged by
linguistically-structured content (sign language); and to assess whether sign language experience
influences the neural systems used for understanding nonlinguistic gesture. The results
demonstrated that even sign language constructions that appear on the surface to be similar to
gesture are processed within the left-lateralized frontal-temporal network used for spoken
languages — supporting claims that these constructions are linguistically structured. Moreover,
while non-signers engage regions involved in human action perception to process
communicative, symbolic gestures, signers instead engage parts of the language processing
network — demonstrating an influence of experience on the perception of non-linguistic stimuli.
Significance Statement
Though sign languages and nonlinguistic gesture utilize the same modalities, only sign languages
have established vocabularies and follow grammatical principles. This is the first study to ask
how the brain systems engaged by sign language differ from those used for nonlinguistic gesture
matched in content, using appropriate visual controls. Signers engaged classic left-lateralized
language centers when viewing both sign language and gesture; non-signers showed activation
only in areas attuned to human movement, indicating that sign language experience influences
gesture perception. In signers, sign language activated left-hemisphere language areas more
strongly than gestural sequences. Thus sign language constructions – even those similar to
gesture – engage language-related brain systems and are not processed in the same ways that
non-signers interpret gesture.
Introduction
Sign languages such as American Sign Language (ASL) are natural human languages with
linguistic structure. Signed and spoken languages also largely share the same neural substrates,
including left hemisphere dominance, as revealed by studies of brain injury and by neuroimaging. At the same
time, sign languages provide a unique opportunity to explore the boundaries of what, exactly,
“language” is. Speech-accompanying gesture is universal (1), yet such gestures are not language
— they do not have a set of structural components or combinatorial rules and cannot be used on
their own to reliably convey information. Thus gesture and sign language are qualitatively
different, yet both convey symbolic meaning via the hands. Comparing them can help identify the
boundaries between language and nonlinguistic symbolic communication.
In spite of this apparently clear distinction between sign language and gesture, some
researchers have emphasized their similarities. One focus of contention has been classifier
constructions (also called verbs of motion). In ASL a verb of motion (e.g.
moving in a circle) will include a root expressing the motion event, morphemes marking the
manner and direction of motion (e.g., forward or backward), and also a classifier that specifies
the semantic category (e.g. vehicle) or size and shape (e.g., round, flat) of the object that is
moving (2). While verbs of motion with classifiers occur in some spoken languages, in ASL
these constructions are often iconic — the forms of the morphemes are frequently similar to the
visual-spatial meanings they express — and they have therefore become a focus of discussion
about the degree to which they (and other parts of ASL) are linguistic or gestural in character.
Some researchers have argued that the features of motion and spatial relationships marked in
ASL verbs of motion are in fact not linguistic morphemes, but are based on the analog imagery
system that underlies nonlinguistic visual-spatial processing (3-5). In contrast, Supalla (2, 6, 7)
and others have argued that these ASL constructions are linguistic in nature, differing from
gestures in that they have segmental structure, are produced and perceived in a discrete
categorical (rather than analog) manner, and are governed by morphological and syntactic
regularities found in other languages of the world.
These similarities and contrasts between sign language and gesture allow us to ask some
important questions about the neural systems for language and gesture. The goal of this study was
to examine the neural systems underlying the processing of ASL verbs of motion as compared
with nonlinguistic gesture. This allowed us to ask whether, from the point of view of neural
systems, there is linguistic structure in ASL verbs of motion. It also allowed us to distinguish
networks involved in symbolic communication from those involved specifically in language, and
to determine whether sign language experience alters systems for gesture comprehension.
It is already established that a very similar, left-lateralized neural network is involved in the
processing of many aspects of lexical and syntactic information in both spoken and signed
languages. This includes the inferior frontal gyrus (IFG, classically called Broca’s area), superior
temporal sulcus (STS) and adjacent superior and middle temporal gyri, and the inferior parietal
lobe (IPL, classically called Wernicke’s area) including the angular (AG) and supramarginal gyri
(SMG; 4, 8-18). Likewise, narrative and discourse-level aspects of signed language depend
largely on right STS regions, as they do for spoken language (17, 19). While the neural networks
engaged by signed and spoken language are overall quite similar, some studies have suggested
that the linguistic use of space in sign language engages additional brain regions. During both
comprehension and production of spatial relationships in sign language, the superior parietal
lobule (SPL) is activated bilaterally (4, 5, 12). In contrast, parallel studies in spoken languages
have found no (12) or only left (20) parietal activation when people describe spatial relationships.
These differences between signed and spoken language led Emmorey (5) to conclude that, “the
location and movements within [classifier] constructions are not categorical morphemes that are
selected and retrieved via left hemisphere language regions.” (p. 531). However, in these studies
signers had to move their hands while speakers did not, and it is therefore unclear whether
parietal regions are involved in processing linguistic structure in sign language as opposed to
simply using the hands to symbolically represent spatial structure and relationships. Other studies
have touched on the question of symbolic communication, comparing the comprehension of sign
language with pantomime and with meaningless, sign-like gestures (11, 21-23). In signers,
activation for both sign language and pantomime gestures was reported in classical language-
related areas including the IFG, the posterior region of the STS (STSp), and the SMG, though
typically these activations are stronger for sign language than gesture. Similar patterns of
activation — though often more bilateral — have been observed in non-signers for meaningful as
well as for meaningless gesture perception (24-27).
Thus on the one hand, the classical left-lateralized “language processing” network appears to
be engaged by both signers and non-signers for interpreting both sign language and non-linguistic
gesture. On the other hand, sign language experience appears to drive a specialization of these
regions for signs over non-signs in signers, whereas similar levels of activation are seen for
gestures and spoken descriptions of human actions in non-signers (27). In the right hemisphere,
when viewing gestures, both signers and non-signers show activation of the homologues of the
left hemisphere language regions noted above, including the STS (the STSp associated with
biological motion, as well as more anterior areas), inferior and superior parietal regions, and the
IFG.
One important caveat in considering the literature is that most studies have compared task-
related activation to a simple baseline condition that did not control for the types of movements
made or for other low-level stimulus features. Thus apparent differences between sign language
and gesture may in fact be attributable to their distinct physical/perceptual qualities, while subtle
but important activation differences between sign language and non-linguistic communication
may have been “washed out” by the overall similarity of brain activation when people attempt to
find meaning in hand, body, and face movement relative to a static stimulus.
Our goal in the present study was to further investigate the neural processing of ASL verbs of
motion and to determine whether we could dissociate the brain systems involved in deriving
meaning from symbolic communication (including both language and gesture) from those
specifically engaged by linguistically-structured content (sign language), while controlling for
the sensory and spatial processing demands of the stimuli. We also asked whether sign language
experience influences the brain systems used to understand non-linguistic gesture. To address
these questions, we measured brain activation using fMRI while two groups of people, deaf
native ASL signers and non-signing, hearing native English speakers, viewed two types of
videos: ASL verbs of motion constructions describing the paths and manners of movement of
toys (e.g., a toy cow falling off a toy truck, as the truck moves forward (2); see Figure S5 for
examples), and gestured descriptions of the same events. Importantly, we also included a
“backward-layered” control condition in which the ASL and gesture videos were played
backward, with three different videos superimposed (as in (17, 18)).
We predicted that symbolic communication (i.e., both gesture and ASL, in signers and non-
signers) would activate areas typically seen in both sign language and meaningful gesture
processing, including the left IFG and the SMG and STSp bilaterally. We further predicted that
the left IFG would show stronger activation for linguistically structured content, i.e., in the
contrast of ASL versus gesture in signers but not in non-signers. We also predicted that other
areas typically associated with syntactic processing and lexical retrieval, including anterior and
middle left STS, would be more strongly activated by ASL in signers. Such a finding would
provide evidence in favor of the argument that verbs of motion constructions are governed by
linguistic morphology, and against the view that they are gestural rather than linguistic constructions.
We further predicted that visual-manual language experience would lead to greater activation of
the left IFG in signers than in non-signers when viewing gesture, though less than for ASL.
Results
Behavioral Performance
Overall, Deaf signers showed greater accuracy than hearing non-signers in judging which
picture matched the preceding video (Figure S1). Deaf signers were more accurate than
non-signers for both ASL (signers: 1.99% errors; non-signers: 17.85% errors) and for gestures
(signers: 2.37% errors; non-signers: 5.68% errors). A generalized linear mixed model with a
binomial error distribution, including the factors Group (signers, non-signers) and Stimulus Type (ASL,
gesture), identified a Group × Stimulus Type interaction, z = 3.53, p = .0004. Post hoc tests
showed that signers were significantly more accurate than hearing non-signers for both ASL, z =
7.61, p < .0001, and for gestures, z = 2.9, p = .004. Furthermore, deaf signers showed similar
levels of accuracy on both ASL and gesture stimuli, while hearing non-signers were significantly
more accurate in judging gestural than ASL stimuli, z = 6.91, p < .0001.
Examination of the reaction times (RTs) shown in Figure S1 suggested an interaction
between group and stimulus type for this measure as well, with signers showing faster responses
to ASL (1543.4 ms, SD = 264.1) than to gestures (1815.4 ms, SD = 314.3), but non-signers
showing faster responses to gestures (1701.3 ms, SD = 325.3) than to ASL (1875.0 ms, SD =
334.4). This observation was borne out by the results of a 2 (Group) × 2 (Stimulus Type) linear
mixed effects analysis, which revealed a significant Group × Stimulus Type interaction,
F(1,2795) = 148.08, p < .0001. RTs were significantly faster for signers than non-signers when
judging ASL stimuli, t = 3.36, p = .0006; however, the two groups did not differ significantly
when judging gesture stimuli, t = 1.15, p = .2493. Signers were also significantly faster at making
judgments for ASL than gesture stimuli, t = 10.54, p < .0001, while non-signers showed the
reverse pattern, responding more quickly to gesture stimuli, t = 6.63, p < .0001.
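As an illustration of the mixed-model approach described above, the following sketch (not the analysis code actually used in the study) fits a linear mixed model to trial-level reaction times in Python with statsmodels. The data file and column names are hypothetical, and the accuracy analysis would additionally require a binomial (logistic) mixed model, for which a dedicated tool such as lme4 in R is typically used.

```python
# Illustrative sketch only (not the authors' analysis code): a linear mixed
# model of trial-level reaction times with a Group x Stimulus Type interaction
# and a random intercept per participant, fit with statsmodels. Column names
# ("rt", "group", "stim_type", "subject") and the file name are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trial_level_data.csv")  # hypothetical file: one row per trial

model = smf.mixedlm(
    "rt ~ group * stim_type",      # fixed effects: Group, Stimulus Type, interaction
    data=trials,
    groups=trials["subject"],      # random intercept for each participant
)
result = model.fit()
print(result.summary())            # inspect the group:stim_type interaction term
```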
fMRI Data
While we developed backward-layered control stimuli to closely match and control for
the low-level properties of our stimuli, we first examined the contrast with fixation in order to
compare our data with previous studies (most of which used this contrast) and to gain perspective
on the entire network of brain regions that responded to ASL and gesture. This is shown in Figure
S2 with details in Table S1. Relative to fixation, ASL and gesture activated a common, bilateral
network of brain regions in both deaf signers and hearing non-signers, including occipital,
temporal, inferior parietal, and motor regions. Deaf signers showed unique activation in the IFG
and the anterior and middle STS bilaterally.
Contrasts with Backward-Layered Control Stimuli.
The better-matched contrasts of ASL and gesture with backward-layered control stimuli
identified a subset of the brain regions activated in the contrast with fixation, as seen in Figure 1.
Details of activation foci are provided in Table S2. Very little of this activation was shared
between signers and non-signers. Notably, all of the activation in the occipital and superior
parietal lobes noted in the contrast with fixation baseline was eliminated when we used a control
condition that was matched on higher-level visual features. On the lateral surface of the cortex,
activations were restricted to areas in and around the IFG and along the STS, extending into the
IPL. Medially, activation was found in the ventromedial frontal cortex and fusiform gyrus for
both groups, for ASL only. Signers uniquely showed activation in the left IFG and bilateral
anterior/middle STS, for both ASL and gesture. Non-signers uniquely showed activation
bilaterally in the STSp for both ASL and gesture (though with a small amount of overlap with
signers in the left STSp), as well as in the left inferior frontal sulcus and right IFG for ASL only.
The only area showing any extensive overlap between signers and non-signers was the right STS.
Comparison of ASL and gesture within each group
No areas were more strongly activated by gesture than by ASL in either group. In signers,
the ASL - gesture contrast yielded an exclusively left-lateralized network, including the IFG and
middle STS, as well as the fusiform gyrus (Figure 2, left panel, and Table S3). By contrast, the
only areas that showed significantly stronger activation for ASL than gesture in hearing non-signers
were a small part of the left STSp (distinct from the areas activated in signers) and the posterior
cingulate gyrus. Although non-signers showed significant activation for ASL but not gesture in or
around the IFG bilaterally, these activations were not significantly stronger for ASL than for
gesture, suggesting sub-threshold activation for gesture.
Between-group comparisons
Signers showed significantly stronger activation for ASL than non-signers in the
anterior/middle STS bilaterally and in the left IFG (Figure 2, right panel, and Table S4). The area
of stronger activation in the left IFG did not survive multiple comparison correction. However,
because differences between groups were predicted a priori in this region, we interrogated it
using a post hoc region of interest analysis. Left IFG was defined as Brodmann’s areas 44 and 45
(28) and within this we thresholded activations at z > 2.3, uncorrected for multiple comparisons.
As seen in Figure 2, the area that showed stronger activation for signers than non-signers was
within the left IFG cluster that, in signers, showed stronger activation for ASL than gesture.
Hearing non-signers showed greater activation than signers only for gesture, and this was
restricted to the STSp/SMG of the right hemisphere.
Discussion
The central question of this study was whether distinct brain systems are engaged during the
perception of sign language, compared to gestures that also use the visual-manual modality and
are symbolic communication but lack linguistic structure. Some previous work has suggested that
aspects of sign language — such as verbs of motion — are non-linguistic and are processed like
gesture, thus relying on brain areas involved in processing biological motion and other spatial
information. This position would predict shared brain systems for understanding verbs of motion
constructions and non-linguistic gestures expressing similar content. In contrast, we hypothesized
that verbs of motion constructions are linguistically governed, and as such would engage
language-specific brain systems in signers distinct from those used for processing gesture. We
also investigated whether knowing sign language influenced the neural systems recruited for non-
linguistic gesture, by comparing responses to gesture in signers and non-signers. Finally, we
compared signers and non-signers to determine whether understanding symbolic communication
differs when it employs a linguistic code as opposed to when it is created ad hoc. Because there is
little symbolic but nonlinguistic communication in the oral-aural channel, such a comparison is
best done using sign language and gesture. While many neuroimaging studies have contrasted
language with control stimuli, to our knowledge no studies have compared linguistic and non-
linguistic stimuli while attempting to match the semantic and symbolic content. Here, ASL and
gesture were each used to describe the same action events.
In the present study, when compared to the low-level fixation baseline, ASL and gesture
activated an extensive, bilateral network of brain regions in both deaf signers and hearing non-
signers. This network is consistent with previous studies of both sign language and gesture that
used similar contrasts with a low-level baseline (10, 22-27). Critically however, when compared
to the backward-layered conditions which controlled for stimulus features (such as biological
motion and face perception) and for motor responses, a much more restricted set of brain areas
was activated, with considerably less overlap across groups and conditions. Indeed, the only area
commonly activated across sign language and gesture in both signers and non-signers was in the
middle/anterior STS region of the right hemisphere. In general, when there were differences
between stimulus types, ASL elicited stronger brain activation than gesture in both groups.
However, the areas that responded more strongly to ASL were almost entirely different between
groups, again supporting the influence of linguistic experience in driving the brain responses to
symbolic communication.
Linguistic Structure
Our results show that ASL verbs of motion produce a distinct set of activations in native signers,
based in the classic left hemisphere language areas: the IFG and anterior/middle STS. This
pattern of activation was significantly different from that found in signers observing gesture
sequences expressing approximately the same semantic content, and was wholly different from
the bilateral pattern of activation in the STSp found in hearing non-signers observing either sign
language or gestural stimuli. These results thus suggest that ASL verbs of motion are not
processed by native signers as nonlinguistic imagery — since non-signers showed activation
primarily in areas associated with general biological motion processing — but rather are
processed in terms of their linguistic structure (i.e. as complex morphology), as Supalla (2, 6, 7)
has argued. This finding is also consistent with evidence that both grammatical judgment ability
and left IFG activation correlated with age of acquisition in congenitally deaf people who learned
ASL as a first language (14). Evidently, despite their apparent iconicity, ASL verbs of motion
are processed in terms of their discrete combinatorial structure, like complex words in other
languages, and depend for this type of processing on the left hemisphere network that underlies
spoken languages as well as other aspects of signed languages.
The other areas more strongly activated by ASL — suggesting linguistic specialization —
were in the left temporal lobe. These included the middle STS — an area associated with lexical
(lemma) selection and retrieval in studies of spoken languages (29) — and the posterior STS. For
signers this left-lateralized activation was posterior to the STSp region activated bilaterally in
non-signers for both ASL and gesture and typically associated with biological motion processing.
The area activated in signers is in line with the characterization of this region as “Wernicke’s
area” and its association with semantic and phonological processing.
Symbolic Communication
Symbolic communication was a common feature of both the ASL and gesture stimuli. Previous
studies had suggested that both gesture and sign language engage a broad, common network of
brain regions including classical left hemisphere language areas. Some previous studies used
pantomimed actions (10, 21, 27), which are more literal and less abstractly symbolic than some
of the gestures used in the present study; other studies used gestures with established meanings
(emblems; 22, 27, 30), or meaningless gestures (21, 23, 30). Thus the stimuli differed from those
in the present study in terms of their meaningfulness and degree of abstract symbolism. Our data
revealed that when sign language and gesture stimuli are closely matched for content, and once
perceptual contributions are properly accounted for, a much more restricted set of brain regions
is commonly engaged across stimulus types and groups, limited to middle and anterior
regions of the right STS. We have consistently observed activation in this anterior/middle right
STS area in our previous studies of sign language processing, in both hearing native signers and
late learners (16) and in deaf native signers, both for ASL sentences with complex morphology
(including spatial morphology; 18) and for narrative and prosodic markers (17). The present
findings extend this to non-linguistic stimuli in non-signers, suggesting that the right anterior STS
is involved in the comprehension of symbolic manual communication regardless of linguistic
structure or sign language experience.
Sign Language Experience
Our results indicate that lifelong use of a visual-manual language alters the neural response to
non-linguistic manual gesture. Left frontal and temporal language processing regions showed
activation in response to gesture only in signers, once low-level stimulus features were accounted
for. While these same left hemisphere regions were more strongly activated by ASL, their
activation by gesture exclusively in signers suggests that sign language experience drives these
areas to attempt to analyze visual-manual symbolic communication even when it lacks linguistic
structure. An extensive portion of the right anterior/middle STS was also activated exclusively in
signers. This region showed no specialization for linguistically-structured material, although in
previous studies we found sensitivity of this area to both morphological and narrative/prosodic
structure in ASL (17, 18). It thus seems that knowledge of a visual-manual sign language can
lead to this region’s becoming more sensitive to manual movements that have symbolic content,
whether linguistic or not, and that activation in this region increases with the amount of
information that needs to be integrated to derive meaning.
It is interesting to note that our findings contrast with some previous studies that compared
ASL and gesture, and found left IFG activation only for ASL (10, 21). This finding was
interpreted as evidence for a “gating mechanism” whereby signs were distinguished from gesture
at an early stage of processing in native signers, with only signs passed forward to the IFG.
However, those previous studies did not use perceptually-matched control conditions (allowing
for the possibility of stimulus feature differences), and used pantomimed actions. In contrast, we
used sequences of gestures that involved the articulators to symbolically represent objects and
their paths. In this sense our stimuli more closely match the abstract, symbolic nature of language
than does pantomime. Thus there does not appear to be a strict “gate” whereby the presence or
absence of phonological or syntactic structure determines whether signers engage the left IFG;
rather signers may engage language-related processing strategies when meaning needs to be
derived from abstract symbolic, manual representations.
Conclusions
This study was designed to assess the effects of linguistic structure, symbolic communication,
and linguistic experience on brain activation. In particular, we sought to compare how sign
languages and non-linguistic gesture are treated by the brain, in signers and non-signers. This
comparison is of special interest in light of recent claims that some highly spatial aspects of sign
languages (e.g., verbs of motion) are organized and processed like gesture rather than like
language. Our results indicate that ASL verbs of motion are not processed like spatial imagery or
other nonlinguistic materials, but rather are organized and mentally processed like grammatically
structured language, in specialized brain areas including the inferior frontal gyrus and superior
temporal sulcus of the left hemisphere.
While in this study both ASL and gesture conveyed information to both signers and non-
signers, we identified only restricted areas of the right anterior/middle STS that responded
similarly to symbolic communication across stimulus types and groups. Overall, our results
suggest that sign language experience modifies the neural networks that are engaged when people
try to make sense of non-linguistic, symbolic communication. Non-signers engaged a bilateral
network typically engaged in the perception of biological motion. For native signers on the other
hand, rather than sign language being processed like gesture, gesture is processed more like
language. Signers recruited primarily language processing areas, suggesting that lifelong sign
language experience leads signers to impose a language-like analysis even on gestures that are
immediately recognized as nonlinguistic.
Finally, our findings support the analysis of verbs of motion classifier constructions as being
linguistically structured (2, 6, 7), insofar as they specifically engage classical left hemisphere
language processing regions in native signers but not areas subserving nonlinguistic spatial
perception. We suggest that this is because, although the signs for describing spatial information
may have had their origins in gesture, over generations of use they have become regularized and
abstracted into segmental, grammatically-controlled linguistic units—a phenomenon that has
been repeatedly described in the development and evolution of sign languages (31-33) and the
structure and historical emergence of full-fledged adult sign languages (2, 6, 7, 34). Human
communication systems seem always to move toward becoming rapid, combinatorial, and highly
grammaticized systems (31); the present findings suggest that part of this change may involve
recruiting the left hemisphere network as the substrate of rapid, rule-governed computation.
Materials and Methods
Please see the Supplementary Information for detailed Materials and Methods.
Participants.
Nineteen congenitally deaf native learners of ASL (signers) and 19 normally-hearing, native English speakers (non-signers) participated. All provided informed consent.
ASL and gesture stimuli.
Neural activation to sign language as compared with gesture was
observed by asking participants to watch videotaped ASL and gesture sequences expressing
approximately the same content. Both ASL and gesture video sequences were elicited by the
same set of stimuli, depicting events of motion. These stimuli included a set of short videos
of toys moving along various paths, relative to other objects, and a set of line drawings
depicting people, animals, and/or objects engaged in simple activities. Examples of these are
shown in Figure S5. After viewing each such video, the signer or gesturer was filmed
producing, respectively, ASL sentences or gesture sequences describing the video. These
short clips were shown to participants in the scanner. The
ASL constructions were produced
by a native ASL signer; gestured descriptions were produced by three native English speakers
who did not know sign language. The native signer and the gesturers were instructed to describe
each video or picture without speaking, immediately after viewing it. The elicited ASL and
gestures were video recorded, edited, and saved as digital files for use in the fMRI experiment.
Backward-layered control stimuli were created by making the signed and gestured movies partly
transparent, reversing them in time, then overlaying three such movies (all of one type, i.e., ASL
or gesture) in software and saving these as new videos.
fMRI Procedure. Each participant completed 4 fMRI scans (runs) of 40 trials each. Each trial
consisted of a task cue (instructions to try to determine the meaning of the video, or to watch for
symmetry between the two hands), followed by a video (ASL, gesture, or control), followed by a
response prompt. For ASL and gesture movies, participants saw two pictures and had to indicate
via button press which best matched the preceding video. For control movies, participants made
button presses indicating whether, at any point during the movie, three hands simultaneously had the
same handshape. Two runs involved ASL stimuli only, while the other two involved gesture
stimuli only. Within each run, half the trials were the ASL or gesture videos and the other half
were their backward-layered control videos. The ASL movies possessed enough iconicity, and
the “foil” pictures were designed to be different enough from the targets, that task performance
was reasonably high even for non-signers viewing ASL. Data were collected using an EPI pulse
sequence on a 3T MRI system (TE = 30 ms; TR = 2 sec; 90 deg flip angle, 4 mm isotropic
resolution). fMRI data were analyzed using FSL FEAT software according to recommendations
of the software developers (www.fmrib.ox.ac.uk/fsl).
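For readers who wish to see the analysis logic in code form, the following is a rough, illustrative analogue of the FEAT analysis in Python using nilearn; the study itself was analyzed with FSL FEAT, and the file names, event timings, and cluster-extent cutoff below are placeholders. Note that nilearn's cluster_threshold is a minimum cluster size in voxels rather than FSL's GRF-based cluster-level p < .05.

```python
# Rough nilearn analogue of the FEAT analysis described above (illustrative
# only; the study itself used FSL FEAT). File names, event timings, and the
# cluster-extent cutoff are placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn.glm import threshold_stats_img

events = pd.DataFrame({
    "onset":      [10.0, 24.0, 40.0],          # seconds from run start (placeholder)
    "duration":   [4.0, 6.0, 7.2],
    "trial_type": ["asl", "control", "asl"],   # condition labels (placeholder)
})

glm = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=5.0, high_pass=0.01)
glm = glm.fit("run1_preprocessed_bold.nii.gz", events=events)

# Contrast of interest: ASL versus its backward-layered control condition
z_map = glm.compute_contrast("asl - control", output_type="z_score")

# Height threshold z > 2.3 with a voxel-extent cutoff standing in for
# FSL's cluster size-corrected p < .05
thresholded_map, _ = threshold_stats_img(
    z_map, threshold=2.3, height_control=None, cluster_threshold=50
)
```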
Acknowledgements
This study was supported by a grant from the James S. McDonnell Foundation to DB,
EN,
and TS, and by NIH grants DC00167 (EN and TS) and DC04418 (DB). AJN was supported
by NSERC (Canada) and the Canada Research Chairs program. We are grateful to
Dara
Baril, Patricia Clark, Jason Droll, Matt Hall, Elizabeth Hirshorn, Michael Lawrence,
Don
Metlay, and Jennifer Vannest for their help on this project, and to Barbara Landau and
Rachel Mayberry for their thoughtful comments.
References
1. McNeill D (1985) So you think gestures are nonverbal? Psychological Review 92:350-371.
2. Supalla T (1982) Structure and acquisition of verbs of motion and location in American Sign
Language. Ph.D. thesis (University of California, San Diego).
3. Liddell SK (2003) Grammar, Gesture, and Meaning in American Sign Language (Cambridge
University Press).
4. Emmorey K, et al. (2002) Neural Systems Underlying Spatial Language in American Sign
Language. NeuroImage 17:812-824.
5. Emmorey K, McCullough S, Mehta S, Ponto LLB, and Grabowski TJ (2013) The biology of
linguistic expression impacts neural correlates for spatial language. Journal of Cognitive
Neuroscience 25:517-33.
6. Supalla T (1990) Serial verbs of motion in ASL. Theoretical Issues in Sign Language
Research, eds Fischer SD, Siple P (The University of Chicago Press, Chicago), pp 127–152.
7. Supalla T (2003) Revisiting visual analogy in ASL classifier predicates. Perspectives on
Classifier Constructions in Sign Language, ed Emmorey K (Lawrence Erlbaum Associates,
Inc, Mahwah, NJ).
8. Bavelier D, et al. (2008) Encoding, rehearsal, and recall in signers and speakers: Shared
network but differential engagement. Cerebral Cortex 18:2263-74.
9. Campbell R, MacSweeney M, and Waters D (2008) Sign language and the brain: A review.
Journal of Deaf Studies and Deaf Education 13:3-20.
10. Corina D, et al. (2007) Neural correlates of human action observation in hearing and deaf
subjects. Brain Research 1152:111-29.
11. Hickok G, Bellugi U, and Klima ES (1996) The neurobiology of sign language and its
implications for the neural basis of language. Nature 381:699-702.
12. MacSweeney M, et al. (2002) Neural correlates of British Sign Language comprehension:
spatial processing demands of topographic language. Journal of Cognitive Neuroscience
14:1064-1075.
13. MacSweeney M, Capek CM, Campbell R, Woll B (2008) The signing brain: The
neurobiology of sign language. Trends in Cognitive Sciences 12:432–440.
14. Mayberry RI, Chen JK, Witcher P, Klein D (2011) Age of acquisition effects on the
functional organization of language in the adult brain. Brain and Language 119:16–29.
15. Neville HJ, Bavelier D, and Corina D (1998) Cerebral organization for language in deaf and
hearing subjects: Biological constraints and effects of experience. Proceedings of the
National Academy of Sciences of the U.S.A 95:922-929.
16. Newman AJ, Bavelier D, Corina D, Jezzard P, and Neville HJ (2002) A critical period for
right hemisphere recruitment in American Sign Language. Nature Neuroscience 5:76-80.
17. Newman AJ, Supalla T, Hauser PC, Newport EL, and Bavelier D (2010) Prosodic and
narrative processing in American Sign Language: An fMRI study. NeuroImage 52:669-76.
18. Newman AJ, Supalla T, Hauser P, Newport EL, and Bavelier D (2010) Dissociating neural
subsystems for grammar by contrasting word order and inflection. Proceedings of the
National Academy of Sciences 107:7539.
19. Hickok G, et al. (1999) Discourse deficits following right hemisphere damage in deaf signers.
Brain and Language 66:233–248.
20. Damasio H, Grabowski TJ, Tranel D, Ponto LL, Hichwa RD, and Damasio AR (2001) Neural
correlates of naming actions and of naming spatial relations. Neuroimage 13:1053-1064.
21. Emmorey K, Xu J, Gannon P, Goldin-Meadow S, Braun A (2010) CNS activation and
regional connectivity during pantomime observation: No engagement of the mirror neuron
system for deaf signers. NeuroImage 49:994–1005.
22. Husain FT, Patkin DJ, Thai-Van H, Braun AR, Horwitz B (2009) Distinguishing the
processing of gestures from signs in deaf individuals: An fMRI study. Brain Research
1276:140–150.
23. MacSweeney M, et al. (2004) Dissociating linguistic and nonlinguistic gestural
communication in the brain. NeuroImage 22:1605–1618.
24. Decety J, et al. (1997) Brain activity during observation of actions: Influence of action
content and subject's strategy. Brain 120:1763-1777.
25. Lotze M, et al. (2006) Differential cerebral activation during observation of expressive
gestures and motor acts. Neuropsychologia 44:1787-95.
26. Villarreal M, et al. (2008) The neural substrate of gesture recognition. Neuropsychologia
46:2371–2382.
27. Xu J, Gannon PJ, Emmorey K, Smith JF, Braun AR (2009) Symbolic gestures and spoken
language are processed by a common neural system. Proceedings of the National Academy
of Sciences 106:20664–20669.
28. Eickhoff SB, et al. (2007) Assignment of functional activations to probabilistic
cytoarchitectonic areas revisited. NeuroImage 36:511–521.
29. Indefrey P (2011) The spatial and temporal signatures of word production components: A
critical update. Frontiers in Psychology 2:255.
30. Husain FT, Patkin DJ, Kim J, Braun AR, Horwitz B (2012) Dissociating neural correlates of
meaningful emblems from meaningless gestures in deaf signers and hearing non-signers.
Brain Research 1478:24–35.
31. Newport EL (1981) Constraints on structure: Evidence from American Sign Language and
language learning. Minnesota Symposium on Child Psychology, ed Collins WA (Lawrence
Erlbaum Associates, Inc., Hillsdale, NJ), Vol. 14.
32. Newport EL (1999) Reduced input in the acquisition of signed languages: Contributions to
the study of creolization. Language Creation and Language Change: Creolization,
Diachrony, and Development, ed DeGraff, M (MIT Press, Cambridge, MA).
33. Senghas A, Kita S, and Ozyürek A (2004) Children creating core properties of language:
Evidence from an emerging sign language in Nicaragua. Science 305:1779-82.
34. Supalla T, and Clark P (2015) Sign Language Archeology: Understanding the History and
Evolution of American Sign Language (Gallaudet University Press, Washington, D.C.).
35. Worsley KJ (2001) Statistical analysis of activation images. Functional Magnetic Resonance
Imaging: An Introduction to the Methods. ed Jezzard P, Matthews PM, Smith SM (Oxford
University Press, New York), pp 251-270.
Figure Legends
Figure 1: Statistical maps for each stimulus type relative to the backward-layered control
stimuli, in each subject group. Statistical maps were masked with the maps shown in Figure S2,
so that all contrasts represent brain areas activated relative to fixation baseline. Thresholded at z
>2.3, with a cluster size-corrected p < .05. In the coronal and sagittal views, the right side of the
brain is shown on the right side of each image.
Figure 2: Left panel: Between-condition differences for each subject group, for the contrasts
with backward-layered control stimuli. No areas showed greater activation for gesture than ASL
in either group. Thresholded at z >2.3, with a cluster size-corrected p <.05. Right panel: Between-
group differences for the contrast of each stimulus type relative to the backward-layered control
stimuli. No areas were found that showed stronger activation in signers than non-signers for
gesture, nor in non-signers than signers for ASL. Thresholded at z >2.3, with a cluster size-
corrected p <.05. Significant between-group differences in the left IFG were obtained in a
planned ROI analysis at z >2.3, uncorrected for cluster size.
 
 
Supplementary Information for: Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture
Aaron J. Newman (a), Ted Supalla (b), Nina Fernandez (c), Elissa L. Newport (b), Daphne Bavelier (c,d)
a. Departments of Psychology & Neuroscience, Psychiatry, Surgery, and Pediatrics (Division
of Neurology), Dalhousie University, Halifax, NS, B3H 4R2 Canada. ORCID ID
orcid.org/0000-0001-5290-8342
b. Department of Neurology, Georgetown University, Washington, DC, 20007 USA
c. Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY,
14620, USA
d. Department of Psychology, University of Geneva, Geneva, Switzerland
Table of Contents
Results
  Behavioral Performance
  fMRI Analyses of ASL and gesture conditions relative to fixation baseline
Materials and Methods
  Participants
  Materials used to elicit ASL and gesture stimuli
  Procedure used to elicit ASL and gesture stimuli
  Selection of stimuli for fMRI experiment
  fMRI Control Stimuli
  Procedure
  MRI data acquisition
  fMRI preprocessing and data analysis
Figures
Results
Behavioral Performance
Behavioral data and analyses are reported in the main article. Plots of the accuracy and
reaction time data are shown in Figure S1.
fMRI Analyses of ASL and gesture conditions relative to fixation baseline
Within-group comparisons. The main article text reports the results of the contrasts of
each condition against fixation, in each group; the statistical maps of this contrast are shown in
Figure S2.
Here we also report the results of the direct comparison between ASL and gesture within
each group (i.e., without first contrasting each type of communication with its corresponding
backward-layered control condition, as was reported in the main text). The activation map for
this is shown in Figure S3. In general, when between-condition differences were found, ASL
elicited stronger activation than gesture stimuli in both signers and non-signers. The one
exception to this was that the superior parietal lobule (SPL) showed stronger activation for
gesture than ASL in signers. Notably, the areas of stronger activation for ASL than gesture were
almost completely non-overlapping in signers and non-signers.
Greater activation for ASL than gesture in signers was found bilaterally in the
middle/anterior STS, the right IFG, the thalamus and caudate nucleus, and in the medial
supplementary motor area (SMA). Although left IFG activation did not survive multiple
comparison correction, we conducted a region of interest (ROI) analysis in this region because
we had predicted differences between ASL and gesture a priori. Left IFG was defined as
Brodmann’s areas 44 and 45 on the basis of the Jülich histological atlas (28) and within this ROI,
we thresholded activations at z > 2.3, uncorrected for multiple comparisons. By this approach, a
large region of the left IFG was activated more strongly for ASL than gesture in signers, and a
small portion of the pars opercularis was also activated for the same contrast in non-signers.
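The ROI-restricted thresholding described above can be sketched as follows. This is an illustrative reconstruction rather than the analysis code used in the study: the file paths are placeholders, and the label indices for left BA 44 and BA 45 must be taken from the Jülich atlas documentation rather than from the placeholder values shown.

```python
# Minimal sketch of the ROI-restricted threshold described above (not the
# authors' code). File names and label values are placeholders.
import numpy as np
import nibabel as nib

atlas_img = nib.load("juelich_maxprob_atlas.nii.gz")       # placeholder path
zstat_img = nib.load("zstat_ASL_minus_gesture.nii.gz")     # placeholder path

atlas = atlas_img.get_fdata()
zstat = zstat_img.get_fdata()

LEFT_BA44, LEFT_BA45 = 12, 13          # placeholder indices: check the atlas labels
roi_mask = np.isin(atlas, [LEFT_BA44, LEFT_BA45])

# Voxels within the a priori left IFG ROI exceeding z > 2.3 (uncorrected)
suprathreshold = (zstat > 2.3) & roi_mask
print(f"{int(suprathreshold.sum())} suprathreshold voxels in the left IFG ROI")
```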
For hearing non-signers, ASL activated a broad, bilateral network more strongly than
gestures. This network was almost entirely orthogonal to that activated by the same contrast in
signers, however; the only area of overlap between groups was in the medial SMA of the left
hemisphere. The network activated in non-signers included the STSp, superior parietal lobule
(SPL), premotor cortex (the precentral gyrus), occipital and occipital-temporal regions, and
medial SMA, precuneus, and ventro-medial frontal cortex. In addition, right middle STS and
parahippocampal gyri were activated in this contrast. Although signers also showed stronger
right STS activation for ASL, the differences for non-signers were more posterior than for
signers, and showed virtually no overlap between the two groups.
Signers showed stronger activation for gestures than ASL only in the bilateral SPL.
Although non-signers also showed bilateral SPL activation in the reverse contrast (i.e., stronger
for ASL), the areas more strongly activated by gesture in signers were anterior and superior to
those activated in non-signers, with minimal overlap. In non-signers there were no brain areas in
which gestures evoked stronger activation than ASL.
Between-Group Comparisons. The results of the between-group comparisons for each
stimulus type largely replicated the differences observed qualitatively in the preceding section, as
seen in Figure S4. Signers showed stronger activation than non-signers, for both ASL and gesture,
in the middle/anterior STS bilaterally. These were the only areas more strongly activated in
signers when fixation was used as a baseline. Non-signers, on the other hand, showed stronger
activation than signers in the SPL and occipital regions, as well as ventro-medial frontal
cortex, but only for ASL stimuli. Non-signers did not show stronger activations than signers for
gestures in any brain region.
Materials and Methods
Participants
Nineteen congenitally deaf (80 dB or greater loss in both ears), native learners of ASL (8
male, mean age 22.7 years, range: 19–31) and 19 normally-hearing, native English speakers (9
male, mean age 20.3 years, range: 19–26) participated in this study. All were right handed, had at
least one year of post-secondary education, had normal or corrected-to-normal vision, and
reported no neurological problems. Both parents of all Deaf participants used ASL, and all but 2
Deaf people reported using hearing aids. None of the participants in this study had participated in
previous studies from our lab involving these or any similar stimuli. None of the hearing
participants reported knowing ASL, although one lived with roommates who used ASL, one
reported knowing “a few signs”, and another reported knowing how to fingerspell. All subjects
gave informed consent and the study procedures were reviewed by the University of Rochester
Research Subjects Review Board. Participants received $100 financial compensation for
participating in the study.
Materials used to elicit ASL and gesture stimuli
The ASL and gesture stimuli used in the fMRI experiment were elicited by the same set
of materials, which included a set of short videos as well as a set of pictures. The first were a set
of short, stop-motion animations originally developed by author T.S. to elicit ASL verbs of
motion constructions (2). These videos depict small toys (e.g., people, animals, and videos)
moving along various paths, sometimes in relation to other objects. Figure S5 shows still frames
taken from several of these stimuli, as examples. This set comprised 85 movies, all of which
were used for elicitation, though only a subset of the ASL and gesture sequences elicited by these were
ultimately used in the fMRI study (see below). The second set of elicitation stimuli were colored
“clipart” line drawings obtained from Internet searches. These were selected to show people
and/or objects, either involved in an action, or in particular positions relative to other objects. For
example, one picture showed a boy playing volleyball, while another depicted three chickens
drinking from a trough. One hundred such images were selected and used for ASL and gesture
elicitation, though again only a subset of the elicited gesture sequences were used in the fMRI
experiment.
Procedure used to elicit ASL and gesture stimuli
We recruited one native ASL signer and three non-signing, native English speakers as
models to produce the ASL and gesture sequences, respectively, to be used as stimuli in the fMRI
experiment. Our goal was to have the gesture sequences be as similar as possible to the ASL
stimuli in terms of fluidity of production, duration, and information represented. Of course, ASL
and gesture production are inherently different because a native ASL signer would be able to
readily access the necessary signs in her lexicon and produce a grammatical sequence describing
the scene; by contrast, non-signers would have to generate appropriate gestures "on the fly",
including deciding how to represent the individual referents and their relative positions and/or
movements. To help ensure fluidity, the gesture models were given opportunities to practice each
gesture prior to filming, and all were brought in to the studio on more than one occasion for
filming. We recruited three gesture models in order to generate a wider range of gestures from
which to select the most concise, fluid, and understandable versions. Since ASL is a language,
we used only one signer as a model, because the signs and grammar used would not be
expected to vary across native signers. While the variety in appearance of the non-signers
may have led to less habituation of visual responses to these stimuli than to the signer, we do not
see this as a problem because the control stimuli (backward-overlaid versions of the same
movies) were matched in this factor. Videos were recorded on a Sony digital video camera and
subsequently digitized for editing using Final Cut Pro (Apple, Inc., Cupertino, CA). The signer
was a congenitally deaf, native learner of ASL born to Deaf native ASL signing parents. For each
elicitation stimulus, she was instructed to describe the scene using ASL. Instructions were
provided by author T.S., a native signer. The gesturers were selected for their ability to generate
gestural descriptions of the elicitation stimuli. Two of the gesturers had some acting experience
but none were professional actors. The gesturers were never in the studio together and did not at
any time discuss their gestures with each other. Each gesturer was informed as to the purpose of
the recordings, and instructed by the hearing experimenters to describe the videos and pictures
that they saw using only their hands and body, without speaking, in a way that would help
someone watching their gestures choose between two subsequent pictures — one representing
the elicitation stimulus the gesturer had seen, and the other representing a different picture or
video. Each gesturer came to the studio on several occasions for stimulus recording, and had the
opportunity to produce each gesture several times both within and across recording sessions. We
found this resulted in the shortest and most fluid gestures; initial attempts were often very slow,
with the gesturer working out how best to depict particular objects and receiving feedback
from the experimenter regarding the clarity of the
gestures in the video recordings (i.e., positions of hands and body relative to the camera). The
experimenters who conducted the gesture recording (authors A.J.N. and N.F.) did not provide
suggestions about particular choices of gestures or the way scenes were described, nor were any
attempts made to make the gestures similar to ASL; feedback was provided only to encourage clarity and
fluidity.
Subsequent to ASL and gesture recording, the videos were viewed by the experimenters
and a subset were selected for behavioral pilot testing. Author T.S. was responsible for selecting
the ASL stimuli, while A.J.N. and N.F. selected the gestural stimuli. Only one example of a
gesture was selected for each elicitation stimulus (i.e., the gesture sequence from one of the three
gesturers), using criteria of duration (preferring shorter videos) and clarity of the relationship
between the gesture and the elicitation stimulus, while balancing the number of stimuli depicting
each of the three gesturers. In total, 120 gestured stimuli and 120 ASL stimuli were selected for
behavioral pilot testing.
Selection of stimuli for fMRI experiment
As noted above, a total of 240 potential stimuli were selected for the fMRI experiment.
Next we conducted a behavioral pilot study using six normally hearing non-signers, with the goal
of identifying which stimuli were most accurately and reliably identified by non-signers. The
pilot study was conducted in a similar way to the fMRI experiment: on each trial, one video was
shown, followed by two pictures. One of the pictures was the target stimulus that had been used
to elicit the preceding video (gesture or ASL), and the other was a “foil”. For the stop-motion
elicitation stimuli, still images were created that showed a key frame from the video, along with
a red arrow drawn on the image to depict the path of motion. Foils for these images either
showed the same object(s) with a different path ("path foils") or a different object(s) following
the same path ("object foils"). For the videos elicited by clipart, foils were other clipart pictures
that showed either a similar actor/object performing a different action, or different actors/objects
performing a similar action. Subjects were asked to choose via button press which picture best
matched the preceding video.
From the set of 240 pilot stimuli, the final set of stimuli for the fMRI experiment was selected as
those items with the highest mean accuracy across subjects. This resulted in a final set of 80 items;
40 of these were elicited by the stop-motion animation stimuli, and 40 by the clipart stimuli. For each
of these 80 stimuli, both the gestured and ASL versions of the stimulus were used in the fMRI study.
Presentation of these was counterbalanced, such that a given participant would see only the gestured or
the ASL version of a particular elicitation stimulus, but never both. The movies ranged in duration from
2 – 10.8 sec, with the ASL movies averaging 4 sec in length (range: 2.8 – 8.1 sec) and the gesture movies
averaging 7.2 sec (range: 4.5 – 9.2 sec). Although the ASL movies were on average shorter, the
backward-layered control movies were generated from these same stimuli and thus spanned a comparable
range of durations. Therefore the imaging results in which activation for the control stimuli is
subtracted out, and in particular the observed differences between ASL and gesture stimuli, should not
be attributable to differences in movie length between conditions.
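To make the selection criterion concrete, the following sketch (Python with pandas; the file and column names are hypothetical, not those of the actual pilot data) illustrates ranking the piloted items by mean accuracy and keeping the 40 best items of each elicitation type:

```python
import pandas as pd

# One row per (subject, item) pilot trial; 'item', 'elicitation', and 'correct'
# are hypothetical column names ('elicitation' = 'stop_motion' or 'clipart').
pilot = pd.read_csv("pilot_responses.csv")

# Mean accuracy per item across the six pilot subjects
item_acc = pilot.groupby(["item", "elicitation"], as_index=False)["correct"].mean()

# Keep the 40 most accurately identified items of each elicitation type,
# yielding the final 80-item set (40 stop-motion + 40 clipart)
final_set = (item_acc.sort_values("correct", ascending=False)
                     .groupby("elicitation", group_keys=False)
                     .head(40))

print(final_set["elicitation"].value_counts())
```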
fMRI Control Stimuli
Our goal in developing the control stimuli for the fMRI experiment was to match these as
closely as possible to the visual properties of the ASL and gesture stimuli, while preventing
participants from deriving meaning from the stimuli. We used the same approach as we had
employed successfully in previous fMRI studies of ASL (17,18). This involved making each
movie partially transparent as well as having it play backward, and then digitally overlaying
three such movies. The three movies selected for overlaying in each control video were chosen to
have similar lengths, and in the case of the gesture movies, all three overlaid movies showed the
same person; equal numbers of control movies were produced showing each of the three
gesturers. Overlaying three movies was done because in previous pilot testing, we found that
signers were quite good at understanding single (non-overlaid) ASL sentences played backward;
this was not true for overlaid stimuli.
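A minimal frame-level sketch of the backward-layered manipulation is shown below, assuming three clips of similar length and identical frame size (imageio and numpy; file names are hypothetical, and these are not the tools actually used to prepare the stimuli):

```python
import imageio
import numpy as np

# Three clips of similar length from the same person (hypothetical file names)
paths = ["clip_a.mp4", "clip_b.mp4", "clip_c.mp4"]

# Read each clip and reverse its frame order so that it plays backward
clips = [np.stack(imageio.mimread(p, memtest=False))[::-1] for p in paths]

# Truncate to the shortest clip so the three stacks can be combined frame by frame
n_frames = min(len(c) for c in clips)
clips = [c[:n_frames].astype(np.float32) for c in clips]

# "Partially transparent" overlay: an equal-weight average of the three reversed clips
layered = np.mean(clips, axis=0).astype(np.uint8)

imageio.mimwrite("control_backward_layered.mp4", layered, fps=30)
```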
Procedure
Participants were provided with instructions for the task prior to going into the MRI
scanner, and were reminded of these instructions once they were inside the scanner, immediately
prior to starting the task. Participants were told to watch each movie (gestured or ASL) and try to
understand what was being communicated. They were told that two of the four runs would
contain gesture produced by non-signers, while the other two contained ASL. Signers were
instructed to try to make sense of the gestures; non-signers were instructed to try to make sense
of all the stimuli, getting whatever they could out of the ASL stimuli. Each run started with
text indicating whether gesture or ASL would be presented. Participants were instructed to wait
until the response prompt after the end of each movie, and then choose which of two pictures
best matched the stimulus they had seen. The pictures shown were those described above under
Selection of stimuli for fMRI experiment — i.e., one target and one foil. For backward-layered
control stimuli, participants were instructed to watch for any point at which three of the hands in
the movie (there were 6 hands in each movie) had the same handshape (i.e., position of the
fingers and hand). The response options after these control trials were one picture depicting three
iconic hands in the same handshape, and a second picture showing three different handshapes.
The left/right position on the screen of each response prompt (target/foil or same/different
handshapes) was pseudo-randomized across trials to ensure that each possible response was
presented an equal number of times on each side of the screen. Participants made their responses
with their feet, via fiber-optic response boxes; this was done to minimize activation of the hand
representation in motor cortex associated with responding, since it was predicted that observing
people's hands in the stimuli might also activate hand motor cortex.
Participants were given several practice trials with each type of stimulus prior to going
into the MRI scanner, and were allowed to ask for clarification and given feedback if it appeared
they did not understand the instructions. For signers, all communication was in ASL, either by a
research assistant fluent in ASL, or via an interpreter. While in the MRI scanner, hearing
participants could communicate with researchers via an audio intercom, while for signers a 2-
way video intercom was used. Stimulus presentation used DirectRT software (Empirisoft, New
York) running on a PC computer connected to a JVC DLA-SX21U LCD projector which
projected the video via a long-throw lens onto a mylar projection screen placed at the head end
of the MRI bore, which participants viewed via an angled mirror.
In the MRI scanner, a total of 4 stimulus runs were conducted for each participant; two of
these contained ASL stimuli and two contained gesture stimuli. The ordering of the stimuli was
counterbalanced across participants within each subject group. Each run comprised 40 trials,
with equal numbers of target stimuli (i.e., ASL or gestures, depending on the run) and control
stimuli presented in a pseudo-randomized order and with “jittered” inter-trial intervals between 0
– 10 sec (in steps of 2 sec), during which time a fixation cross was presented, to optimize
recovery of the hemodynamic response function for each condition. Each trial began with a 1 sec visual
cue that indicated whether the task and stimuli on that trial required comprehension of ASL or
gestures, or attention to hand similarity in the control condition. The cue was followed by the
stimulus movie and then, after a random delay of 0.25 – 3 sec during which the screen was blank,
by the response prompt showing the two alternatives described above. The trial ended as soon as the
participant made a response, or after 4 sec if no response was made.
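The trial and run structure described above can be summarized in a short sketch (Python; the parameter names and the specific randomization scheme are illustrative rather than the DirectRT implementation actually used):

```python
import random

def build_run(condition: str, n_trials: int = 40, seed: int = 0):
    """Lay out one run: half target (ASL or gesture) trials and half control trials
    in pseudo-random order, with jittered inter-trial intervals of 0-10 s in 2 s steps
    and the correct-response picture appearing equally often on each side."""
    rng = random.Random(seed)
    trial_types = [condition] * (n_trials // 2) + ["control"] * (n_trials // 2)
    target_sides = ["left", "right"] * (n_trials // 2)
    rng.shuffle(trial_types)
    rng.shuffle(target_sides)

    run = []
    for ttype, side in zip(trial_types, target_sides):
        run.append({
            "cue_s": 1.0,                                # 1 s visual cue
            "type": ttype,                               # 'ASL', 'gesture', or 'control'
            "response_delay_s": rng.uniform(0.25, 3.0),  # blank screen before the prompt
            "max_response_s": 4.0,                       # prompt times out after 4 s
            "iti_s": rng.choice(range(0, 12, 2)),        # fixation cross, 0-10 s in 2 s steps
            "target_side": side,
        })
    return run

run = build_run("ASL")
print(len(run), sum(t["type"] == "control" for t in run))
```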
MRI data acquisition
MRI data were collected on a 3T Siemens Trio scanner using an 8 channel head coil.
Functional images were collected using a standard gradient-echo, echo planar pulse sequence
with TE = 30 ms, TR = 2 s, flip angle = 90 deg, field of view = 256 mm, 64 x 64 matrix
(resulting in 4 x 4 mm resolution in-plane), and 30, 4 mm thick axial slices collected in an
interleaved order. Each fMRI run started with a series of 4 “dummy” acquisitions (full-brain
volumes) which were discarded prior to analysis. T1-weighted structural images were collected
using a 3D MPRAGE pulse sequence, TE = 3.93 ms, TR = 2020 ms, TI = 1100 ms, flip angle =
15 deg, field of view = 256 mm, 256 x 256 matrix, and 160, 1 mm thick slices (resulting in 1 mm
isotropic voxels).
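The stated voxel dimensions follow directly from the field of view and matrix sizes; as a quick check:

```python
# Functional EPI: 256 mm field of view over a 64 x 64 matrix, 4 mm thick slices
inplane_mm = 256 / 64                               # 4.0 mm in-plane
functional_voxel_mm3 = inplane_mm * inplane_mm * 4  # 64 mm^3 per voxel

# Structural MPRAGE: 256 mm field of view over a 256 x 256 matrix, 1 mm slices
structural_voxel_mm3 = (256 / 256) ** 2 * 1         # 1 mm isotropic

print(inplane_mm, functional_voxel_mm3, structural_voxel_mm3)
```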
fMRI preprocessing and data analysis
FMRI data processing was carried out using FEAT (FMRI Expert Analysis Tool) Version
5.98, part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl). Prior to statistical
analysis, the following preprocessing steps were applied to the data from each run, for each
subject: motion correction using MCFLIRT; non-brain removal using BET; spatial smoothing
using a Gaussian kernel of FWHM 8 mm; grand-mean intensity normalization of the entire 4D
dataset by a single multiplicative factor; and highpass temporal filtering (Gaussian-weighted
least-squares straight line fitting, with sigma = 36.0 s). Runs were removed from further
processing and analysis if they contained head motion in excess of 2 mm or other visible MR
artifacts; in total, 2 runs were rejected from one non-signing participant, and a total of 4 runs
across three signing participants.
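For reference, the smoothing kernel and the highpass filter are specified in different parameterizations (FWHM in mm versus a Gaussian sigma in seconds). The conversions below are a sketch; the cutoff estimate assumes FEAT's usual convention that sigma is approximately half the cutoff period:

```python
import math

# Spatial smoothing: convert the 8 mm FWHM Gaussian kernel to its standard deviation
fwhm_mm = 8.0
sigma_mm = fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # about 3.4 mm

# Temporal highpass: sigma = 36 s is 18 volumes at TR = 2 s; assuming sigma ~ cutoff / 2,
# this corresponds to a highpass cutoff period of roughly 72 s
sigma_s, tr_s = 36.0, 2.0
sigma_volumes = sigma_s / tr_s
approximate_cutoff_s = 2.0 * sigma_s

print(round(sigma_mm, 2), sigma_volumes, approximate_cutoff_s)
```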
Statistical analysis proceeded through 3 levels, again using FEAT. The first level was the
analysis of each individual run, using general linear modelling (GLM). The time series
representing the “on” blocks for each of the 2 stimulus types (ASL or gesture, depending on the
run, and the backward-layered control condition) were entered as separate regressors into the
GLM, with pre-whitening to correct for local autocorrelation. Coefficients were obtained for
each stimulus type (effectively, the contrast between the stimuli and the fixation baseline periods
that occurred between trials), as well as for contrasts between the target stimuli (ASL or gesture)
and the backward-layered control condition.
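The logic of this first-level model can be sketched in plain numpy/scipy as follows. This is a schematic with made-up onsets and a canonical double-gamma HRF, not the FEAT implementation (which also applies the pre-whitening noted above):

```python
import numpy as np
from scipy.stats import gamma

TR, n_vols = 2.0, 200                       # illustrative run length (volumes)

def hrf(t):
    """Canonical double-gamma HRF (a common approximation), sampled at times t (s)."""
    return gamma.pdf(t, 6) - (1.0 / 6.0) * gamma.pdf(t, 16)

def boxcar(onsets, durations):
    """1 while a stimulus of a given type is on screen, 0 elsewhere (per volume)."""
    x = np.zeros(n_vols)
    for onset, dur in zip(onsets, durations):
        x[int(onset // TR):int((onset + dur) // TR) + 1] = 1.0
    return x

h = hrf(np.arange(0, 32, TR))

# Hypothetical onsets and durations (s) for the two regressors in one ASL run
target = np.convolve(boxcar([10, 50, 90], [4, 6, 5]), h)[:n_vols]    # ASL trials
control = np.convolve(boxcar([30, 70, 110], [5, 5, 6]), h)[:n_vols]  # backward-layered

X = np.column_stack([target, control, np.ones(n_vols)])  # design matrix + intercept
y = np.random.randn(n_vols)                              # one voxel's time series (toy)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)             # OLS coefficients
target_vs_control = betas[0] - betas[1]                    # contrast of parameter estimates
print(betas, target_vs_control)
```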
To identify brain areas activated by each stimulus type relative to its backward-layered
control condition, a second-level analysis was performed for each participant, including all four
runs from that participant. The inputs to this were the coefficients (beta weights) obtained for
each contrast in the first-level analyses. This was done using a fixed
effects model, by forcing the random effects variance to zero in FLAME (FMRIB's Local
Analysis of Mixed Effects).
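Conceptually, forcing the random-effects variance to zero reduces the across-run combination to an inverse-variance-weighted (fixed-effects) average of the run-level contrast estimates; a toy illustration with made-up values:

```python
import numpy as np

# Per-run contrast estimates and their within-run variances for one voxel (toy values)
copes = np.array([0.8, 1.1, 0.9, 1.0])        # contrast of parameter estimates, 4 runs
varcopes = np.array([0.04, 0.06, 0.05, 0.05])

# Fixed-effects combination: weight each run by the inverse of its within-run variance
# (no between-run variance term, i.e., the random-effects variance is forced to zero)
w = 1.0 / varcopes
fe_cope = np.sum(w * copes) / np.sum(w)
fe_var = 1.0 / np.sum(w)
fe_z = fe_cope / np.sqrt(fe_var)

print(round(fe_cope, 3), round(fe_var, 4), round(fe_z, 2))
```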
A third-level, across-subjects analysis was then performed separately for each group,
using the coefficients from each subject determined in the second-level GLM. This was used to
obtain the activation maps for each group, including both the contrasts with fixation baseline and
with backward-layered control stimuli. This was done using FLAME stage 1 and stage 2. The
resulting z statistic images were thresholded using clusters determined by z > 2.3 and a
(corrected) cluster significance threshold (35) of p < .05. The results of the contrasts with
backward-layered control stimuli were masked with the results of the contrast with fixation, to
ensure that any areas identified as being more strongly activated by ASL or gestures relative to
control stimuli showed significantly increased signal relative to the low-level fixation baseline.
Finally, between-group analyses were performed, again using the coefficients obtained
from each participant from the second-level analyses, in FLAME stages 1 and 2. Thresholding
was the same as for the third-level analyses. Thresholded maps showing greater activation for
one group or the other were masked to ensure that significant group differences were restricted to
areas that showed significantly greater activation for ASL or gesture relative to its respective
control condition, within the group showing stronger activation in the between-group contrast.
Figures
Figure S1: Behavioral data, including accuracy (left panel) and reaction time (right panel). Error
bars represent 95% confidence intervals around each mean.
Figure S2: Statistical maps for each stimulus type relative to the fixation cross baseline, in each
subject group. Thresholded at z >2.3, with a cluster size-corrected p < .05. In the coronal and
sagittal views, the right side of the brain is shown on the right side of each image.
Figure S3: Within-group, between-condition differences, for each stimulus condition relative to
fixation. Thresholded at z > 2.3, with a cluster size-corrected p < .05.
Figure S4: Between-group differences for the contrast of each stimulus type relative to the
fixation cross baseline. Thresholded at z > 2.3, with a cluster size-corrected p < .05.
Figure S5: Example stimuli. Top row: Example frames from ASL and gesture movies shown to
participants in the fMRI study. (A) and (B) are from movies elicited by stop-motion videos
produced by author TS (2); (C) are from movies elicited by clip art stimuli. Bottom row: Still
images that were shown to participants in the MRI study after each ASL or gesture stimulus. For
(A) and (B), these are also representative of the objects and motion paths present in the original
elicitation stimuli. Although these elicitation videos were quite understandable as animations, screen
shots taken from them were not very clear. Thus, the images shown here are reconstructions of the scenes
from the movies, made using the same props as in the original movies, or very similar ones.
Table S1: Areas showing significantly greater fMRI response to each condition (ASL and gesture) than
fixation baseline, for each group. Because the contrasts often elicited extensive activation, including a
single large cluster for each contrast that included the occipital, temporal, parietal, and often frontal
lobes bilaterally, to create these tables we first took the z maps shown in the figures, which had been
thresholded at z > 2.3 and then cluster size-corrected for p < .05, and masked them with a set of
anatomically-defined regions of interest (ROIs). These ROIs were derived from the Harvard-Oxford
Cortical Atlas provided with the FSL software package (version 5.0.x) by combining all anatomically-
labelled regions in that atlas comprising the larger regions labelled in the table. Results for each ROI
were obtained by performing FSL's cluster procedure on the thresholded z map masked by each ROI,
and then obtaining the most probable anatomical label for the peak of each cluster using FSL's
atlasquery function. Thus it is important to recognize, with reference to Figure S2, that many of the
active regions listed in this table were part of larger clusters in the whole-brain analysis. However,
because clustering for the table was performed on the same statistical map shown in Figure S2, the
location and extent of the activations in total are equivalent in the figure and table.
! !
! !
ASL$%$Deaf$Native$Signers$
Cluster$
Size$
$
!
!
!
ASL$%$Hearing$Non%Signers$
Cluster$
Size$
$
!
!
!
Brain$Region$
Max$z$
X$
Y$
Z$
Brain$Region$
Max$z$
X$
Y$
Z$
Frontal%Lateral$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
IFG,!pars!opercularis!
4398!
6.39!
<44!
16!
20!
Precentral!Gyrus!
3333!
5.43!
<36!
<20!
62!
!
!
!
!
!
!
Cingulate!Gyrus!
12!
2.85!
<18!
<24!
38!
!
!
!
!
!
!
Frontal!Pole!
8!
3.35!
0!
56!
<2!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
IFG,!pars!opercularis!
4126!
6.99!
58!
18!
22!
Precentral!Gyrus!
3286!
4.9!
46!
<14!
58!
Frontal!Pole!
82!
3.62!
36!
36!
<16!
Frontal!Pole!
78!
3.39!
2!
56!
<2!
Superior!Frontal!Gyrus!
49!
4.09!
14!
<6!
68!
Frontal!Pole!
37!
3.69!
34!
36!
<18!
Presentral!Gyrus!
12!
3.37!
2!
<14!
66!
Superior!Frontal!Gyrus!
35!
3.4!
12!
<10!
68!
!
!
!
!
!
!
Frontal!Pole!
12!
3.84!
2!
54!
<6!
Frontal%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supplementary!Motor!Area!
702!
6.03!
<2!
0!
54!
Frontal!Medial!Cortex!
836!
6.06!
0!
50!
<10!
Subcallosal!Cortex!
361!
5.26!
<10!
26!
<18!
Supplementary!Motor!Area!
482!
4.14!
<2!
<2!
56!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supplementary!Motor!Area!
495!
5.01!
2!
0!
60!
Frontal!Medial!Cortex!
629!
5.61!
2!
50!
<10!
Frontal!Medial!Cortex!
467!
4.37!
4!
36!
<18!
Supplementary!Motor!Area!
342!
4.05!
8!
<10!
64!
Frontal!Orbital!Cortex!
15!
2.96!
30!
28!
6!
Frontal!Orbital!Cortex!
151!
4.22!
36!
30!
<18!
Temporal%Lateral$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Planum!Temporale/Supramarginal!
Gyrus!
3364!
6.29!
<52!
<42!
20!
Lateral!Occipital!Cortex,!inferior!
1054!
5.44!
<42!
<62!
8!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Middle!Temporal!Gyrus,!temporoccipital!
5368!
5.97!
50!
<58!
10!
Middle!Temporal!Gyrus,!temporoccipital!
2868!
5.6!
48!
<42!
8!
!
!
!
!
!
!
Planum!Temporale!
9!
2.77!
56!
<36!
20!
Temporal%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Fusiform!Gyrus,!occipital!
2847!
5.58!
<36!
<78!
<8!
Fusiform!Gyrus,!occipital!
2860!
7.11!
<26!
<84!
<14!
Parahippocampus!Gyrus,!posterior!
137!
6.38!
<14!
<32!
<4!
Parahippocampal!gyrus,!posterior!
213!
5.93!
<12!
<32!
<2!
!
!
!
!
!
!
Parahippocampal!gyrus,!anterior!
26!
2.95!
<26!
<2!
<22!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lingual!Gyrus!
2847!
5.65!
6!
<88!
<8!
Fusiform!Gyrus,!temporal<occipital!
3151!
6.78!
36!
<58!
<16!
Fusiform!cortex,!anterior!temporal!
231!
5.01!
36!
<2!
<36!
Parahippocampal!gyrus,!posterior!
199!
6.64!
18!
<32!
<2!
Parahippocampal!gyrus,!anterior!
189!
7.17!
18!
<28!
<6!
Parahippocampal!gyrus,!anterior!
71!
3.9!
18!
<10!
<26!
Fusiform!cortex,!posterior!temporal!
13!
3.79!
46!
<18!
<18!
Parahippocampal!gyrus,!anterior!
28!
2.98!
26!
0!
<22!
Parietal%Superior$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Superior!Parietal!Lobule!
600!
5.73!
<32!
<48!
42!
Superior!Parietal!Lobule!
1952!
5.98!
<26!
<50!
46!
Postcentral!Gyrus!
146!
4.15!
<58!
<10!
46!
!
!
!
!
!
!
Postcentral!Gyrus!
30!
3.3!
<48!
<12!
60!
!
!
!
!
!
!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Superior!Parietal!Lobule!
469!
3.69!
32!
<54!
48!
Superior!Parietal!Lobule!
2293!
5.43!
30!
<46!
50!
Postcentral!Gyrus!
67!
3.21!
54!
<10!
54!
Postcentral!Gyrus!
10!
3.71!
44!
<6!
26!
Postcentral!Gyrus!
54!
3.73!
54!
<22!
58!
!
!
!
!
!
!
Parietal%Inferior$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!
1304!
6.65!
<54!
<42!
22!
Supramarginal!Gyrus,!posterior!
479!
3.96!
<30!
<46!
36!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!
426!
4.58!
<48!
<48!
10!
!
!
!
!
!
!
Supramarginal!Gyrus,!anterior!
12!
3.16!
<66!
<22!
32!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!
1013!
5.56!
60!
<38!
12!
Supramarginal!Gyrus,!posterior!
704!
5.9!
48!
<42!
10!
Supramarginal!Gyrus,!posterior!
39!
3.01!
32!
<40!
38!
Supramarginal!Gyrus,!posterior!
215!
4.5!
32!
<40!
40!
Angular!Gyrus!
9!
3.31!
30!
<56!
38!
Angular!Gyrus!
7!
3.48!
26!
<54!
36!
Occipital$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lateral!Occipital!Cortex,!inferior!
5365!
6.04!
<38!
<80!
<6!
Lateral!Occipital!Cortex,!inferior!
8386!
7.64!
<48!
<76!
<4!
Lateral!Occipital!Cortex,!superior!
255!
3.23!
<28!
<66!
38!
!
!
!
!
!
!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lateral!Occipital!Cortex,!inferior!
5348!
6.82!
50!
<66!
6!
Lateral!Occipital!Cortex,!inferior!
7446!
7!
38!
<82!
<10!
Posterior%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Cingulate!Gyrus,!posterior!
32!
8.25!
<16!
<28!
<6!
Parahippocampal!Gyrus,!posterior!
479!
5.91!
<20!
<30!
<6!
!
!
!
!
!
!
Cuneus!
137!
4.12!
<18!
<80!
32!
!
!
!
!
!
!
Precuneus!
22!
2.55!
<2!
<78!
50!
!
!
!
!
!
!
Cingulate!Gyrus,!posterior!
12!
2.79!
<18!
<26!
34!
!
!
!
!
!
!
Cuneus!
9!
3.41!
<12!
<88!
14!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Cingulate!Gyrus,!posterior!
50!
5.98!
20!
<30!
<4!
Parahippocampal!Gyrus,!posterior!
349!
6.36!
20!
<30!
<4!
!
!
!
!
!
!
Cuneus!
91!
3.95!
20!
<78!
30!
!
!
!
!
!
!
Precuneus!
9!
3.16!
24!
<52!
34!
!
!
!
!
!
!
Occipital!Pole!
6!
2.88!
14!
<86!
20!
Thalamus$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Posterior!Parietal!projections!
805!
8.25!
<16!
<28!
<6!
Temporal!projections!
647!
6.53!
<24!
<28!
<6!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Posterior!Parietal!projections!
795!
7.18!
8!
<30!
<4!
Temporal!projections!
617!
6.64!
18!
<32!
<2!
!
!
!
!
!
!
Pre<frontal!projections!
118!
3.32!
14!
<12!
2!
!
Gesture$%$Deaf$Native$Signers$
$
$
$
$
$
Gesture$%$Hearing$Non%Signers$
$
$
$
$
$
Brain$Region$
Cluster$
Size$
Max$z$
X$
Y$
Z$
Brain$Region$
Cluster$
Size$
Max$z$
X$
Y$
Z$
Frontal%Lateral$
$$
$$
$$
$$
$$
$$
$$
$$
$$
$$
$$
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
!Inferior!Frontal!Gyrus,!pars!triangularis!
4284!
5.49!
<52!
26!
12!
Precentral!Gyrus!
2926!
5.33!
<32!
<12!
52!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
!Precentral!Gyrus!
4340!
5.08!
50!
0!
38!
Precentral!Gyrus!
3242!
5.09!
62!
6!
34!
Superior!Frontal!Gyrus!
96!
4.07!
12!
<6!
66!
Supoerior!Frontal!Gyrus!
19!
3.21!
12!
<10!
68!
Precentral!Gyrus!
19!
3.64!
4!
<14!
66!
!
!
!
!
!
!
Frontal%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supplementary!Motor!Area!
478!
4.75!
<8!
<10!
64!
Supplementary!Motor!Area!
193!
3.76!
<6!
<6!
64!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supplementary!Motor!Area!
432!
4.95!
8!
<4!
62!
Supplementary!Motor!Area!
156!
3.75!
8!
<8!
62!
Temporal%Lateral$
!!
!!
!!
!!
!!
!!
8!
2.87!
28!
14!
<24!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Middle!Temporal!Gyrus,!
temporooccipital!
2329!
6.23!
<46!
<62!
2!
Lateral!Occipital!Cortex,!inferior!
896!
5.43!
<42!
<62!
8!
!
!
!
!
!
!
Planum!Temporale!
37!
3.32!
<46!
<44!
18!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Middle!Temporal!Gyrus,!
temporooccipital!
3726!
6.16!
46!
<60!
4!
Middle!Temporal!Gyrus,!
temporooccipital!
1727!
5.72!
46!
<42!
8!
!
!
!
!
!
!
Planum!Temporale!
7!
2.79!
56!
<36!
22!
Temporal%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lingual!Gyrus!
2423!
6.4!
<2!
<90!
<6!
Fusiform!Gyrus,!occipital!
2787!
6.75!
<34!
<78!
<16!
Parahippocampal!Gyrus,!posterior!
129!
5.55!
<20!
<28!
<8!
Parahippocampal!Gyrus,!posterior!
142!
5.9!
<14!
<32!
<6!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lingual!Gyrus!
2672!
6.36!
2!
<88!
<6!
Fusiform!Gyrus,!occipital!
2866!
6.56!
24!
<90!
<10!
Parahippocampal!Gyrus,!posterior!
148!
5.88!
16!
<32!
<2!
Parahippocampal!Gyrus,!posterior!
149!
6.7!
10!
<30!
<6!
Fusiform!Gyrus,!anterior!temporal!
119!
5.06!
34!
0!
<34!
Parahippocampal!Gyrus,!anterior!
60!
4.11!
18!
<8!
<20!
Fusiform!Gyrus,!posterior!temporal!
15!
2.67!
42!
<12!
<28!
!
!
!
!
!
!
Parietal%Superior$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
!Superior!Parietal!Lobule!
1380!
5.19!
<32!
<50!
52!
!Superior!Parietal!Lobule!
1309!
5.6!
<28!
<54!
54!
!
!
!
!
!
!
Postcentral!Gyrus!
52!
3.25!
<46!
<18!
62!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
!Superior!Parietal!Lobule!
1170!
4.99!
30!
<48!
56!
!Superior!Parietal!Lobule!
1089!
5.71!
32!
<50!
54!
Postcentral!Gyrus!
506!
4.19!
64!
<4!
36!
Postcentral!Gyrus!
563!
5.29!
54!
<16!
54!
!
!
!
!
!
!
Postcentral!Gyrus!
8!
3.67!
44!
<6!
28!
Parietal%Inferior$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!!
1726!
7.48!
<58!
<44!
18!
Supramarginal!Gyrus,!anterior!!
612!
4.14!
<34!
<40!
38!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!!
409!
5.1!
<48!
<48!
10!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Supramarginal!Gyrus,!posterior!!
1207!
5.63!
58!
<38!
18!
Supramarginal!Gyrus,!posterior!!
765!
6.27!
48!
<42!
10!
Supramarginal!Gyrus,!posterior!!
68!
3.67!
32!
<40!
40!
Supramarginal!Gyrus,!posterior!!
583!
4.78!
32!
<40!
40!
Supramarginal!Gyrus,!anterior!!
20!
2.71!
52!
<26!
46!
!
!
!
!
!
!
Occipital$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lateral!Occipital!Cortex,!inferior!
5501!
6.43!
<46!
<64!
4!
Lateral!Occipital!Cortex,!inferior!
7280!
6.85!
<48!
<76!
<4!
Lateral!Occipital!Cortex,!superior!
328!
3.7!
<24!
<74!
30!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Lateral!Occipital!Cortex,!inferior!
5426!
6.72!
50!
<66!
6!
Occipital!Pole!
6440!
6.51!
24!
<92!
<8!
Lateral!Occipital!Cortex,!superior!
83!
2.79!
30!
<56!
64!
Lateral!Occipital!Cortex,!superior!
260!
4.79!
24!
<58!
52!
Posterior%Medial$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Parahippocampal!Gyrus,!posterior!
33!
5.49!
<20!
<30!
<6!
Parahippocampal!Gyrus,!posterior!
28!
5.92!
<16!
<32!
<4!
!
!
!
!
!
!
Cuneus!
9!
3!
<20!
<74!
34!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Cingulate!Gyrus,!posterior!
38!
5.74!
20!
<30!
<4!
Parahippocampal!Gyrus,!posterior!
40!
6.13!
12!
<32!
<2!
!
!
!
!
!
!
Cuneus!
8!
3.21!
20!
<76!
36!
Thalamus$
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Posterior!Parietal!projections!
397!
5.75!
<8!
<30!
<4!
Posterior!Parietal!projections!
492!
5.92!
<16!
<32!
<4!
Right!Hemisphere!
!
!
!
!
!
!
!
!
!
!
!
Temporal!Projections!
502!
5.94!
16!
<32!
0!
Posterior!Parietal!projections!
546!
6.23!
16!
<32!
<2!
Table S2: Cluster locations and z values of significant activations in the contrasts between each
condition and matched backward-layered control condition, for each group. Maps were masked to ensure
that areas shown here also showed significantly greater activation than the fixation baseline, i.e.,
these voxels are a subset of those presented in Table S1 and Figure S2. These areas correspond to the
statistical map shown in Figure 1. Details of table creation are as for Table S1.
ASL

Deaf Native Signers || Hearing Non-Signers
Brain Region | Cluster Size | Max z | X | Y | Z || Brain Region | Cluster Size | Max z | X | Y | Z

Frontal-Lateral, Left Hemisphere
Inferior Frontal Gyrus, pars triangularis | 571 | 7.03 | -50 | 22 | 12 || Inferior Frontal Gyrus, pars opercularis | 117 | 4.25 | -42 | 14 | 26
Supplementary Motor Area | 22 | 3.39 | -4 | -14 | 62 || Frontal Pole | 8 | 3.58 | 0 | 56 | 4

Frontal-Lateral, Right Hemisphere
(none) || Inferior Frontal Gyrus, pars triangularis | 139 | 5.49 | 54 | 32 | 2
(none) || Frontal Pole | 66 | 4.74 | 8 | 60 | 14
(none) || Frontal Pole | 11 | 3.78 | 2 | 54 | -6
(none) || Frontal Pole | 8 | 3.45 | 36 | 34 | -14

Frontal-Medial, Left Hemisphere
Frontal Medial Cortex | 356 | 4.98 | -2 | 40 | -14 || Paracingulate Gyrus | 807 | 5.78 | -8 | 46 | -8
Supplementary Motor Area | 138 | 3.84 | -6 | -6 | 66 || (none)

Frontal-Medial, Right Hemisphere
Frontal Medial Cortex | 299 | 4.8 | 2 | 40 | -14 || Paracingulate Gyrus | 624 | 5.72 | 4 | 52 | 4
(none) || Frontal Orbital Cortex | 80 | 3.38 | 36 | 32 | -16

Temporal-Lateral, Left Hemisphere
Middle Temporal Gyrus, posterior | 1073 | 5.37 | -62 | -40 | -2 || Middle Temporal Gyrus, temporooccipital | 375 | 5.69 | -52 | -46 | 4
(none) || Planum Temporale | 16 | 3.05 | -56 | -40 | 20

Temporal-Lateral, Right Hemisphere
Middle Temporal Gyrus, anterior | 1345 | 4.09 | 60 | -4 | -22 || Middle Temporal Gyrus, temporooccipital | 952 | 7.74 | 52 | -38 | 0
(none) || Temporal Pole | 22 | 2.88 | 28 | 14 | -30

Temporal-Medial, Left Hemisphere
Fusiform Gyrus, posterior temporal | 115 | 6.36 | -34 | -34 | -18 || (none)
Fusiform Gyrus, posterior temporal | 46 | 3.69 | -38 | -10 | -30 || (none)

Temporal-Medial, Right Hemisphere
Fusiform Gyrus, posterior temporal | 66 | 3.84 | 32 | -32 | -22 || Fusiform Gyrus, temporal-occipital | 40 | 3.05 | 42 | -40 | -22
Parahippocampal Gyrus, anterior | 63 | 3.74 | 32 | -8 | -24 || Fusiform Gyrus, posterior temporal | 6 | 3.54 | 44 | -24 | -14
Fusiform Gyrus, posterior temporal | 12 | 3.45 | 42 | -20 | -18 || (none)

Parietal-Superior, Left and Right Hemispheres
(none)

Parietal-Inferior, Left Hemisphere
Angular Gyrus | 238 | 3.95 | -56 | -56 | 18 || Angular Gyrus | 362 | 5.08 | -46 | -56 | 14

Parietal-Inferior, Right Hemisphere
Supramarginal Gyrus | 7 | 2.73 | 40 | -38 | 4 || Supramarginal Gyrus | 181 | 6.09 | 46 | -38 | 6

Occipital, Left Hemisphere
Lateral Occipital Cortex | 9 | 2.79 | -54 | -62 | 18 || (none)

Occipital, Right Hemisphere
(none)

Posterior-Medial, Left Hemisphere
(none) || Precuneus | 295 | 6.84 | -4 | -56 | 12

Posterior-Medial, Right Hemisphere
(none) || Precuneus | 161 | 5.14 | 4 | -56 | 22

Thalamus, Left and Right Hemispheres
(none)
Gesture

Deaf Native Signers; Hearing Non-Signers
Brain Region | Cluster Size | Max z | X | Y | Z

Frontal-Lateral, Left Hemisphere
Inferior Frontal Gyrus, pars triangularis | 312 | 6.04 | -48 | 26 | 8
Superior Frontal Gyrus | 51 | 4.02 | -10 | -6 | 64
Frontal-Lateral, Right Hemisphere
(none)
Frontal-Medial, Left Hemisphere
Supplementary Motor Area | 190 | 4.04 | -6 | -8 | 62
Frontal-Medial, Right Hemisphere
(none)
Temporal-Lateral, Left Hemisphere
Superior Temporal Gyrus, anterior | 307 | 4.92 | -54 | -6 | -12
Middle Temporal Gyrus, temporooccipital | 92 | 3.23 | -56 | -52 | 2
Planum Temporale | 6 | 2.7 | -54 | -40 | 20
Temporal-Lateral, Right Hemisphere
Middle Temporal Gyrus, temporooccipital | 880 | 4.24 | 46 | -40 | 0
Temporal-Medial; Parietal-Superior
(none)
Parietal-Inferior, Left Hemisphere
Angular Gyrus | 15 | 3.28 | -52 | -58 | 18
Angular Gyrus | 130 | 3.71 | -44 | -52 | 16
Parietal-Inferior, Right Hemisphere
Supramarginal Gyrus, posterior | 121 | 5.57 | 42 | -42 | 8
Occipital, Left Hemisphere
Lateral Occipital Cortex, superior | 20 | 3.27 | -50 | -62 | 16
Occipital, Right Hemisphere; Posterior-Medial; Thalamus
(none)
Table S3: Cluster locations and z values of significant activations in the contrasts between the two
language conditions (ASL and gesture), for each group. These correspond to the statistical maps shown
in Figure 2, left panel. The input to the between-language contrast was the statistical maps shown in
Figure 1 and Tables S1 and S2, in other words with activation in the backward-layered control condition
subtracted and masked with the contrasts of the language condition with fixation baseline. Only
contrasts where ASL elicited greater activation than gesture are shown, because no regions showed
greater activation for gesture than for ASL. Details of table creation are as for Table S1.
(
Deaf%Native%Signers%
%
Hearing%Non1Signers%
%
Brain%Region%
Cluster%
Size%
Max%z%
X%
Y%
Z%
Brain%Region%
Cluster%
Size%
Max%z%
X%
Y%
Z%
Frontal1Lateral%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
!Inferior!Frontal!Gyrus,!pars!triangularis!
278!
3.09!
?46!
24!
16!
Right!Hemisphere!
!
!Inferior!Frontal!Gyrus,!pars!triangularis!
9!
2.49!
48!
28!
12!
Frontal1Medial%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Right!Hemisphere!
!
Temporal1Lateral%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Superior!Temporal!Gyrus,!posterior!
187!
3.02!
?56!
?42!
4!
Middle!Temporal!Gyrus,!temporoccipital!
82!
2.82!
?48!
?44!
6!
Right!Hemisphere!
!
Temporal1Medial%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Fusiform!Gyrus,!posterior!temporal!
81!
3.34!
?34!
?34!
?22!
Right!Hemisphere!
!
Parietal1Superior%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Right!Hemisphere!
!
Parietal1Inferior%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Angular!Gyrus!
79!
3.01!
?46!
?60!
16!
Supramarginal!Gyrus,!posterior!!
8!
2.68!
?50!
?44!
8!
Right!Hemisphere!
!
Occipital%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Lateral!Occipital!Cortex,!superior!
7!
2.93!
?50!
?62!
16!
Right!Hemisphere!
!
Posterior1Medial%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Cingulate!Gyrus,!posterior!
42!
2.92!
0!
?52!
20!
Right!Hemisphere!
!
Cingulate!Gyrus,!posterior!
107!
3.03!
4!
?46!
24!
Thalamus%
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
!!
Left!Hemisphere!
!
Right!Hemisphere!
!
Table S4: Cluster locations and z values of significant activations in the contrasts between the two
groups, for each language type. These correspond to the statistical maps shown in Figure 2, right panel.
The input to the between-group contrast was the statistical maps shown in Figure 1 and Table S2, in
other words with activation in the backward-layered control condition subtracted and masked with the
contrasts of the language condition with fixation baseline. Note that deaf signers only showed greater
activation than non-signers for ASL, whereas hearing non-signers only showed greater activation than
deaf signers for gesture. Details of table creation are as for Table S1.
ASL, Deaf Native Signers > Hearing Non-Signers || Gesture, Hearing Non-Signers > Deaf Native Signers
Brain Region | Cluster Size | Max z | X | Y | Z || Brain Region | Cluster Size | Max z | X | Y | Z

Frontal-Lateral, Left Hemisphere
Inferior Frontal Gyrus, pars triangularis | 16 | 2.75 | -54 | 22 | 10 || (none)

Frontal-Lateral, Right Hemisphere
Precentral Gyrus | 4340 | 5.08 | 50 | 0 | 38 || (none)
Superior Frontal Gyrus | 96 | 4.07 | 12 | -6 | 66 || (none)
Precentral Gyrus | 19 | 3.64 | 4 | -14 | 66 || (none)

Frontal-Medial
(none)

Temporal-Lateral, Left Hemisphere
Superior Temporal Gyrus, posterior/Planum Temporale | 73 | 3.45 | -62 | -16 | 2 || (none)

Temporal-Lateral, Right Hemisphere
Superior Temporal Gyrus, posterior/Planum Temporale | 37 | 3.11 | 64 | -20 | 4 || Middle Temporal Gyrus, temporooccipital | 174 | 3.43 | 42 | -48 | 6

Temporal-Medial
(none)

Parietal-Superior
(none)

Parietal-Inferior, Left Hemisphere
(none)

Parietal-Inferior, Right Hemisphere
(none) || Supramarginal Gyrus, posterior | 73 | 3.67 | 52 | -38 | 18

Occipital
(none)

Posterior-Medial
(none)

Thalamus
(none)
[Figure S1 plot: left panel, Accuracy (proportion correct responses); right panel, Reaction Time (ms); each shown for ASL and Gesture, in deaf signers and hearing non-signers.]