FIG. 5. Sample pictures from ''Paint Story,'' a wordless picture sequence designed to elicit spatialized discourse mechanisms of ASL.

Source publication
Article
Full-text available
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses tw...

Context in source publication

Context 1
... order to further explore AR and SJ's discourse abilities, the subjects were presented with a sequence of pictures which represented a story and were asked to produce a narrative describing the sequence of action. The story, which involved physical interaction among characters, was designed to elicit a range of spatialized discourse referential mechanisms (see Fig. 5). Subjects' narrations were videotaped, and the videotapes were analyzed by a deaf native signer of ASL. This analysis included, first, tabulation of the number of confabulatory or tangential utterances (defined as sentences whose content failed to relate to that of the previous sentence, or which included content not present in the pictorial story) and calculation of the ratio of such tangential utterances to the total number of utterances. Second, the analysis included tabulation of the number of errors in spatially organized discourse, defined as failure to maintain consistency of previously established spatial references, or ambiguity of content resulting from a failure to appropriately use spatial ...
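The scoring described here amounts to simple counts and a ratio computed over annotated utterances. A minimal sketch of how such a tabulation could be computed is given below; the Utterance fields, the score_narrative helper, and the toy narrative are illustrative assumptions, not materials or code from the study.

```python
# Sketch of the discourse scoring described above: count tangential utterances
# and spatial-reference errors from a hypothetical per-utterance annotation.
# Field names and example data are illustrative assumptions, not study materials.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    tangential: bool      # content unrelated to the prior sentence or not in the picture story
    spatial_error: bool   # inconsistent or ambiguous use of established spatial references

def score_narrative(utterances):
    total = len(utterances)
    tangential = sum(u.tangential for u in utterances)
    spatial_errors = sum(u.spatial_error for u in utterances)
    return {
        "total_utterances": total,
        "tangential_utterances": tangential,
        "tangential_ratio": tangential / total if total else 0.0,
        "spatial_reference_errors": spatial_errors,
    }

# Toy example (invented data):
narrative = [
    Utterance("MAN PAINT WALL", tangential=False, spatial_error=False),
    Utterance("DOG CHASE CAT", tangential=True, spatial_error=False),    # content not in the story
    Utterance("WOMAN POINT-left", tangential=False, spatial_error=True), # referent was established on the right
]
print(score_narrative(narrative))
```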

Citations

... Macrostructure and microstructure are intertwined, and the attempt to disentangle them is often difficult because the same constituents can perform both microstructure and macrostructure functions; yet the distinction is necessary for the description of discourse structure. Although most language impairments are associated with left-hemisphere damage (17, 21), discourse involves speech (and writing (22, 23)), language, emotions, social cognition, and cognitive domains such as memory and attention, so discourse impairments can result from neurodegenerative effects on both the left and right hemispheres (24-27). ...
Preprint
Full-text available
Neurodegeneration characterizes patients with different dementia subtypes (e.g., patients with Alzheimer's Disease, Primary Progressive Aphasia, and Parkinson's Disease), leading to progressive decline in cognitive, linguistic, and social functioning. Speech and language impairments are early symptoms in patients with focal forms of neurodegenerative conditions, coupled with deficits in cognitive, social, and behavioral domains. This paper reviews the findings on language and communication deficits and identifies the effects of dementia on the production and perception of discourse. It discusses findings concerning (i) language function, cognitive representation, and impairment, (ii) communicative competence, emotions, empathy, and theory-of-mind, and (iii) speech-in-interaction. It argues that clinical discourse analysis can provide a comprehensive assessment of language and communication skills in patients, which complements the existing neurolinguistic evaluation for (differential) diagnosis, prognosis, and treatment efficacy evaluation.
... From a medical perspective, a great deal of research has examined the effects of various medical conditions on signing and the neurobiological aspects of sign languages. Most notably, cases of deaf patients who have sustained significant brain injuries have been investigated for the effects on their communication, with attention to specific conditions such as left-hemisphere [38], [39] and right-hemisphere [40] damage. This research gave insight into the parts of the brain responsible for speech and signing. ...
Article
Full-text available
Sign languages are critical in conveying meaning by the use of a visual-manual modality and are the primary means of communication of the deaf and hard of hearing with their family members and with society. With the advances in computer graphics, computer vision, neural networks, and the introduction of new powerful hardware, research into sign languages has shown new potential. Novel technologies can help people learn, communicate, interpret, translate, visualize, document, and develop various sign languages and their related skills. This paper reviews the technological advancements applied in sign language recognition, visualization, and synthesis. We defined multiple research questions to identify the underlying technological drivers that address the challenges in this domain. This study is designed in accordance with the PRISMA methodology. We searched for articles published between 2010 and 2021 in multiple digital libraries (i.e., Elsevier, Springer, IEEE, PubMed, and MDPI). To automate the initial steps of PRISMA for identifying potentially relevant articles, duplicate removal, and basic screening, we utilized a Natural Language Processing toolkit. We then performed a synthesis of the existing body of knowledge and identified the studies that achieved significant advancements in sign language recognition, visualization, and synthesis. The identified trends, based on an analysis of almost 2000 papers, clearly show that technology developments, especially in image processing and deep learning, are driving new applications and tools that improve the various performance metrics in these sign language-related tasks. Finally, we identified which techniques and devices contribute to such results and what common threads and gaps would open new research directions in the field.
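The automated early PRISMA steps mentioned here (identification of potentially relevant articles, duplicate removal, and basic screening) can be approximated with lightweight text processing. The sketch below is only an illustration of that idea; the similarity threshold, keyword list, and record format are assumptions and do not reflect the authors' actual toolkit.

```python
# Illustrative sketch of automated duplicate removal and keyword screening for a
# systematic review; thresholds, keywords, and record format are assumptions.
from difflib import SequenceMatcher

def normalize(title: str) -> str:
    return " ".join(title.lower().split())

def deduplicate(records, threshold=0.9):
    """Drop records whose titles are near-duplicates of an already kept record."""
    kept = []
    for rec in records:
        title = normalize(rec["title"])
        if not any(SequenceMatcher(None, title, normalize(k["title"])).ratio() >= threshold
                   for k in kept):
            kept.append(rec)
    return kept

def screen(records, keywords=("sign language", "recognition", "synthesis", "visualization")):
    """Keep records whose title or abstract mentions at least one screening keyword."""
    return [r for r in records
            if any(kw in (r["title"] + " " + r.get("abstract", "")).lower() for kw in keywords)]

records = [
    {"title": "Sign Language Recognition with CNNs", "abstract": "..."},
    {"title": "Sign language recognition with CNNs",  "abstract": "..."},  # near-duplicate
    {"title": "A survey of sign language synthesis",  "abstract": "..."},
]
print(screen(deduplicate(records)))
```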
... Some studies on native signers, however, have also suggested that sign language classifiers are processed somewhat differently from lexical signs (e.g., Emmorey et al., 2002, 2005, 2013; Emmorey, McCullough, Mehta, & Grabowski, 2014; MacSweeney et al., 2002; McCullough, Saygin, Korpics, & Emmorey, 2012). Neuroimaging and lesion studies suggest that the processing of classifiers engages spatial-processing networks in the right hemisphere and in bilateral parietal brain areas to a greater extent than the processing of lexical signs (e.g., Atkinson, Campbell, Marshall, Thacker, & Woll, 2004; Atkinson, Marshall, Woll, & Thacker, 2005; Emmorey et al., 2002, 2005, 2013, 2014; Hickok et al., 1999, 2009; MacSweeney et al., 2002; Poizner, Klima, & Bellugi, 1987). Whether this evidence reflects enhanced (nonlinguistic) visual-spatial processing or whether it shows "spatially based syntactic processing" is still an open question. ...
Article
Full-text available
Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion direction within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: Are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the sentence word order of the stimuli (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting the two conditions, which, according to different processing hypotheses, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
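The reported N400 effect is, operationally, a difference in mean ERP amplitude between the OSV and SOV conditions in a post-stimulus time window. The sketch below illustrates that computation on placeholder arrays; the sampling rate, channel selection, 300-500 ms window, and simulated data are assumptions rather than the study's actual analysis parameters.

```python
# Illustrative N400-style contrast: mean amplitude (300-500 ms) for OSV minus SOV
# over centro-parietal channels. Shapes, sampling rate, and window are assumptions.
import numpy as np

sfreq = 500.0                                     # Hz (assumed)
times = np.arange(-0.2, 1.0, 1 / sfreq)
n_trials, n_channels = 40, 32

rng = np.random.default_rng(0)
epochs_sov = rng.normal(size=(n_trials, n_channels, times.size))  # placeholder data
epochs_osv = rng.normal(size=(n_trials, n_channels, times.size))

window = (times >= 0.3) & (times <= 0.5)          # classic N400 window (assumed)
centro_parietal = [10, 11, 12, 20, 21]            # assumed channel indices

def mean_amp(epochs):
    """Average amplitude over trials, selected channels, and the time window."""
    return epochs[:, centro_parietal, :][:, :, window].mean()

n400_effect = mean_amp(epochs_osv) - mean_amp(epochs_sov)
print(f"OSV minus SOV mean amplitude, 300-500 ms: {n400_effect:.3f} (arbitrary units)")
```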
... Some studies on native signers, however, have also suggested that sign language classifiers are processed somewhat differently from lexical signs (e.g., Emmorey et al., 2002, 2005, 2013; Emmorey, McCullough, Mehta, & Grabowski, 2014; MacSweeney et al., 2002; McCullough, Saygin, Korpics, & Emmorey, 2012). Neuroimaging and lesion studies suggest that the processing of classifiers engages spatial-processing networks in the right hemisphere and in bilateral parietal brain areas to a greater extent than the processing of lexical signs (e.g., Atkinson, Campbell, Marshall, Thacker, & Woll, 2004; Atkinson, Marshall, Woll, & Thacker, 2005; Emmorey et al., 2002, 2005, 2013, 2014; Hickok et al., 1999, 2009; MacSweeney et al., 2002; Poizner, Klima, & Bellugi, 1987). Whether this evidence reflects enhanced (nonlinguistic) visual-spatial processing or whether it shows "spatially based syntactic processing" is still an open question. ...
Article
Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
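One plausible way to model such data (reaction times as a function of word order and age-of-acquisition group, with age as a covariate and participants as a random effect) is a mixed-effects regression. The sketch below is a generic illustration on simulated data; the column names, effect sizes, and model formula are assumptions, not the study's analysis.

```python
# Sketch of a mixed-effects model of reaction times by word order and age of
# acquisition group; simulated data and formula are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 20, 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "age": np.repeat(rng.integers(28, 59, n_subj), n_trials),
    "aoa_group": np.repeat(rng.choice(["early", "later"], n_subj), n_trials),
    "word_order": rng.choice(["SOV", "OSV"], n_subj * n_trials),
})
# Simulated reaction times (ms) with a slowdown for older participants and OSV orders.
df["rt"] = 900 + 5 * df["age"] + 40 * (df["word_order"] == "OSV") + rng.normal(0, 80, len(df))

model = smf.mixedlm("rt ~ word_order * aoa_group + age", df, groups=df["subject"])
print(model.fit().summary())
```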
... There are interesting similarities and differences between Heather's signing and that of the signers who had visuospatial impairments following right hemisphere damage and who were studied by Poizner and colleagues (Hickok et al., 1999; Poizner et al., 1987; Poizner & Kegl, 1992). Heather's errors at both the sentential and the narrative levels are reminiscent of those found in signers with right hemisphere damage, and yet, unlike them, she exhibited a full range of affective and grammatical facial expression and normal prosody in spontaneous, everyday communication. ...
... This includes the inferior frontal gyrus (IFG, classically called Broca's area), the superior temporal sulcus (STS) and adjacent superior and middle temporal gyri, and the inferior parietal lobe (IPL, classically called Wernicke's area) including the angular (AG) and supramarginal gyri (SMG) (4, 8-18). Likewise, narrative and discourse-level aspects of signed language depend largely on right STS regions, as they do for spoken language (17, 19). While the neural networks engaged by signed and spoken language are overall quite similar, some studies have suggested that the linguistic use of space in sign language engages additional brain regions. ...
Article
Full-text available
Significance Although sign languages and nonlinguistic gesture use the same modalities, only sign languages have established vocabularies and follow grammatical principles. This is the first study (to our knowledge) to ask how the brain systems engaged by sign language differ from those used for nonlinguistic gesture matched in content, using appropriate visual controls. Signers engaged classic left-lateralized language centers when viewing both sign language and gesture; nonsigners showed activation only in areas attuned to human movement, indicating that sign language experience influences gesture perception. In signers, sign language activated left hemisphere language areas more strongly than gestural sequences. Thus, sign language constructions—even those similar to gesture—engage language-related brain systems and are not processed in the same ways that nonsigners interpret gesture.
... Initial studies reported that signers with damage to the right hemisphere had visual-spatial deficits but well-preserved language skills (Poizner et al., 1987). However, as further studies appeared, it became clear that right-hemisphere damage in users of signed languages also disrupted the meta-control of language use, resulting in impaired discourse abilities (Hickok et al., 1999). This finding is similar to those suggesting that right-hemisphere damage in users of spoken language impacts so-called extra-linguistic functions, such as the interpretation of metaphors and humor (Kaplan et al., 1990; Rehak et al., 1992). ...
Article
Full-text available
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.
... This area of research has the potential to reveal whether double dissociations exist in clinical deaf and hearing samples, as they do in nonclinical samples, between linguistic and nonlinguistic cognitive processing (see, e.g., Tucker, 1992; Campbell, 1997; Neville et al., 1998; Corina et al., 1999; Hickok et al., 1999). The influence of linguistic ability on processes of encoding and decoding affective stimuli in schizophrenia can thus be further delineated. ...
Article
Full-text available
There has been a relative lack of research on deaf people with schizophrenia, and no data exist regarding symptom structure in this population. Thus, we determined the factor structure of the 24-item Brief Psychiatric Rating Scale (BPRS) in deaf (n = 34) and hearing (n = 31) people with schizophrenia and compared it to a standard four-factor solution. An obliquely rotated factor analysis produced a solution for the BPRS that resembled others in the literature. Symptom clusters were additionally compared to cognitive and social-cognitive abilities. Activity and disorganised symptoms were the most consistent correlates of visual- and thought and language-related skills for deaf and hearing subjects respectively. Affective symptoms and facial affect processing were positively correlated among deaf but not hearing subjects. The data suggest that current symptom models of schizophrenia are valid in both hearing and deaf patients. However, relations between symptoms, cognition, and outcome from the general (hearing) literature cannot be generalised to deaf patients. Findings are broadly consistent with pathophysiologic models of schizophrenia suggesting a fundamental cortical processing algorithm operating across several domains of neural activity including vision, and thought and language. Support is provided for recent advances in social-cognitive interventions for people with schizophrenia.
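An obliquely rotated factor analysis of the 24 BPRS items, as described here, can be illustrated with standard tooling. The sketch below uses the factor_analyzer package with an oblimin rotation and four factors on random placeholder ratings; these choices and the data are assumptions, not the study's actual pipeline.

```python
# Sketch of an obliquely rotated factor analysis of 24 BPRS items, in the spirit
# of the analysis described above. Package choice, rotation, factor count, and
# the random data are illustrative assumptions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)
# 65 patients x 24 BPRS items rated 1-7 (placeholder ratings).
ratings = pd.DataFrame(rng.integers(1, 8, size=(65, 24)),
                       columns=[f"bprs_{i + 1}" for i in range(24)])

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(ratings)

loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                        columns=[f"factor_{i + 1}" for i in range(4)])
print(loadings.round(2))
```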
... For instance, the addressee needs to maintain the positions of a number of elements of the text as they are placed in sign space in order to understand later pointings, or to determine to which of the characters the facial emotions and, in our Tale text, the many (27) constructed actions apply. Right hemisphere lesions can impair mastery of spatial discourse organization during narration (Hickok et al., 1999), which is consistent with the larger right parietal involvement during Tale LSF minus Lect LSF. Furthermore, personal transfers require third-person perspective-taking, implying shifts of the spatial reference-frame. ...
Article
Full-text available
"Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. With both languages, the Topo texts activated several different regions that are involved in mental navigation and spatial working memory. No specific correlate of LSF spatial discourse was evidenced. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, in line with the presence of signing-space structure in both conditions. Motion processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers.
... In one case, Loew et al. (1997) report the deficits of a right-hemisphere damaged signer of ASL who has most grammatical structures intact but struggles with the use of role shift to depict different characters. Hickok et al. (1999) also note that one of the two right-hemisphere damaged individuals in their study suffered disruptions to his ability to correctly manage spatialized discourse. In particular, that signer consistently failed to produce the appropriate body shifts to indicate the different characters in an account of two people interacting with each other. ...
Article
Communication commonly occurs with both linguistic and gestural signals. In spoken languages the gestural signal can be manual (e.g., meaningful hand gestures) or vocal (e.g., meaningful uses of pauses, volume, and intonation), but in signed languages non-linguistic gesture and language occupy the same visual–gestural channel. One type of gesture, constructed action, is characteristically mimetic and allows for depiction of a character's actions with the speaker's body. We examine that type of gesture as it complements narratives presented in American Sign Language (ASL) to various audiences. A single text (an account of a Deaf leader's life) was recounted by two native Deaf signers of ASL to three different audiences, a design that allows us to apply the sociolinguistic framework of style known as audience design (Bell, 1984). The data show that constructed action can occur in both non-formal and formal settings. Additionally, if constructed action is analyzed by body parts (i.e., head, torso, arms/hands, and legs/feet) and degree of production (i.e., slight, moderate, exaggerated), some trends appear across settings. We suggest these trends could be attributed to the signers accommodating to their audiences. Finally, we report an association between degree of emphasis of constructed action and audience/setting for the two signers.
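The reported association between degree of constructed action and audience/setting is the kind of categorical relationship that can be examined with a contingency table. The sketch below illustrates such a test on invented counts; the category labels and numbers are assumptions, not the paper's data.

```python
# Sketch of testing an association between degree of constructed action and
# audience/setting with a contingency table; counts are invented for illustration.
import pandas as pd
from scipy.stats import chi2_contingency

# Rows: degree of production; columns: setting (invented counts).
counts = pd.DataFrame(
    [[12,  7,  4],    # slight
     [ 9, 11,  8],    # moderate
     [ 3,  6, 14]],   # exaggerated
    index=["slight", "moderate", "exaggerated"],
    columns=["informal", "semi-formal", "formal"],
)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```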