Figure (available from Frontiers in Psychology): Cues in French Cued Speech: hand-shapes for consonants and hand placements for vowels. Adapted from http://sourdsressources.wordpress.com.

Source publication
Article
Full-text available
Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. In order to disambiguate information from lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people to clearly and completely understand speech visually (Co...

Similar publications

Article
Full-text available
For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than with either device alone. Because of coarse spectral resolution, CIs do not provide fundamental frequency (F0) information that contributes to understanding of tonal lang...
Article
Full-text available
Objective Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). Th...
Article
Full-text available
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted...
Article
Full-text available
Objectives: To determine whether spatial auditory cues provided by cochlear implants can improve postural balance in adults with severe deafness. Methods: In the presence of spatial white noise, 13 adult cochlear implantees wore head and lumbar-mounted inertial sensors while standing in the dark for 30 seconds in two auditory conditions: hearing...
Article
Full-text available
Single-sided deafness prevents access to the binaural cues that help normal-hearing listeners extract target speech from competing voices. Little is known about how listeners with one normal-hearing ear might benefit from access to severely degraded audio signals that preserve only envelope information in the second ear. This study investigated whe...

Citations

... The compromise between lipread input and auditory perception in the McGurk effect has been taken as evidence that audio and visual information automatically interact in speech processing. Building on these findings, a follow-up study used the McGurk paradigm to investigate whether CS cues interact with auditory speech information in HI adults who were exposed to CS at young ages and were regular users of the system [7]. First, participants were presented with classical McGurk stimuli (auditory /pa/, lipreading /ka/) to verify whether or not they were sensitive to the expected effect (illusory perception of /ta/), similar to typically hearing individuals (TH). ...
... Therefore, based on the available data from this study, it is challenging to determine whether the observed increase in the proportion of responses reporting /ta/ is primarily attributable to visual speech decoding or to an augmented McGurk effect. While this behavioral study [7] provides valuable insights, it does not definitively establish a direct interaction between the perception of CS gestures and audiovisual (AV) speech processing. Further research is necessary to investigate how the perception of CS gestures interacts with natural speech cues in AV speech processing. ...
Article
Full-text available
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers. Adding CS gestures to lipread information increased the magnitude of effects observed at the N1 time window, but did not enhance P2 amplitude attenuation. Interestingly, presenting CS gestures without lipreading information yielded distinct response patterns depending on participants’ experience with the system. In the group of CS producers, AV perception of CS gestures facilitated the early stage of speech processing, while in the group of naïve participants, it elicited a latency delay at the P2 time window. These results suggest that, for experienced CS users, the perception of gestures facilitates early stages of speech processing, but when people are not familiar with the system, the perception of gestures impacts the efficiency of phonological decoding.
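The N1/P2 effects described in this abstract come down to comparing peak amplitudes and latencies extracted from averaged ERP waveforms within fixed time windows. As a rough illustration only (the window bounds, sampling rate, and synthetic waveform below are invented, not taken from the study), a minimal sketch of such a peak measurement might look like this:

```python
import numpy as np

def peak_in_window(erp, times, tmin, tmax, polarity):
    """Return (amplitude, latency) of the most extreme deflection of the
    requested polarity (-1 for N1, +1 for P2) within [tmin, tmax] seconds."""
    mask = (times >= tmin) & (times <= tmax)
    segment = erp[mask] * polarity          # flip sign so the sought peak is a maximum
    idx = np.argmax(segment)
    return erp[mask][idx], times[mask][idx]

# Toy averaged ERP: 1 s epoch sampled at 500 Hz (values in microvolts).
fs = 500
times = np.arange(-0.1, 0.9, 1 / fs)
rng = np.random.default_rng(0)
erp = (-3 * np.exp(-((times - 0.10) / 0.02) ** 2)   # synthetic N1 around 100 ms
       + 4 * np.exp(-((times - 0.20) / 0.03) ** 2)  # synthetic P2 around 200 ms
       + rng.normal(0, 0.2, times.size))

n1_amp, n1_lat = peak_in_window(erp, times, 0.08, 0.14, polarity=-1)
p2_amp, p2_lat = peak_in_window(erp, times, 0.15, 0.28, polarity=+1)
print(f"N1: {n1_amp:.2f} µV at {n1_lat * 1000:.0f} ms")
print(f"P2: {p2_amp:.2f} µV at {p2_lat * 1000:.0f} ms")
```

In a study like the one above, measurements of this kind would be taken per participant and condition and then compared statistically across groups (CS producers vs. naïve participants).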
... Some of the studies that have analyzed the role played by phonology in reading processes (Cupples et al., 2013; Mayer and Trezek, 2014) have shown that both deaf and hearing students need oral-language comprehension skills (essentially morphosyntax and vocabulary) and phonological resources to identify written words (Nielsen and Luetke-Stahlman, 2002). As for the origin of these written representations, studies conducted over the last two decades have abandoned the traditional view that they develop solely from the person's acoustic information and have shown that, in both deaf and hearing people, phonological representations of words can be built from audiovisual sensory information perceived through different channels: acoustic (when deaf students have functional residual hearing or use cochlear implants), visual (when they perceive information through lip-reading), and kinesthetic (when, in addition to lip-reading, Cued Speech is used as a complementary communication system that removes the ambiguities of the oral language) (Bayard, Colin and Leybaert, 2014; Leybaert, Aparicio and Alegría, 2011; see chapter 9 for a review). ...
Chapter
Full-text available
Within the framework of a longitudinal follow-up, this chapter studies the evolution of oral language and reading comprehension in a group of 13 profoundly deaf children with early cochlear implants (CI), enrolled in co-enrollment schools where deaf and hearing pupils are educated together and where both oral language and sign language are used. The children were assessed at the end of the 2nd and 4th years of Primary Education on comprehension of written texts and on two components of oral language: receptive vocabulary and grammatical comprehension. The data from this follow-up point to a positive evolution throughout schooling: at the end of 4th grade most students obtain results that place them within the normal range relative to the hearing population.
... Following on from this work, Bayard et al. (2014) conducted several studies based on the paradigm of Alegria & Lechat (2005), showing that (1) hearing status influences the weight given to the manual information: deaf people exposed early to LfPC (Langue française Parlée Complétée, French Cued Speech) take the manual information into account more than deaf people exposed late, or than hearing people exposed to LfPC. ...
... ["Compromise" response vs. "combination" response in the study by Bayard et al. (2014); adapted from Bayard et al. (2015)] Bayard et al. (2014) thus demonstrated the integration of labial and manual information in speech processing when speech is transmitted without auditory information. In 2015, the same team of researchers set out to observe the mechanisms of integration of manual and labial information when speech is transmitted with auditory information (Bayard et al., 2015). ...
Thesis
Full-text available
Although cochlear implants (CIs) improve speech perception in deaf children, the perception of certain acoustic features can remain impaired, oral language development affected, and phonological skills limited. The manual gestures of French Cued Speech (Langue française Parlée Complétée, LfPC) can then supply the missing phonological information. Several studies have shown the benefits of exposure to LfPC for speech perception and phonological development in spoken language. However, few studies have explored the link between CIs and LfPC. This thesis examines the long-term benefits of exposure to LfPC on the phonological development of children with CIs. Our hypothesis is that exposure to LfPC improves speech perception, which fosters the development of phonological representations in children with CIs. We assume that the phonological skills developed in perception, through exposure to LfPC, transfer to speech production, which should improve phoneme production. To characterize this improvement, acoustic and articulatory data were collected.
First, the speech production of 14 children with CIs and 71 normally hearing children, aged 60 to 140 months, was analyzed using the picture-naming task of the EULALIES battery, designed to test the accuracy of spontaneous phoneme production in isolated words. The analysis showed more accurate phoneme production in children with CIs who had developed a high level of LfPC decoding, compared with children whose decoding skills were more limited. As previously established, our results indicate that early implantation facilitates phonological development, but that the production of certain acoustic features, such as voicing, nasality, and manner or place of articulation, remains degraded even with early implantation. Our analyses also reveal that a high level of LfPC decoding reduces the number of errors on these features: our data suggest that adequate exposure to LfPC improves the production of voicing, the nasality contrast, and manner and place of articulation.
Second, the acoustic productions and articulatory gestures, collected with tongue ultrasound, of nine children with CIs exposed to LfPC and ten normally hearing children, aged 51 to 137 months, were studied. The results suggest that exposure to LfPC allows children with CIs to produce lingual articulatory gestures in the same way as their normally hearing peers, especially when they have developed high decoding skills. The data also show that a high level of LfPC decoding promotes the place-of-articulation distinction for plosives and fricatives in children with CIs.
As supported by several research teams, exposure to LfPC is functionally beneficial for speech perception since it provides visual access to all the phonemes of French. The results of our two studies highlight its longer-term effects on speech production, probably explained by the fact that better perceptual access provides better phonological representations. Finally, this thesis provides two reference datasets on the speech production of typically developing children and children with CIs: a set of phonetic data and a set of acoustic and articulatory data. These data can inform clinical practice by providing directions for speech-language therapy to support children with CIs, and can also support everyday interactions at home by confirming the crucial role of visual cues for optimal development of speech production and processing.
... Visual cues congruent to the auditory signal (audiovisual input) can enhance speech perception in both NH listeners and listeners with hearing loss (Erber, 1971; Lachs et al., 2001; Bergeson et al., 2003). However, the benefit of visual cues depends on the available auditory information and hearing status of the listener (Bayard et al., 2014). Therefore, it is conceivable that CI users rely more on visual cues compared with NH listeners, particularly in difficult listening situations. ...
Article
Objectives: Previous research has shown that children with cochlear implants (CIs) encounter more communication difficulties than their normal-hearing (NH) peers in kindergarten and elementary schools. Yet, little is known about the potential listening difficulties that children with CIs may experience during secondary education. The aim of this study was to investigate the listening difficulties of children with a CI in mainstream secondary education and to compare these results to the difficulties of their NH peers and the difficulties observed by their teachers. Design: The Dutch version of the Listening Inventory for Education Revised (LIFE-R) was administered to 19 children (mean age = 13 years 9 months; SD = 9 months) who received a CI early in life, to their NH classmates (n = 239), and to their teachers (n = 18). All participants were enrolled in mainstream secondary education in Flanders (first to fourth grades). The Listening Inventory for Secondary Education consists of 15 typical listening situations as experienced by students (LIFEstudent) during class activities (LIFEclass) and during social activities at school (LIFEsocial). The teachers completed a separate version of the Listening Inventory for Secondary Education (LIFEteacher) and Screening Instrument for Targeting Educational Risk. Results: Participants with CIs reported significantly more listening difficulties than their NH peers. A regression model estimated that 75% of the participants with CIs were at risk of experiencing listening difficulties. The chances of experiencing listening difficulties were significantly higher in participants with CIs for 7 out of 15 listening situations. The 3 listening situations that had the highest chance of resulting in listening difficulties were (1) listening during group work, (2) listening to multimedia, and (3) listening in large-sized classrooms. Results of the teacher's questionnaires (LIFEteacher and Screening Instrument for Targeting Educational Risk) did not show a similar significant difference in listening difficulties between participants with a CI and their NH peers. According to teachers, NH participants even obtained significantly lower scores for staying on task and for participation in class than participants with a CI. Conclusions: Although children with a CI seemingly fit in well in mainstream schools, they still experience significantly more listening difficulties than their NH peers. Low signal to noise ratios (SNRs), distortions of the speech signal (multimedia, reverberation), distance, lack of visual support, and directivity effects of the microphones were identified as difficulties for children with a CI in the classroom. As teachers may not always notice these listening difficulties, a list of practical recommendations was provided in this study, to raise awareness among teachers and to minimize the difficulties.
... In particular, researchers suggested that infants, during the development of their auditory and visual perception, fuse facial cues together with audio information in an effort to better discriminate and recognize emotions. In the same spirit, the authors in [3] studied how deaf people perceive phoneme sounds by engaging their visual perceptual system for lipreading (or speech-reading). In a similar manner, concerning emotional cross-modal relationships, prosodic speech information (linguistic variation in speech such as pitch, tempo, loudness, etc.) and its correlation with facial features have been intensively studied in [4][5][6]. ...
Article
Full-text available
Accessing large, manually annotated audio databases in an effort to create robust models for emotion recognition is a notably difficult task, handicapped by the annotation cost and label ambiguities. On the contrary, there are plenty of publicly available datasets for emotion recognition which are based on facial expressivity due to the prevailing role of computer vision in deep learning research, nowadays. Thereby, in the current work, we performed a study on cross-modal transfer knowledge between audio and facial modalities within the emotional context. More concretely, we investigated whether facial information from videos could be used to boost the awareness and the prediction tracking of emotions in audio signals. Our approach was based on a simple hypothesis: that the emotional state’s content of a person’s oral expression correlates with the corresponding facial expressions. Research in the domain of cognitive psychology was affirmative to our hypothesis and suggests that visual information related to emotions fused with the auditory signal is used from humans in a cross-modal integration schema to better understand emotions. In this regard, a method called dacssGAN (which stands for Domain Adaptation Conditional Semi-Supervised Generative Adversarial Networks) is introduced in this work, in an effort to bridge these two inherently different domains. Given as input the source domain (visual data) and some conditional information that is based on inductive conformal prediction, the proposed architecture generates data distributions that are as close as possible to the target domain (audio data). Through experimentation, it is shown that classification performance of an expanded dataset using real audio enhanced with generated samples produced using dacssGAN (50.29% and 48.65%) outperforms the one obtained merely using real audio samples (49.34% and 46.90%) for two publicly available audio–visual emotion datasets.
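The abstract above describes a conditional, adversarial generator that maps visual features toward audio-like feature distributions. The following sketch is not the dacssGAN architecture itself; the layer sizes, the conditioning vector standing in for the inductive-conformal-prediction code, and the single training step are all assumptions. It only illustrates, in PyTorch, the general conditional-GAN pattern that the description relies on:

```python
import torch
import torch.nn as nn

# Illustrative dimensions (not taken from the paper).
VIS_DIM, AUDIO_DIM, NOISE_DIM, COND_DIM = 512, 128, 64, 8

class Generator(nn.Module):
    """Maps a visual embedding + noise + conditional code to an audio-like feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VIS_DIM + NOISE_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, AUDIO_DIM),
        )
    def forward(self, visual, noise, cond):
        return self.net(torch.cat([visual, noise, cond], dim=1))

class Discriminator(nn.Module):
    """Scores whether an audio feature vector is real, given the conditional code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(AUDIO_DIM + COND_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, audio, cond):
        return self.net(torch.cat([audio, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One illustrative training step on random tensors standing in for paired features.
visual = torch.randn(32, VIS_DIM)
real_audio = torch.randn(32, AUDIO_DIM)
cond = torch.randn(32, COND_DIM)          # placeholder for the conformal-prediction code
noise = torch.randn(32, NOISE_DIM)

fake_audio = G(visual, noise, cond)

# Discriminator step: real audio scored as real, generated audio as fake.
d_loss = (bce(D(real_audio, cond), torch.ones(32, 1))
          + bce(D(fake_audio.detach(), cond), torch.zeros(32, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the generated audio pass as real.
g_loss = bce(D(fake_audio, cond), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In the actual paper, the generated audio features would augment the real audio training set for the downstream emotion classifier; the sketch stops at the adversarial update itself.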
... Due to the impoverished signal available, the acquisition of phonological structures of words remains difficult for CI users (Nittrouer, Sansom, Low, Rice, & Caldwell-Tarr, 2014). Deaf children who are less proficient with their CI rely more on lip-reading to perceive speech than hearing children, while those who are more proficient rely more on auditory information (Bayard, Colin, & Leybaert, 2014; Huyse, Berthommier, & Leybaert, 2013; Rouger, Fraysse, Deguine, & Barone, 2008). Therefore, mental speech representations, regarding the phonological structure and phoneme identity, are likely less accurate for less proficient CI users than proficient users. ...
Article
This study aims to compare word spelling outcomes for French-speaking deaf children with a cochlear implant (CI) with hearing children who matched for age, level of education and gender. A picture written naming task controlling for word frequency, word length, and phoneme-to-grapheme predictability was designed to analyze spelling productions. A generalized linear mixed model on the percentage of correct spelling revealed an effect of participant’s reading abilities, but no effect of hearing status. Word frequency and word length, but not phoneme-to-grapheme predictability, contributed to explaining the spelling variance. Deaf children with a CI made significantly less phonologically plausible errors and more phonologically unacceptable errors when compared to their hearing peers. Age at implantation and speech perception scores were related to deaf children’s errors. A good word spelling level can be achieved by deaf children with a CI, who nonetheless use less efficiently the phoneme-to-grapheme strategy than do hearing children.
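The analysis reported above is a mixed model over item-level spelling accuracy with predictors such as word frequency, word length, phoneme-to-grapheme predictability, and reading ability. As a hedged illustration only (simulated data, invented effect sizes, and a plain linear mixed model with a random intercept per child as a simplified stand-in for the generalized/binomial model actually reported), such a model could be specified like this with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial-level data: 40 children x 30 words (all names and effects invented).
rng = np.random.default_rng(1)
n_child, n_word = 40, 30
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_child), n_word),
    "reading": np.repeat(rng.normal(0, 1, n_child), n_word),                 # z-scored reading ability
    "word_freq": np.tile(rng.normal(0, 1, n_word), n_child),                 # z-scored word frequency
    "word_len": np.tile(rng.integers(3, 10, n_word).astype(float), n_child), # length in letters
    "pg_pred": np.tile(rng.normal(0, 1, n_word), n_child),                   # phoneme-to-grapheme predictability
})
# Simulate accuracy so that reading ability and frequency help, length hurts, and
# predictability has no effect, mirroring the pattern of effects in the abstract.
score = 0.8 * df.reading + 0.5 * df.word_freq - 0.3 * (df.word_len - 6) + rng.normal(0, 1, len(df))
df["correct"] = (score > 0).astype(float)

# Mixed model with a random intercept per child.
model = smf.mixedlm("correct ~ reading + word_freq + word_len + pg_pred", df, groups=df["child"])
print(model.fit().summary())
```

The point of the sketch is only the model structure: fixed effects for the item and participant predictors named in the abstract, plus a random intercept that absorbs between-child variability.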
... Strikingly, most studies assessing the role of CS were realized in a pure visual environment without sound. The combination of sound, lips, and manual cues was only recently explored by Bayard, Colin, and Leybaert (2014) and Bayard, Leybaert, and Colin (2015) who examined syllable perception by CI participants in a paradigm including various cases of congruent or incongruent combinations of auditory and visual speech stimuli. The results showed that, in quiet conditions, CS receivers do combine sound, lip shapes, and manual cues into a unitary percept. ...
... Indeed, the experimental data clearly show that audition is involved in the A condition, lipreading does play a role in the AV condition and manual cues do intervene for CS readers in the AVC condition. Moreover, Bayard et al. (2014) have shown that deaf participants do integrate sound, lips, and hands into a single percept. Finally, even if fusion per se was not tested in the present study, we can say at this stage that D/HH participants appear to be able to switch efficiently from A, AV, to AVC conditions. ...
Article
Speech perception in noise remains challenging for Deaf/Hard of Hearing people (D/HH), even fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects mastering Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with the perception of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained for signal-to-noise ratios (SNRs) 11 dB higher in D/HH participants compared with TH ones. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
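The 11 dB figure above is a statement about signal-to-noise ratio, SNR(dB) = 10 log10(P_signal / P_noise); an 11 dB higher SNR corresponds to roughly 12.6 times less noise power for the same signal level. A small, purely illustrative computation (random waveforms, not the study's speech material):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from two waveforms of equal length."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10 * np.log10(p_signal / p_noise)

# Illustrative mixtures: the same signal against noise scaled to produce
# ~0 dB and ~+11 dB SNR (the gap reported between the two groups).
rng = np.random.default_rng(2)
speech = rng.normal(0, 1.0, 16000)
noise_0db = rng.normal(0, 1.0, 16000)            # equal power -> about 0 dB SNR
noise_quieter = noise_0db / (10 ** (11 / 20))    # amplitude scaled down by 11 dB -> about +11 dB SNR
print(f"{snr_db(speech, noise_0db):.1f} dB, {snr_db(speech, noise_quieter):.1f} dB")
```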
... Two notions should be considered in this context. First, phonology is not exclusively auditory but audiovisual, in both hearing and deaf people (Bayard, Colin and Leybaert, 2014; Dodd, MacIntosh and Woodhouse, 1998). Consequently, children with profound deafness possess phonological representations of words acquired visually, using lip-reading alone or supported by Cued Speech (LaSasso, Crain and Leybaert, 2010; Trezek and Wang, 2006). ...
Article
Full-text available
Introduction: The aim of this study was to determine the role of orthographic and phonological codes in learning to read among deaf students with and without cochlear implants (CI). Several questions were analyzed: 1) whether the quality of their phonological representations and their ability to manipulate them depends on the use of a CI; 2) whether the reading levels they reach depend on the quality of those phonological representations; and 3) whether the orthographic representations they store derive from the corresponding phonological representations. Methods: Participants were 172 deaf students aged 6 to 18 years, divided into 4 groups: students with an early CI, students with a late CI, students without a CI with moderate deafness, and students without a CI with profound deafness. A control group of 797 hearing students of the same ages also participated. All were assessed with a reading test, a spelling test, and 3 metaphonological tests. Results: The results show that CIs, especially those implanted early (before 30 months), lead to better results on all experimental tasks. At the opposite end is the group with profound deafness and no CI. Orthographic representations appear to be stored using the corresponding phonological representations, which in turn improve thanks to the orthographic information provided by the reading activity itself. Discussion and conclusions: The need for explicit and systematic teaching of metaphonological skills before and during reading instruction for deaf students is discussed.
... Our third research question concerned the relative activation created by manual cues only and lipreading cues only compared to the activation created by the combined movements of lips and hands in CS. Some authors have suggested that the manual component of CS delivers more useful information than the lipread component to get access to the lexicon (Alegria et al., 1999; Alegria and Lechat, 2005; Attina et al., 2006; Troille, 2009; Bayard et al., 2014, 2015). We hypothesize that the cortical activations may reveal greater activation for unisensory manual than labial movements, but also indicate specific locus/loci for integration of manual and labial information, different from those reported for AV integration. ...
... First, temporal analyses of CS production have shown that manual cues are produced temporally in advance of the lips (Attina et al., 2004, 2006; Troille, 2009). Second, when lipreading and manual cues are incongruent (e.g., pronouncing with the lips the phoneme /v/ with handshape 1 coding the /d/, /p/, /j/ phonemes), most of the answers from the perceiver are related to the manual cues and not to lipreading, especially when the participant is an early CS user (Alegria and Lechat, 2005; Bayard et al., 2014, 2015). Third, deaf people who are early CS users often succeed in daily natural communication with other CS users by producing manual cues alone, without lipreading. ...
... This condition would enable us to dissociate brain areas linked to phonological processing from those linked to lexical processing in visual CS. Another interesting study would be to investigate the neural correlates of incongruent lip and manual cue movements (for example, a mouthed syllable /va/ accompanied by handshape 1 [/p, d, j/]), i.e., a McGurk-like effect experiment (Alegria and Lechat, 2005; Bayard et al., 2014, 2015). In CS, this would increase our understanding of the integration of visual speech features in deaf participants. ...
Article
Full-text available
We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication.
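One way to make the notion of "overlap" between activation maps concrete is to threshold two statistical maps and compute a Dice coefficient between the resulting binary masks. The sketch below is not the conjunction analysis used in the study; the toy maps, the z > 3.1 threshold, and the simulated similarity structure are all assumptions, chosen only so that the manual-cues map overlaps the full CS map more than the lipreading-only map does:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary activation masks (True = above threshold)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2 * inter / (mask_a.sum() + mask_b.sum())

# Toy 3-D statistical maps (values and threshold invented).
rng = np.random.default_rng(3)
z_cs      = rng.normal(0, 1, (40, 48, 40)) + 1.0    # CS (manual + lipreading) condition
z_manual  = z_cs + rng.normal(0, 0.5, z_cs.shape)   # manual-cues-only: similar to the CS map
z_lipread = rng.normal(0, 1, z_cs.shape) + 1.0      # lipreading-only: independent map
thr = 3.1
print("manual  vs CS:", round(dice(z_manual > thr, z_cs > thr), 3))
print("lipread vs CS:", round(dice(z_lipread > thr, z_cs > thr), 3))
```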