Figure 1. Sample scanpaths from phase I of the experiment for three autistic participants (first column) and three control participants (second column). Participants were instructed to examine the faces in any manner they selected.


Source publication
Article
Full-text available
The visual scanpaths of five high-functioning adult autistic males and five adult male controls were recorded using an infrared corneal reflection technique as they viewed photographs of human faces. Analyses of the scanpath data revealed marked differences in the scanpaths of the two groups. The autistic participants viewed nonfeature areas of the...

Context in source publication

Context 1
... and surprise. Extensive investigation has shown high agreement across cultures in the assignment of these basic emotions to corresponding facial expressions (see Ekman, 1989, for a detailed review of these studies). The stimuli were presented on a 69-cm video monitor and subtended a horizontal visual angle of 10.7° and a vertical visual angle of 14.2°. Assessment took place in a darkened room. After the participants were acquainted with the apparatus and experimental set-up, they were seated in a comfortable chair in front of the video monitor and were asked to place their chin in a chin rest. The chin rest aided in the processes of point of regard tracking and ensured a distance of 80 cm between the center of the stimulus display screen and the participant’s eyes. After a brief calibration procedure, the stimulus pictures were exposed. The experimental procedure was divided into two phases. In phase I, participants were shown 12 faces from the Ekman and Friesen series with one male and one female face for each of the six basic emotions. Participants were instructed to look at the photographs in any manner they selected (i.e., “Please look at the faces in any manner you wish.”). Each picture was presented for 2 seconds with a 2-second interstimulus interval. Eye movement data were recorded for post-experimental processing. During phase II, participants were shown 24 additional faces from the Ekman and Friesen series. These faces were selected from among the photographs that approximately 90% or more of the Ekman and Friesen (1976) normative sample had identified as portraying a particular emotion. The 24 photographs were balanced by gender and emotion so that they consisted of two male and two female faces for each of the six basic emotions. Participants were instructed to identify the emotion portrayed in each picture (i.e., “Please identify the emotions portrayed in these faces.”). Each picture was presented for 2 seconds with a 5-second interstimulus interval. A list of the six basic emotions was presented during the interstimulus interval to assist the participants in selecting from among the possible choices. Two practice trials (one trial with a face portraying happiness and another with a face portraying sadness) were administered before the start of this phase to ensure that each participant understood the emotion recognition task. All of the participants indicated that they understood the task and the meanings of the six emotion terms. Participants’ verbal responses were recorded by an experimenter and the eye movement data were recorded for post-experimental analysis. Each 2-second epoch of eye movement data was manually checked for tracking integrity. Eye blinks were identified by a loss of corneal reflection and were excluded from subsequent data analysis, as were off-screen gazes. Point of regard data were collected at a rate of 60 samples per second, which provided up to 120 data points during each of the twelve 2-second epochs for each participant. Regions of interest were defined for each face based on the location of the core facial features (i.e., eyes, nose, and mouth). To define these regions, each face was divided into 48 equally sized blocks subtending horizontal and vertical visual angles of 1.8°. Then, for each face, the blocks covering the core facial features were identified. On average, 13 blocks (27%) outlined the key feature regions of the eyes, nose, and mouth, whereas 35 blocks (73%) defined the remaining nonfeature regions of the face.
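As an illustration of this region-of-interest bookkeeping, the sketch below assigns each 60 Hz gaze sample to a block of a 1.8° grid and computes the percentage of valid samples landing on feature blocks. This is a minimal sketch under stated assumptions, not the authors' code: the 8 × 6 grid layout, the coordinate convention (degrees from the top-left corner of the face), and all names are hypothetical.

```python
import numpy as np

BLOCK_DEG = 1.8        # horizontal/vertical size of each block, in degrees
GRID_SHAPE = (8, 6)    # rows x cols, covering roughly 14.2 x 10.7 degrees

def percent_time_on_features(gaze_deg, feature_mask, valid):
    """Percentage of valid samples whose point of regard falls on feature blocks.

    gaze_deg     : (n, 2) array of (x, y) point-of-regard samples in degrees,
                   measured from the top-left corner of the face image
    feature_mask : (8, 6) boolean grid, True where a block covers the eyes,
                   nose, or mouth of this particular face
    valid        : (n,) boolean array; False for blinks and off-screen gazes
    """
    cols = np.floor(gaze_deg[:, 0] / BLOCK_DEG).astype(int)
    rows = np.floor(gaze_deg[:, 1] / BLOCK_DEG).astype(int)
    on_face = (valid & (rows >= 0) & (rows < GRID_SHAPE[0])
                     & (cols >= 0) & (cols < GRID_SHAPE[1]))
    if not on_face.any():
        return float("nan")
    return 100.0 * feature_mask[rows[on_face], cols[on_face]].mean()
```

At 60 samples per second, a 2-second epoch contributes up to 120 samples per face, so feature/nonfeature percentages of this kind are averages over at most 120 points per face per participant.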
The fundamental variables for quantitative comparative analyses included the percentage of time (i.e., number of sample points out of the points collected) in which the participant’s point of regard was on primary feature versus nonfeature areas of the face as well as the percentage of fixations made on primary facial features versus nonfeature areas of the face. Moreover, the average duration of fixations and the mean number of fixations per stimulus face were calculated for each subject. A fixation was defined as a set of consecutive gaze coordinates, confined within a diameter of 1° of visual arc for a duration of 200 milliseconds or more (Noton & Stark, 1971). A K-means cluster analysis algorithm was developed and used to compute the number of fixations and fixation duration for feature (eyes, nose, mouth) and nonfeature (remaining facial regions) areas (see Latimer, 1988, for a detailed review of similar cluster analytic techniques for use with eye movement data). Finally, in addition to these quantitative indices, each scanning pattern was plotted for inspection and qualitative analysis. Preliminary analyses indicated no differences in scanpaths as a function of stimulus face gender, identity, or emotion portrayed. Hence, subsequent analyses were conducted with the eye movement data collapsed across these three variables. Typical scanpaths from phase I for three participants with autism and three controls are shown in Figure 1. Each symbol represents a data-sampling point (0.0167 second). The lines connecting these points trace the visual scanpath taken by the subject over the face stimulus. As can be seen in the figure, qualitative differences in the scanpaths of autistic and control participants were evident. As illustrated in the top panel of Figure 2, individuals in the autism group devoted a smaller percentage of time (M = 56.17%, SD = 7.08%) to examining the core features of the faces than did participants in the control group (M = 91.28%, SD = 6.66%). This effect appeared to be strongest for the percentage of time spent examining the eyes (autism: M = 50.87%, SD = 9.16%; control: M = 79.04%, SD = 6.49%). Group differences were smaller for the percentage of time spent on the nose (autism: M = 4.86%, SD = 4.79%; control: M = 10.63%, SD = 2.81%) and the mouth (autism: M = 0.43%, SD = 0.36%; control: M = 1.60%, SD = 1.02%). Similarly, as shown in the top panel of Table 1, autistic participants devoted a smaller percentage of their fixations to the core facial features during phase I (M = 71.98%, SE = 7.62%) than did control participants (M = 92.62%, SE = 3.15%). The largest group difference was observed for the percentage of fixations on the eyes (autism: M = 65.56%, SE = 5.39%; control: M = 81.64%, SE = 1.87%). The average number of fixations per face (autism: M = 4.28, SE = 0.12; control: M = 4.60, SE = 0.24) and the average fixation duration (autism: M = 0.35 second, SE = 0.01 second; control: M = 0.33 second, SE = 0.02 second) did not appear to differ as a function of group membership. Figure 3 shows typical scanpaths for three participants with autism and three controls from phase II of the experimental procedure. Qualitative differences in the scanpaths of autistic and control participants were once more evident and the qualitative characteristics of the scanpaths did not appear to change across the two phases of the experiment.
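The fixation criterion above (consecutive gaze coordinates confined within a 1° diameter for at least 200 ms, i.e., 12 samples at 60 Hz) can be expressed directly in code. The study itself used a K-means clustering algorithm; the greedy dispersion-based pass below is only a hypothetical illustration of the criterion, not a reconstruction of that algorithm, and all names are assumptions.

```python
import numpy as np

def detect_fixations(gaze_deg, rate_hz=60.0,
                     max_diameter_deg=1.0, min_duration_s=0.20):
    """Return (start, end, duration_s, centroid) tuples for runs of
    consecutive samples confined within `max_diameter_deg` that last
    at least `min_duration_s`."""
    min_samples = int(round(min_duration_s * rate_hz))  # 12 samples at 60 Hz
    fixations = []
    start, n = 0, len(gaze_deg)
    while start < n:
        end = start + 1
        # Grow the window while its bounding-box diagonal stays within
        # the allowed diameter (a conservative proxy for "confined
        # within a circle of that diameter").
        while end < n:
            window = gaze_deg[start:end + 1]
            extent = window.max(axis=0) - window.min(axis=0)
            if np.hypot(extent[0], extent[1]) > max_diameter_deg:
                break
            end += 1
        if end - start >= min_samples:
            run = gaze_deg[start:end]
            fixations.append((start, end, (end - start) / rate_hz,
                              run.mean(axis=0)))
        start = end
    return fixations
```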
As illustrated in the bottom panel of Figure 2, individuals in the autism group again spent a smaller percentage of time (M = 61.58%, SD = 19.42%) examining the core features of the faces than did participants in the control group (M = 83.22%, SD = 8.92%). This effect was once more particularly strong for the percentage of time spent examining the eyes (autism: M = 42.84%, SD = 18.52%; control: M = 56.28%, SD = 13.43%). Group differences were again evident but somewhat smaller for the percentage of time spent on the nose (autism: M = 10.76%, SD = 4.79%; control: M = 19.84%, SD = 5.88%) and the mouth (autism: M = 8.00%, SD = 5.80%; control: M = 7.14%, SD = 4.88%). As shown in the bottom panel of Table 1, autistic participants once more devoted a smaller percentage of their fixations to the core facial features (M = 65.06%, SE = 8.76%) during phase II than did control participants (M = 84.40%, SE = 3.23%). However, both groups appeared to devote fewer fixations to primary facial features during this phase of the experiment. During phase II, the largest group difference was observed for the percentage of fixations on the nose (autism: M = 12.36%, SE = 2.83%; control: M = 20.42%, SE = 3.86%). The average number of fixations per face (autism: M = 3.71, SE = 0.40; control: M = 4.01, SE = 0.24) and the average fixation duration (autism: M = 0.33 second, SE = 0.01 second; control: M = 0.33 second, SE = 0.01 second) did not appear to differ as a function of group membership. The observations of group differences for the percentage of time spent on core facial features were explored further in the context of a 2 (Group: autism vs. control) × 2 (Phase: I vs. II) mixed-design analysis of variance (ANOVA) procedure. The main effect of Group was significant, F(1, 8) = 19.37, p < .05. That is, across both phases of the experiment, autistic participants devoted a significantly smaller percentage of time to the examination of core feature regions compared with control participants (autism: M = 58.87%, SE = 4.56%; control: M = 87.25%, SE = 4.55%). The percentage of time spent examining core features did not differ as a function of procedural phase (phase I: M = 73.72%, SE = 2.17%; phase II: M = 72.40%, SE = 4.78%), F(1, 8) = 0.13, p > .05, and the Group × Phase interaction was not significant, F(1, 8) = 3.35, p > .05. After collapsing across the two experimental phases, one-way ANOVAs were calculated to explore group differences at the level of the individual core facial feature (i.e., eyes, nose, mouth). These analyses are summarized in Table 2 and indicated that control participants spent a greater proportion of their scanning time examining the eyes (autism: M = 46.86%, SE = 5.28%; control: M = 67.66%, SE = 4.16%) and the nose (autism: M = 7.81%, SE = 2.14%; control: M = 15.24%, SE = 1.56%) than did autistic participants. Differences in the percentage of time spent examining the mouth were not significant (autism: M = 4.22%, SE = 1.33%; control: M = 4.37%, SE = 1.17%). Observations of group ...
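For reference, the 2 (Group) × 2 (Phase) mixed design reported above maps directly onto standard statistical tooling. The sketch below shows one way to run such an ANOVA in Python with the pingouin package; the long-format table is filled with random placeholder values rather than the study's data, and the column names are assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical long format: one row per participant per phase, with the
# percentage of time on core features as the dependent variable. The
# values are placeholders for illustration only.
df = pd.DataFrame({
    "participant": np.tile(np.arange(10), 2),
    "group": np.tile(["autism"] * 5 + ["control"] * 5, 2),
    "phase": np.repeat(["I", "II"], 10),
    "pct_core": rng.uniform(40.0, 95.0, size=20),
})

# 2 (Group: between-subjects) x 2 (Phase: within-subjects) mixed ANOVA
aov = pg.mixed_anova(data=df, dv="pct_core", within="phase",
                     subject="participant", between="group")
print(aov)
```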

Similar publications

Chapter
Full-text available
We introduce a new face recognition/authentication method based on a new face modality: 3D shape of face. The paper is organized as follows: Section (2) reviews the recent progress in the 3D face recognition research field. Section (3) gives an overview of the proposed approach. In Section (4), we focus on the developed works for 21 2D vs. 3D face...
Article
Full-text available
Stress and abnormal pain experienced by drivers during driving are among the major causes of road accidents. Most of the existing systems focus on drivers being drowsy and monitoring fatigue. In this paper, an effective intelligent system for monitoring drivers' stress and pain from facial expressions is proposed. A novel method of detecting stress...
Conference Paper
Full-text available
This study investigates how young children between 4-6 years old interact with personified robots during a lying situation. To achieve this, a temptation resistance paradigm was used, in which children were instructed to not look at a toy (behind their back) while the instructor (a robot dog, a humanoid or a human) left the room. Results revealed t...
Article
Full-text available
In recent years, researchers have investigated the relationship between facial width-to-height ratio (FWHR) and a variety of threat and dominance behaviours. The majority of methods involved measuring FWHR from 2D photographs of faces. However, individuals can vary dramatically in their appearance across images, which poses an obvious problem for r...
Thesis
Full-text available
This study investigates how young children aged 4, 5, and 6 interact with personified robots in a lying situation. The research argues that children generally give more verbal responses to a robot dog and a humanlike robot than to humans. Examination reveals that the mean pitch of children differs between the robot condition...

Citations

... Faces represent one of the most prevalent visual stimuli in our daily lives, and the impaired processing of human faces stands as a prominent and consistent neuropsychological finding in individuals with autism spectrum disorders (ASD). Extensive research has documented abnormal facial scanning patterns in ASD (Klin et al. 2002; Pelphrey et al. 2002; Neumann et al. 2006; Spezio et al. 2007a; Spezio et al. 2007b; Kliemann et al. 2010). Moreover, individuals with ASD exhibit deficits in recognizing emotions from facial expressions (Law Smith et al. 2010; Philip et al. 2010; Wallace et al. 2011; Kennedy and Adolphs 2012; Wang and Adolphs 2017; Webster et al. 2021), making social trait judgments of faces (Yu et al. 2022; Cao et al. 2022a; Cao et al. 2023b) (see Yu et al. 2023 for a review), and paying attention to faces (Wang et al. 2014; Wang et al. 2015), further highlighting the complex interplay between face processing and social cognition in this population. ...
... The atypical encoding of facial features may explain the atypical facial scanning patterns in individuals with ASD. Numerous studies have consistently documented atypical eye movement patterns when individuals with ASD view faces (Klin et al. 2002; Pelphrey et al. 2002; Neumann et al. 2006; Spezio et al. 2007a; Spezio et al. 2007b; Kliemann et al. 2010). For instance, individuals with ASD tend to avoid fixating on the eye region of faces, instead displaying a preference to fixate on the mouth (Neumann et al. 2006). ...
... Moreover, when viewing static faces, they focus more on non-feature areas and less on core feature areas (e.g. eyes and mouth) compared to typical controls (Pelphrey et al. 2002), utilizing piecemeal rather than configural strategies (Dawson et al. 2005). Additionally, some research suggests active avoidance of eye fixation, impacting emotion recognition (Kliemann et al. 2010), while other studies indicate gaze indifference and insensitivity to social signals from others' eyes at the time of initial diagnosis (Moriuchi et al. 2017). ...
Article
Full-text available
Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.
... Understanding the possible existence of a visual inspection pattern and selective attention to emotional faces in healthy groups would be a significant advancement in this field, as dysfunctional visual inspection has been indicative of developmental psychopathologies, such as autism spectrum disorder [12], affective psychopathologies like schizophrenia [13], social phobia [14], and neurological impairments [15]. This understanding could guide clinical research or even aid in the development of diagnostic methods for these populations. ...
Article
Full-text available
Uncertainties and discrepant results in identifying crucial areas for emotional facial expression recognition may stem from the eye tracking data analysis methods used. Many studies employ parameters of analysis that predominantly prioritize the examination of the foveal vision angle, ignoring the potential influences of simultaneous parafoveal and peripheral information. To explore the possible underlying causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision (FV). Employing eye tracking and gaze contingency, we collected visual inspection and judgment data over 30 frontal face images, equally distributed among five emotions. Raw eye tracking data underwent Eye Movements Metrics and Visualizations (EyeMMV) processing. Accordingly, the visual inspection time, number of fixations, and fixation duration increased with the visual field restriction. Nevertheless, the accuracy showed significant differences among the NVR/FV and PFFV/FV groups, despite there being no difference in NVR/PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision. The results suggest that eye tracking data analysis methods should incorporate projection angles extending to at least the parafoveal level. Keywords: visual inspection; facial expression recognition; eye tracking; gaze contingency; moving window technique
... Numerous studies have indicated differences in visual attention between individuals with ASD and those with TD [16,169], and those differences are closely correlated with the social impairments characteristic of ASD [170-172]. In terms of gaze patterns, individuals with ASD tend to focus more on non-facial areas, such as the mouth, ears, or background, and less on the eye region [173]. Furthermore, the duration of eye contact is positively correlated with social skills and sensitivity to emotion perception. ...
Article
Full-text available
Objective: This study employs bibliometric and visual analysis to elucidate global research trends in Autism Spectrum Disorder (ASD) biomarkers, identify critical research focal points, and discuss the potential integration of diverse biomarker modalities for precise ASD assessment.
Methods: A comprehensive bibliometric analysis was conducted using data from the Web of Science Core Collection database until December 31, 2022. Visualization tools, including R, VOSviewer, CiteSpace, and gCLUTO, were utilized to examine collaborative networks, co-citation patterns, and keyword associations among countries, institutions, authors, journals, documents, and keywords.
Results: ASD biomarker research emerged in 2004, accumulating a corpus of 4348 documents by December 31, 2022. The United States, with 1574 publications and an H-index of 213, emerged as the most prolific and influential country. The University of California, Davis, contributed significantly with 346 publications and an H-index of 69, making it the leading institution. Concerning journals, the Journal of Autism and Developmental Disorders, Autism Research, and PLOS ONE were the top three publishers of ASD biomarker-related articles among a total of 1140 academic journals. Co-citation and keyword analyses revealed research hotspots in genetics, imaging, oxidative stress, neuroinflammation, gut microbiota, and eye tracking. Emerging topics included "DNA methylation," "eye tracking," "metabolomics," and "resting-state fMRI."
Conclusion: The field of ASD biomarker research is dynamically evolving. Future endeavors should prioritize individual stratification, methodological standardization, the harmonious integration of biomarker modalities, and longitudinal studies to advance the precision of ASD diagnosis and treatment.
... In the exploration of automatic emotion recognition in autism spectrum disorder (ASD), researchers have frequently utilized experimental approaches that involve presenting individuals with faces displaying various emotional expressions. One noteworthy study conducted by Pelphrey et al. [52] delved into the neural and behavioral responses exhibited by individuals with ASD while perceiving facial expressions. ...
Article
Full-text available
Technology has always represented the key to human progress. It is believed that the use of supportive technological mediators can facilitate teaching/learning processes and enable everyone to learn how to critically manage technology without being its slave or passive user while contributing to the collective well-being. Educational robotics is a new frontier for learning that can offer numerous benefits to students. The use of robots can offer the possibility of creating inclusive educational settings in which all students, regardless of their abilities or disabilities, can participate meaningfully. The article proposes an analysis of the evidence obtained from a systematic literature review with reference to general educational robotics and social robotics for emotion recognition. Finally, as a practical implementation of an educational robotic intervention on emotion recognition, the "Emorobot Project" as part of the EU-funded "Ecosystem of Innovation-Technopole of Rome" Project in NextGenerationEU will be presented. The project's aim is to foster the development of social skills in children with autism spectrum disorders through the creation of an open-source social robot that can recognize emotions. The project is intended to provide teachers with a supportive tool that allows them to design individual activities and later extend the activity to classmates. An educational robot can be used as a social mediator, a playmate during the learning phase that can help students develop social skills, build peer connection, reduce social isolation (one of the main difficulties of this disorder), and foster motivation and the acquisition of interpersonal skills through interaction and imitation. This can help ensure that all students have access to quality education and that no one is left behind.
... We conducted separate analyses for positively and negatively valenced facial expressions because previous research in expression labelling in autistic people has reported difficulty specific to negatively valenced expressions (e.g., Ashwin et al., 2006;Boraston et al., 2007;Pelphrey et al., 2002). Our approach task also allowed us to conduct separate analyses for genuine and posed expressions. ...
Article
Full-text available
People with autism and higher levels of autistic traits often have difficulty interpreting facial emotion. Research has commonly investigated the association between autistic traits and expression labeling ability. Here, we investigated the association between two relatively understudied abilities, namely, judging whether expressions reflect genuine emotion, and using expressions to make social approach judgements, in a nonclinical sample of undergraduates at an Australian university (N = 149; data collected during 2018). Autistic traits were associated with more difficulty discriminating genuineness and less typical social approach judgements. Importantly, we also investigated whether these associations could be explained by the co-occurring personality trait alexithymia, which describes a difficulty interpreting one’s own emotions. Alexithymia is hypothesized to be the source of many emotional difficulties experienced by autistic people and often accounts for expression labeling difficulties associated with autism and autistic traits. In contrast, the current results provided no evidence that alexithymia is associated with differences in genuineness discrimination and social approach judgements. Rather, differences varied as a function of individual differences in specific domains of autistic traits. More autistic-like social skills and communication predicted greater difficulty in genuineness discrimination, and more autistic-like social skills and attention to details and patterns predicted differences in approach judgements. These findings suggest that difficulties in these areas are likely to be better understood as features of the autism phenotype than of alexithymia. Finally, results highlight the importance of considering the authenticity of emotional expressions, with associations between differences in approach judgements being more pronounced for genuine emotional expressions.
... Researchers have consistently reported that autistic individuals spend less time looking at faces and eyes and instead focus more on objects or repetitive patterns in the environment and exhibit more dispersed gaze patterns in comparison to neurotypical individuals (Campbell et al., 2014; Chawarska et al., 2013; Jones & Klin, 2013; Jones et al., 2008). Similarly, autistic individuals are found to often exhibit atypical face scanning patterns, by fixating on different facial features in a less systematic manner, in comparison to neurotypical individuals (Falck-Ytter, 2008; Hernandez et al., 2009; Pelphrey et al., 2002; Reisinger et al., 2020; Shic et al., 2008, 2014; Speer et al., 2007; Wang et al., 2020). Researchers have also revealed that autistic individuals display a greater preference for nonsocial stimuli, such as geometric patterns, than social stimuli (Gale et al., 2019; Pierce et al., 2011, 2016; Robain et al., 2022). ...
Article
Full-text available
This meta-analysis examined correlations between eye-tracking measures of gaze behaviors manifested during dynamic salient social stimuli and behavioral assessment measures of social communication skills of young autistic children. We employed a multilevel model with random effects to perform three separate meta-analyses for correlation between social communication skills and (a) all gaze behaviors, (b) gaze duration, and (c) gaze transition. Subsequently, we performed meta-regression to assess the role of four moderators, including age, continuum of naturalness of stimuli, gaze metric, and area of interest, on correlation effect sizes that were heterogeneous at the population level. A total of 111 correlation coefficients from 17 studies for 1132 young autistic children or children with high-likelihood for autism (mean age range = 6–95 months) were included in this meta-analysis. The correlation effect sizes for all three meta-analyses were significant, supporting the relation between improved gaze behaviors and better social communication skills. In addition, age, gaze metric, and area of interest were significant moderators. This suggests the importance of identifying meaningful gaze behaviors related to social communication skills and the increasingly influential role of gaze behaviors in shaping social communication skills as young autistic children progress through the early childhood stage. The continuum of naturalness of stimuli, however, was revealed to trend towards having a significant moderating effect. Lastly, it is important to note the evidence of potential publication bias. Our findings are discussed in the context of early identification and intervention and unraveling the complex nature of autism.
... Individuals with ASD tend to fixate on the eyes of a face for shorter periods than typically developing (TD) individuals (3,4). This tendency has frequently been observed, especially in adults with ASD (5,6). Given that attention towards the eyes enhances facial emotion recognition (7), atypical visual patterns would influence the impaired recognition of facial emotions. ...
Article
Full-text available
Introduction: Individuals with Autism Spectrum Disorder (ASD) show atypical recognition of facial emotions, which has been suggested to stem from arousal and attention allocation. Recent studies have focused on the ability to perceive an average expression from multiple spatially different expressions. This study investigated the effect of autistic traits on temporal ensemble, that is, the perception of the average expression from multiple changing expressions.
Methods: We conducted a simplified temporal-ensemble task and analyzed behavioral responses, pupil size, and viewing times for eyes of a face. Participants with and without diagnosis of ASD viewed serial presentations of facial expressions that randomly switched between emotional and neutral. The temporal ratio of the emotional expressions was manipulated. The participants estimated the intensity of the facial emotions for the overall presentation.
Results: We obtained three major results: (a) many participants with ASD were less susceptible to the ratio of anger expression for temporal ensembles, (b) they produced significantly greater pupil size for angry expressions (within-participants comparison) and smaller pupil size for sad expressions (between-groups comparison), and (c) pupil size and viewing time to eyes were not correlated with the temporal ensemble.
Discussion: These results suggest atypical temporal integration of anger expression and arousal characteristics in individuals with ASD; however, the atypical integration is not fully explained by arousal or attentional allocation.
... Several hypotheses have been proposed to explain these differences between animals and humans, one of which is based on the atypical or dysfunctional visual information extraction of human stimuli as an explanatory pathway [30,80]. Indeed, the atypical visual exploration pattern of children with ASD of human faces (i.e., random or inefficient exploration [81]) could contribute to less efficient information intake and processing, possibly leading to the expression of inappropriate behaviors during interactions. On the other hand, this atypical visual exploration pattern does not seem to be present when the interaction partner is an animal. ...
... Studies using eye-tracking with social stimuli also showed that human faces were less attention-grabbing for people with ASD and that, when exploring a social scene, they spent less time looking at them while paying more attention to other elements of the visual scene (e.g., objects, bodies) [30,43,119]. Concerning human faces, people with ASD explored less socially relevant features of faces, and in particular, explored less the eye area than neurotypical people [81,111,120-122], but explored the mouth area more [38,123-125]. A study on neurotypical young adults reported that regardless of the species scanned (i.e., human, monkey, cat, or dog), the eyes were stared at the most frequently and inspected first, followed by the nose and mouth [126]. ...
Article
Full-text available
Autism spectrum disorder (ASD) is characterized by interaction and communication differences, entailing visual attention skill specificities. Interactions with animals, such as in animal-assisted interventions or with service dogs, have been shown to be beneficial for individuals with ASD. While interacting with humans poses challenges for them, engaging with animals appears to be different. One hypothesis suggests that differences in the visual attention of individuals with ASD to humans versus animals may contribute to these interaction differences. We propose a scoping review of the research on the visual attention to animals of youths with ASD. The objective is to review the methodologies and tools used to explore such questions, to summarize the main results, to explore which factors may contribute to the differences reported in the studies, and to deduce how youth with ASD observe animals. Utilizing strict inclusion criteria, we examined databases between 1942 and 2023, identifying 21 studies in international peer-reviewed journals. Three main themes were identified: attentional engagement and detection, visual exploration, and behavior. Collectively, our findings suggest that the visual attention of youths with ASD towards animals appears comparable to that of neurotypical peers, at least in 2D pictures (i.e., eye gaze patterns). Future studies should explore whether these results extend to real-life interactions.
... In recent years, some studies have started to focus on the spatio-temporal model of combining fixations and saccades, known as scanpath. Scanpath has shown high potential in the medical field for diagnosis and treatment, as it has been used as a useful tool for identifying people with schizophrenia [23] and ASD [24], [25]. Analysis of fixation sequences can reveal the cognitive strategies that drive eye movements and provide useful clues for diagnosing whether a subject has ASD or not. ...
Article
Full-text available
Autism spectrum disorder (ASD), one of the fastest-growing diseases in the world, is a group of neurodevelopmental disorders. Eye movement as a biomarker and clinical manifestation represents unconscious brain processes that can objectively disclose abnormal eye fixation of ASD. With the aid of eye-tracking technology, plentiful methods that identify ASD based on eye movements have been developed, but there are few works specifically for scanpaths. Scanpaths as visual representations describe eye movement dynamics on stimuli. In this paper, we propose a scanpath-based ASD detection method, which aims to learn the atypical visual pattern of ASD through continuous dynamic changes in gaze distribution. We extract four sequence features from scanpaths that represent changes, and the differences in feature space and gaze behavior patterns between ASD and typical development (TD) are explored based on two similarity measures, multimatch and dynamic time warping (DTW). The results indicate that ASD children show more individual specificity, while normal children tend to develop similar visual patterns. The most noticeable contrasts lie in the duration of attention and the spatial distribution of visual attention along the vertical direction. Classification is performed using a Long Short-Term Memory (LSTM) network with different structures and variants. The experimental results show that the LSTM network outperforms traditional machine learning methods.
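As context for the dynamic time warping (DTW) measure this abstract relies on, the sketch below computes a textbook DTW distance between two scanpaths represented as sequences of fixation coordinates. It is a generic illustration under assumed conventions (Euclidean local cost, hypothetical function name), not the cited paper's implementation.

```python
import numpy as np

def dtw_distance(path_a, path_b):
    """Dynamic time warping distance between two scanpaths.

    path_a, path_b : (n, 2) and (m, 2) arrays of fixation coordinates,
    e.g. in degrees of visual angle or normalized screen units.
    """
    n, m = len(path_a), len(path_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(path_a[i - 1] - path_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a step in path_a
                                 cost[i, j - 1],      # skip a step in path_b
                                 cost[i - 1, j - 1])  # align the two steps
    return float(cost[n, m])
```

A lower distance means the two scanpaths trace more similar spatio-temporal trajectories, which is what makes DTW useful for comparing gaze behavior between ASD and TD groups.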
... Although previous research in AV speech perception in autism has focused on school-aged children, a few studies have investigated AV speech perception in adults with ASD (Keane et al., 2010;Pelphrey et al., 2002;Saalasi et al., 2012). Keane et al. (2010) reported no differences in the processing of congruent and incongruent AV information between adults with high-functioning ASD compared to neurotypical peers, whereas Saalasi et al. (2012) reported weak AV integration on a McGurk task for adults with ASD when compared to neurotypical peers. ...
... Keane et al. (2010) reported no differences in the processing of congruent and incongruent AV information between adults with high-functioning ASD compared to neurotypical peers, whereas Saalasi et al. (2012) reported weak AV integration on a McGurk task for adults with ASD when compared to neurotypical peers. Further, Pelphrey et al. (2002) reported that adults with ASD do not attend to the key "T" elements of the faces (e.g., scanning from eye to eye and then down the center of the face to the mouth) as neurotypical adults do. ...
Article
Full-text available
Listening2Faces (L2F) is a therapeutic, application-based training program designed to improve audiovisual speech perception for persons with communication disorders. The purpose of this research was to investigate the feasibility of using the L2F application with young adults with autism and complex communication needs. Three young adults with autism and complex communication needs completed baseline assessments and participated in training sessions within the L2F application. Behavioral supports, including the use of cognitive picture rehearsal, were used to support engagement with the L2F application. Descriptive statistics were used to provide (1) an overview of the level of participation in L2F application with the use of behavioral supports and (2) general performance on L2F application for each participant. All three participants completed the initial auditory noise assessment (ANA) as well as 8 or more levels of the L2F application with varying accuracy levels. One participant completed the entire L2F program successfully. Several behavioral supports were used to facilitate participation; however, each individual demonstrated varied levels of engagement with the application. The L2F application may be a viable intervention tool to support audiovisual speech perception in persons with complex communication needs within a school-based setting. A review of behavioral supports and possible beneficial modifications to the L2F application for persons with complex communication needs are discussed.