Figure available from Current Neurology and Neuroscience Reports
a Cognitive model of face recognition. Modality-specific visual components of the recognition process are shown in red; nonvisual components involved in multimodal person semantic knowledge retrieval, emotion processing, and executive control are shown in blue. b The distributed neural network for face-identity processing. Core network components are shown in red (OFA, FFA, ATFA) and extended network components in blue (ATL, the amygdala, PFC). See text for details

Source publication
Article · Full-text available
Purpose of Review: Functional imaging studies, intracranial recordings, and lesion-deficit correlations in neurological patients have produced unique insights into the cognitive mechanisms and neural substrates of face recognition. In this review, we highlight recent advances in the field and integrate data from these complementary lines of research...

Citations

Article · Full-text available
End-stage kidney disease and mild cognitive impairment (ESKD-MCI) affect patients’ quality of life and long-term treatment outcomes. Clarifying the morphological changes from brain injuries in ESKD-MCI and their relationship with clinical features would aid the early identification of, and intervention in, MCI before it progresses to irreversible dementia. This study gathered data from 23 patients with ESKD-MCI, 24 patients with ESKD and no cognitive impairment (ESKD-NCI), and 27 healthy controls (HCs). Structural magnetic resonance imaging scans, cognitive assessments, and general clinical data were collected from all participants. Voxel-based morphometry analysis was performed to compare grey matter (GM) volume differences between the groups. The patients’ GM maps and clinical features were subjected to univariate regression to check for possible correlations. Patients with ESKD-MCI displayed significantly greater impairment in multiple cognitive domains, including global cognition, visuospatial and executive function, and memory, compared to patients with ESKD-NCI. Using a more liberal threshold (P < 0.001, uncorrected), we found that, compared to patients with ESKD-NCI, patients with ESKD-MCI exhibited clusters of regions with lower GM volumes, including the right hippocampus (HIP), parahippocampal gyrus (PHG), Rolandic operculum, and supramarginal gyrus. The volumes of the right HIP and PHG were negatively correlated with serum calcium levels. ESKD-MCI was associated with a subtle volume reduction of GM in several brain areas known to be involved in memory, language, and auditory information processing. We speculate that these slight morphometric impairments may be associated with disturbed calcium metabolism.
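As an illustration of the two analysis steps this abstract describes, the sketch below runs a voxelwise two-sample t-test between groups at an uncorrected threshold of p < 0.001 and then correlates a regional grey-matter volume with serum calcium. The arrays and values are simulated placeholders, not the study's data; a real VBM pipeline (e.g., SPM or FSL-VBM) would add spatial preprocessing and cluster-level inference.

```python
# Illustrative sketch: (1) voxelwise two-sample t-test at an uncorrected
# threshold, (2) correlation of a regional GM volume with serum calcium.
# All data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_mci, n_nci, n_voxels = 23, 24, 10_000              # group sizes from the study
gm_mci = rng.normal(0.48, 0.05, (n_mci, n_voxels))   # placeholder GM maps
gm_nci = rng.normal(0.50, 0.05, (n_nci, n_voxels))

# (1) voxelwise two-sample t-test, thresholded without correction
t_vals, p_vals = stats.ttest_ind(gm_mci, gm_nci, axis=0)
suprathreshold = (p_vals < 0.001) & (t_vals < 0)     # lower GM in ESKD-MCI
print(f"{suprathreshold.sum()} voxels below threshold")

# (2) univariate association between a regional volume and serum calcium
hip_volume = gm_mci.mean(axis=1)                     # stand-in for right HIP volume
serum_ca = rng.normal(2.3, 0.15, n_mci)              # placeholder lab values (mmol/L)
r, p = stats.pearsonr(hip_volume, serum_ca)
print(f"r = {r:.2f}, p = {p:.3f}")
```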
Article · Full-text available
Our faces display socially important sex and identity information. How perceptually independent are these facial characteristics? Here, we used a sex categorization task to investigate how changing faces in terms of either their sex or identity affects sex categorization of those faces, whether these manipulations affect sex categorization similarly when the original faces were personally familiar or unknown, and whether computational models trained for sex classification respond similarly to human observers. Our results show that varying faces along either the sex or identity dimension affects their sex categorization. When the sex was swapped (e.g., female faces were made to look male; Experiment 1), sex categorization performance differed from that with the original, unchanged faces, and significantly more so for people who were familiar with the original faces than for those who were not. When the identity of the faces was manipulated by caricaturing or anti-caricaturing them (manipulations that respectively augment or diminish idiosyncratic facial information; Experiment 2), sex categorization performance for caricatured, original, and anti-caricatured faces increased in that order, independently of face familiarity. Moreover, our face manipulations had different effects on computational models trained for sex classification than on human observers, eliciting different patterns of responses in the two. These results not only support the notion that the sex and identity of faces are processed integratively by human observers but also demonstrate that computational models of face categorization may not capture key characteristics of human face categorization.
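A minimal sketch of the kind of computational sex classifier such studies compare against human observers, assuming a linear model over face-embedding features. The embeddings, labels, and the simple scaling used as a stand-in for caricaturing are all invented for illustration; the study's actual models and face manipulations are not reproduced here.

```python
# Toy sex classifier over placeholder face embeddings, queried with
# "original" and "caricatured" versions of the same test faces.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 128))        # placeholder face embeddings
y_train = rng.integers(0, 2, size=200)       # 0 = female, 1 = male (placeholder)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_original = rng.normal(size=(20, 128))      # unmanipulated test faces
# toy stand-in for caricaturing: exaggeration away from an assumed average face
X_caricatured = X_original * 1.5

# compare model confidence for each version of the same faces
p_orig = clf.predict_proba(X_original).max(axis=1)
p_cari = clf.predict_proba(X_caricatured).max(axis=1)
print(f"mean confidence: original {p_orig.mean():.2f}, caricatured {p_cari.mean():.2f}")
```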
Article · Full-text available
Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and two behavioral tasks, namely the temporal discrimination task (TDT) and the face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions on monkey faces, while showing longer reaction times to negative facial expressions on human faces. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, whereas human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys showed a bias toward emotional expressions only when observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, shedding new light on inter-species communication through facial expressions between NHPs and humans.
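The viewing-pattern findings rest on region-of-interest analyses of fixation data; the sketch below shows one plausible version, computing the proportion of viewing time falling inside a hypothetical eye-region bounding box. Coordinates, durations, and ROI bounds are invented for illustration.

```python
# Compute how much of the total fixation time lands in an eye-region ROI.
# Fixation data and ROI bounds are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
fix_xy = rng.uniform(0, 400, size=(50, 2))       # fixation positions (px)
fix_dur = rng.uniform(80, 400, size=50)          # fixation durations (ms)

# hypothetical eye-region bounding box within a 400 x 400 px face image
x0, x1, y0, y1 = 100, 300, 120, 180
in_eyes = ((fix_xy[:, 0] >= x0) & (fix_xy[:, 0] <= x1) &
           (fix_xy[:, 1] >= y0) & (fix_xy[:, 1] <= y1))

eye_time = fix_dur[in_eyes].sum()
prop = eye_time / fix_dur.sum()
print(f"proportion of viewing time on the eyes: {prop:.2f}")
```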
Article · Full-text available
Faces are generally assumed to be processed holistically, that is, their features are represented in an integrated fashion. Similarly, pictorial representations of faces (e.g., drawings) have been shown to elicit holistic processing. Some researchers, however, have contested the concept of holistic face processing, suggesting that the perception of a face is no more than the sum of its individual parts. In the present study, we ask whether faces in paintings are processed holistically and, if so, whether this holistic processing is consistent across art styles along the realism–distortion dimension. Additionally, we seek to understand whether other factors, such as interest in art and exposure to art (e.g., visiting museums), as well as general visual recognition abilities, contribute to the potential holistic processing of faces in paintings. We found holistic face processing across stimulus sets, suggesting that holistic processing of faces in art occurs regardless of the characteristics of the art style (i.e., realism/distortion). Moreover, general interest in art showed a marginally negative correlation with holistic face processing. In contrast, general visual recognition abilities correlated positively with holistic processing, suggesting that increased capacity to process purely visual information benefits perceptual integration and grouping.
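The abstract does not name the specific holistic-processing measure, so the sketch below assumes a composite-style index (the performance difference between misaligned and aligned face halves) and correlates it with a general visual recognition score, mirroring the direction of the reported positive association. All values are simulated.

```python
# Assumed composite-style holistic index correlated with a recognition score.
# Data are simulated; the relation is built in to illustrate the analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 120
acc_aligned = rng.uniform(0.55, 0.85, n)                   # placeholder accuracies
acc_misaligned = acc_aligned + rng.uniform(0.02, 0.12, n)  # misaligned halves interfere less

holistic_index = acc_misaligned - acc_aligned              # larger = more holistic interference
recognition_ability = holistic_index * 2 + rng.normal(0, 0.1, n)  # toy positive relation

r, p = stats.pearsonr(holistic_index, recognition_ability)
print(f"r = {r:.2f}, p = {p:.3g}")
```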
Article · Full-text available
Background: Facial morphology changes with aging, resulting in an aged appearance that is a great matter of concern for people. However, it is not clear whether people perceive their own facial appearance accurately, in part because there are few methods to evaluate this. Aim: The aim of this study was first to establish an evaluation system for the gap between the self-perceived and actual status of aged facial appearance, and then to use this system to quantify the perception gap and clarify its mechanism. Method: Thirty-six middle-aged female volunteers were first asked to rate their facial aging-related morphology according to a 6-grade set of photos taken at a 45° angle from the front showing progressive stages of sagging severity, without looking either in a mirror or at photos of themselves (self- or “subjective” perception). Then they were shown photos of their face taken at a 45° angle from the front, and asked again to rate their sagging grade based on these photos (“objective” rating). In addition, facial photos taken from several angles from the front to the side were evaluated for sagging severity by trained evaluators. Results: This system for analyzing the perception gap revealed that self-perception of aged appearance was significantly younger than the actual status in three facial areas, namely the cheek, around the eyes, and the facial contour, and the gap corresponded to an age difference of as much as 8 years in middle-aged females. Trained evaluators found that the severity of sagging judged from photos taken from a frontal direction was significantly less than in photos of the same subject taken from side angles. This suggests that recognition of sagging is more difficult from the front, which is the direction from which people view their own face in daily life. Indeed, viewing photos taken from the side, a rare viewing angle of one's own face, increased the motivation to improve aged appearance in more than 70% of the subjects in a questionnaire survey. Conclusion: The results suggest that people perceive their own facial appearance as less aged than it actually is. The reason appears to be that viewing from the front, the usual viewing angle of one's own face in daily life, results in lower perceived sagging severity, likely due to reduced depth perception.
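A small sketch of the perception-gap logic described above: each participant contributes a self-rated (“subjective”) and a photo-based (“objective”) sagging grade on the same 6-grade scale, and the gap is tested with a paired t-test. The grades below are simulated; only the direction of the reported effect (self-ratings skewing younger) is mimicked.

```python
# Paired comparison of subjective vs. objective sagging grades.
# All grades are simulated placeholders on the study's 6-grade scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
objective = rng.integers(2, 6, size=36).astype(float)              # photo-based grades
subjective = np.clip(objective - rng.uniform(0.5, 1.5, 36), 1, 6)  # self-ratings skew younger

gap = objective - subjective              # positive gap = looks older than self-perceived
t, p = stats.ttest_rel(objective, subjective)
print(f"mean gap = {gap.mean():.2f} grades, t = {t:.2f}, p = {p:.4f}")
```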
Article · Full-text available
Recent studies have bolstered the important role of the cerebellum in high-level socio-affective functions. In particular, neuroscientific evidence shows that the posterior cerebellum is involved in social cognition and emotion processing, presumably through its involvement in temporal processing and in predicting the outcomes of social sequences. We used cerebellar transcranial random noise stimulation (ctRNS) targeting the posterior cerebellum to modulate the performance of 32 healthy participants during an emotion discrimination task including both static and dynamic facial expressions (i.e., faces transitioning from a neutral image to a happy or sad expression). Compared to the sham condition, ctRNS significantly reduced participants’ accuracy in discriminating static sad facial expressions but increased their accuracy in discriminating dynamic sad facial expressions. No effects emerged with happy faces. These findings may suggest the existence of two different circuits in the posterior cerebellum for the processing of negative emotional stimuli: a first, time-independent mechanism that can be selectively disrupted by ctRNS, and a second, time-dependent mechanism of predictive "sequence detection" that can be selectively enhanced by ctRNS. This latter mechanism might be included among the cerebellar operational models constantly engaged in the rapid adjustment of social predictions based on dynamic behavioral information inherent to others’ actions. We speculate that it might be one of the basic principles underlying the understanding of other individuals’ social and emotional behaviors during interactions.
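To make the reported crossover concrete, the sketch below tests within-participant accuracies under ctRNS versus sham separately for static and dynamic sad expressions with paired t-tests. The numbers are simulated to mimic the direction of the reported effects, not their size.

```python
# Paired tests for the sad-face crossover: lower static accuracy and
# higher dynamic accuracy under ctRNS. All accuracies are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 32
sham_static = rng.normal(0.80, 0.06, n)
ctrns_static = sham_static - rng.normal(0.05, 0.02, n)    # disrupted
sham_dynamic = rng.normal(0.75, 0.06, n)
ctrns_dynamic = sham_dynamic + rng.normal(0.05, 0.02, n)  # enhanced

for label, a, b in [("static", sham_static, ctrns_static),
                    ("dynamic", sham_dynamic, ctrns_dynamic)]:
    t, p = stats.ttest_rel(a, b)
    print(f"sad/{label}: t = {t:.2f}, p = {p:.4f}")
```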
Article
Background: Previous studies have found that patients with schizophrenia (SCZ), major depressive disorder (MDD), and bipolar disorder (BD) all show deficits in facial emotion recognition, but the similarities and differences among these deficits across social interaction situations remain unclear. The present study aimed to compare facial emotion recognition ability across three different conversational situations from a cross-diagnostic perspective. Methods: Thirty-three participants with SCZ, 35 participants with MDD, and 30 participants with BD were recruited, along with 31 healthy controls. A computer-based task was used to assess Facial Emotion Categorization (FEC) under three conversational situations (praise, blame, and inquiry). Results: In the "praise" situation, patients with SCZ, MDD, and BD were all slower than healthy controls to recognize anger. Among the three clinical groups, patients with SCZ recognized angry faces faster than those with MDD or BD on a continuum from happy to angry faces in the "inquiry" situation, while no significant difference was found between the latter two groups. In addition, no significant deficit was found in the percentage or threshold of angry-face recognition in any of the three patient groups. Conclusions: Our findings indicate that patients with SCZ, MDD, and BD share both common and distinct deficits in facial emotion recognition during social interactions, which may inform early screening and precise intervention for these mental disorders.
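As a sketch of the kind of group comparison behind these findings, the code below compares simulated reaction times for recognizing angry faces across the SCZ, MDD, BD, and control groups with a one-way ANOVA; follow-up pairwise tests would locate specific differences. All values are placeholders.

```python
# One-way ANOVA across four groups on simulated reaction times (ms).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rt_scz = rng.normal(900, 120, 33)    # group sizes from the study
rt_mdd = rng.normal(980, 120, 35)
rt_bd  = rng.normal(970, 120, 30)
rt_hc  = rng.normal(820, 100, 31)

f, p = stats.f_oneway(rt_scz, rt_mdd, rt_bd, rt_hc)
print(f"F = {f:.2f}, p = {p:.4f}")
```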