Article

Processing of gaze direction within the N170 / M170 time window: A combined EEG / MEG study


... While the N170 is sensitive to eyes as such (Itier & Batty, 2009), studies focusing on the N170 and gaze direction have reported contradictory results. Whereas some researchers failed to find any N170 differences between straight and averted gaze (Burra et al., 2018; Latinus et al., 2015; Taylor et al., 2001), others report a larger N170 (or M170, its magnetoencephalographic (MEG) counterpart) for an averted versus a straight gaze using heads either oriented frontally (Sato et al., 2008; Watanabe et al., 2002) or deviated to the side (Burra et al., 2017). Thus, EEG and MEG studies in human adults have so far failed to consistently confirm the role of gaze direction during face encoding as reflected by the N170 (for a comment about this inconsistency, see Burra, Mares, & Senju, 2019). ...
... To measure the relationship between attachment and eye gaze detection, we employed an adapted version of a previous study that assessed electrophysiological measures and gaze direction (Burra et al., 2017). Besides eye gaze, head orientation is another important factor in social attention (Wollaston, 1824), and gaze and head orientation are known to interact in behavioral effects such as the stare-in-the-crowd effect (Conty et al., 2006) and the better memorization of straight gaze (Vuilleumier et al., 2005). ...

... Furthermore, the N170 is enlarged for a straight gaze when the head is deviated as compared to frontal (Conty et al., 2007; Itier et al., 2007), although EEG seems to be less sensitive to this interaction than MEG (Burra et al., 2017). We therefore manipulated both eye gaze direction and head orientation in this study. ...
Article
Full-text available
Attachment theory suggests that interindividual differences in attachment security versus insecurity (anxiety and avoidance) contribute to the ways in which people perceive social emotional signals, particularly from the human face. Among different facial features, eye gaze conveys crucial information for social interaction, with a straight gaze triggering different cognitive and emotional processes as compared to an averted gaze. It remains unknown, however, how interindividual differences in attachment associate with early face encoding in the context of a straight versus averted gaze. Using electroencephalography (EEG) and recording event-related potentials (ERPs), specifically the N170 component, the present study (N = 50 healthy adults) measured how the characteristics of attachment anxiety and avoidance relate to the encoding of faces with respect to gaze direction and head orientation. Our findings reveal a significant relationship between gaze direction (irrespective of head orientation) and attachment anxiety on the interhemispheric (i.e. right) asymmetry of the N170 and thus provide evidence for an association between attachment anxiety and eye gaze processing during early visual face encoding.
... While the N170 is sensitive to eyes as such (Itier & Batty, 2009), studies focusing on the N170 and gaze direction have produced contradictory results. Whereas some researchers failed to find any N170 differences between straight and averted gaze (Burra, Framorando, & Pegna, 2018; Latinus et al., 2015; Taylor, Itier, Allison, & Edmonds, 2001), others report a larger N170 (or M170, its magnetoencephalographic (MEG) counterpart) for an averted versus a straight gaze using heads either oriented frontally (Sato, Kochiyama, Uono, & Yoshikawa, 2008; Watanabe, Miki, & Kakigi, 2002) or deviated to the side (Burra, Baker, & George, 2017). Note that a dissociation between straight and averted gaze, irrespective of head direction, was previously found at the offset of the N170 (Burra et al., 2017). ...

... Overall, EEG and MEG studies in human adults have so far failed to consistently confirm the role of gaze direction during face encoding as reflected by the N170 (for a comment about this inconsistency, see Burra, Mares, & Senju, 2019). ...
... To measure the relation between attachment orientation and eye gaze detection, we employed an adapted version of a previous study that assessed electrophysiological measures and gaze direction (Burra et al., 2017). In the employed task, participants are asked to categorise singly presented faces as male or female. ...
Preprint
Full-text available
Attachment theory suggests that interindividual differences in attachment security versus insecurity (anxiety and avoidance) contribute to the ways in which people perceive social emotional signals, particularly from the human face. Among different facial features, eye gaze conveys crucial information for social interaction, with a straight gaze triggering different emotional and cognitive processes as compared to an averted gaze. It remains unknown, however, how interindividual differences in attachment associate with early face encoding in the context of a straight versus averted gaze. Using electroencephalography (EEG) and recording event-related potentials (ERPs), specifically the N170 component, the present study (N = 50) measured how the characteristics of attachment anxiety and avoidance relate to the encoding of faces with respect to gaze direction and head orientation. Our findings reveal a significant relationship between gaze direction (irrespective of head orientation) and attachment anxiety on the interhemispheric (i.e. right) bias of the N170 and thus provide evidence for an influence of gaze on early visual face encoding mediated by attachment anxiety.
... A large proportion of studies have focused on the N170, a face-sensitive ERP component that occurs approximately 130-200 ms post face presentation over occipitotemporal sites and is thought to reflect the structural encoding of the face (Bentin et al., 1996; George et al., 1996; Eimer, 2000). Some have found this component to be larger for averted-gaze faces or averted gaze shifts (Puce et al., 2000; Watanabe et al., 2002; Itier et al., 2007; Latinus et al., 2015; Rossi et al., 2015), others have found it to be larger for direct-gaze static faces or direct gaze shifts (Watanabe et al., 2006; Conty et al., 2007; Pönkänen et al., 2010; Burra et al., 2017), and yet others have found no N170 gaze effect at all (Taylor et al., 2001; Schweinberger et al., 2007; Brefczynski-Lewis et al., 2011). Gaze modulations have also been reported before the N170, around 100-140 ms, with both greater amplitudes for direct than averted gaze (e.g., Burra et al., 2018) and greater amplitudes for averted than direct gaze (Schmitz et al., 2012). ...
... We should also note that it was surprising to find neither a main effect of gaze direction, nor an interaction between gaze and task, over posterior sites during the 130-200 ms window encompassing the N170, given past reports of gaze effects on this ERP component (Puce et al., 2000; Watanabe et al., 2002, 2006; Conty et al., 2007; Itier et al., 2007; George and Conty, 2008; Itier and Batty, 2009; Pönkänen et al., 2010; Latinus et al., 2015; Rossi et al., 2015; Burra et al., 2017). These previous reports have been quite mixed, with some finding enhanced N170 amplitudes in response to averted gaze (Puce et al., 2000; Watanabe et al., 2002; Itier et al., 2007; Latinus et al., 2015; Rossi et al., 2015), some to direct gaze (Watanabe et al., 2006; Conty et al., 2007; Pönkänen et al., 2010; Burra et al., 2017), and others, like the present study, finding no gaze effect at all (Taylor et al., 2001; Schweinberger et al., 2007; Brefczynski-Lewis et al., 2011). One possibility is that there is substantial variation in how gaze is processed at the individual level over these sites (the N170 itself can range in latency from 130 to 200 ms between individuals). ...
Article
Full-text available
The perception of eye-gaze is thought to be a key component of our everyday social interactions. While the neural correlates of direct and averted gaze processing have been investigated, there is little consensus about how these gaze directions may be processed differently as a function of the task being performed. In a within-subject design, we examined how perception of direct and averted gaze affected performance on tasks requiring participants to use directly available facial cues to infer the individuals’ emotional state (emotion discrimination), direction of attention (attention discrimination) and gender (gender discrimination). Neural activity was recorded throughout the three tasks using EEG, and ERPs time-locked to face onset were analyzed. Participants were most accurate at discriminating emotions with direct gaze faces, but most accurate at discriminating attention with averted gaze faces, while gender discrimination was not affected by gaze direction. At the neural level, direct and averted gaze elicited different patterns of activation depending on the task over frontal sites, from approximately 220–290 ms. More positive amplitudes were seen for direct than averted gaze in the emotion discrimination task. In contrast, more positive amplitudes were seen for averted gaze than for direct gaze in the gender discrimination task. These findings are among the first direct evidence that perceived gaze direction modulates neural activity differently depending on task demands, and that at the behavioral level, specific gaze directions functionally overlap with emotion and attention discrimination, precursors to more elaborated theory of mind processes.
... For instance, an observer's attention is directed by their perceived direction of gaze, which is determined, in part, by the brightness of the sclera on each side of the iris (Ando, 2004). The direction of perceived gaze also affects the N170, which is increased when gaze is directed at the viewer (e.g., Conty et al., 2007; Burra et al., 2017) and when it is dynamically averted (Rossi et al., 2015). Reversing the contrast of just the eyes disrupts the ability to determine gaze direction (Ricciardelli et al., 2000; Olk et al., 2008) and may account for previous findings. ...
... Although the specific impact of eye gaze direction on the N170 is inconsistent and may be task- and context-dependent (Hadders-Algra, 2022), evidence supports the general conclusion that information about gaze direction is extracted early in face processing. Burra et al. (2017) and others (see Hadders-Algra, 2022, for a recent review) reported an increase in N170 amplitude in response to briefly presented faces depicting direct compared to averted gaze, suggesting that face processing is sensitive to gaze direction. Rossi et al. (2015) further showed a larger N170 in response to dynamic shifts of gaze away from compared to toward the viewer, for real but not line-drawn faces, suggesting that the N170 is sensitive to low-level light-dark contrast. ...
Article
Full-text available
Previous research has demonstrated that reversing the contrast of the eye region, which includes the eyebrows, affects the N170 ERP. To selectively assess the impact of just the eyes, the present study evaluated the N170 in response to reversing contrast polarity of just the iris and sclera in upright and inverted face stimuli. Contrast reversal of the eyes increased the amplitude of the N170 for upright faces, but not for inverted faces, suggesting that the contrast of eyes is an important contributor to the N170 ERP.
... Furthermore, real faces generated a more prominent N170 component for averted gaze than for direct gaze (Watanabe et al., 2002). Burra et al. (2017) found an opposite, right-hemisphere-lateralized effect, with a greater N170 for direct gaze than for averted gaze. Some studies dissected whether early components are specific to the automatic orienting of attention by gaze cues. ...
Article
Full-text available
Keywords: joint attention, gaze cueing paradigm, eyetracking, mobile brain/body imaging, real-life situations. Joint attention has long been investigated using behavioral measures. However, there is a need to know more about humans' processing of gaze cueing, linking mind, brain, and body dynamics. This review aims at presenting past and current research concerning mechanisms of joint attention, encompassing electroencephalographic (EEG), oculomotor, heart rate (HR), facial electromyography, and skin conductance response (SCR) results, and at extracting the most important methodological factors that diversify effects and lead to partially inconsistent knowledge. Particular focus is given to the gaze-cueing effect. Literature analysis reveals four main experimental procedure factors diversifying results. Effects reported in the review show sensitivity to gaze cueing duration time, ecological accuracy, research paradigm, and contextual variables. Three levels of brain, body, and eye responses are proposed, matching the conceptual framework for systems engaged in the joint attention phenomenon. As a synthesis of the review, we present an integrative methodological scheme leading to the revealing of a holistic reaction pattern in gaze-cueing research. Analyzing one type of psychophysiological data gives limited and incomplete insight into the mechanism of imitation of eye movement in response to gaze cueing. In these cases, the oculomotor, cognitive, and affective processes are usually separated. The review indicates the need for simultaneous data collection and the analysis of joint brain/body/eye activity to understand the phenomenon of joint attention in complex and multi-level ways. As a new research paradigm, the mobile brain/body imaging (MoBI) framework allows capturing how a person responds to gaze cueing in realistic, real-life situations.
... Thus, participants who feel guilty are more likely to be influenced by the victim's eyes (Yu et al., 2017). Moreover, previous research has found that the superior temporal sulcus region, which is known to play a prominent role in gaze cueing (Burra et al., 2017), is also involved in the processing of guilt (Takahashi et al., 2004). ...
Preprint
Full-text available
Gaze direction can trigger social attentional orienting, characterised by faster detection of targets appearing at a gazed-at location compared with those appearing at other locations, known as the gaze-cueing effect. Here, we investigated whether a feeling of guilt established through prior interaction with a cueing face could modulate the gaze-cueing effect. Participants first completed a guilt-induction task using a modified dot-estimation paradigm to associate the feeling of guilt with a specific face, and that face was then used as the stimulus in a gaze-cueing task. The results showed that guilt-directed faces and control faces induce gaze-cueing effects of equal magnitude at a 200 ms stimulus onset asynchrony (SOA), while guilt-directed faces induce a smaller gaze-cueing effect than control faces at a 700 ms SOA. These findings provide the first evidence for a role of guilt in social attention triggered by eye gaze.
... Along the occipito-temporal visual stream, the STS plays a key role in the human face-perception system, but it is also one of the key components of the 'social brain'. Indeed, brain regions in and around the superior temporal sulcus of both hemispheres may be involved in the analysis of actual or implied facial movements and related cues that provide socially relevant information, such as emotional expression [24-28] and gaze direction [29]. ...
Article
Full-text available
Background Williams syndrome (WS) and Autism Spectrum Disorders (ASD) are neurodevelopmental conditions associated with atypical but opposite face-to-face interaction patterns: WS patients overly stare at others, ASD individuals escape eye contact. Whether these behaviors result from dissociable visual processes within the occipito-temporal pathways is unknown. Using high-density electroencephalography, multivariate signal processing algorithms and a protocol designed to identify and extract evoked activities sensitive to facial cues, we investigated how WS (N = 14), ASD (N = 14) and neurotypical subjects (N = 14) decode the information content of a face stimulus. Results We found two neural components in neurotypical participants, both strongest when the eye region was projected onto the subject's fovea, simulating a direct eye contact situation, and weakest over more distant regions, reaching a minimum when the focused region was outside the stimulus face. The first component peaks at 170 ms, an early signal known to be implicated in low-level face features. The second is identified later, 260 ms post-stimulus onset, and is implicated in decoding salient face social cues. Remarkably, both components were found distinctly impaired and preserved in WS and ASD. In WS, we could weakly decode the 170 ms signal based on our regressor relative to facial features, probably due to their relatively poor ability to process faces’ morphology, while the late 260 ms component was highly significant. The reverse pattern was observed in ASD participants, who showed neurotypical-like early 170 ms evoked activity but impaired late evoked 260 ms signal. Conclusions Our study reveals a dissociation between WS and ASD patients and points at different neural origins for their social impairments.
... Along the occipito-temporal visual stream, the STS plays a key role in the human face-perception system, but it is also one of the key components of the 'social brain'. Indeed, brain regions in and around the superior temporal sulcus of both hemispheres may be involved in the analysis of actual or implied facial movements and related cues that provide socially relevant information, such as emotional expression (Pitcher, Dilks, et al., 2011; Puce et al., 1998; Schobert et al., 2018; Winston et al., 2004) and gaze direction (Burra et al., 2017). Although establishing which cortical areas generate the N170 is problematic owing to source localization issues (Slotnick, 2004), electrophysiological studies suggest that the STS might be crucially involved in generating this ERP in response to face stimuli (Nguyen & Cunnington, 2014). ...
Preprint
Full-text available
Background: Williams syndrome (WS) and Autism Spectrum Disorders (ASD) are psychiatric conditions associated with atypical but opposite face-to-face interaction patterns: WS patients overly stare at others, ASD individuals escape eye contact. Whether these behaviors result from dissociable visual processes within the occipito-temporal pathways is unknown. Using high-density electroencephalography, multivariate signal processing algorithms and a protocol designed to identify and extract evoked activities sensitive to facial cues, we investigated how WS (N=14), ASD (N=14) and neurotypical subjects (N=14) decode the information content of a face stimulus. Results: We found two neural components in neurotypical participants, both strongest when the eye region was projected onto the subject's fovea, simulating a direct eye contact situation, and weakest over more distant regions, reaching a minimum when the focused region was outside the stimulus face. The first component peaks at 170ms, an early signal known to be implicated in low-level face features. The second is identified later, 260ms post-stimulus onset, and is implicated in decoding salient face social cues. Remarkably, both components were found distinctly impaired and preserved in WS and ASD. In WS, we could weakly decode the 170ms signal based on our regressor relative to facial features, probably due to their relatively poor ability to process faces’ morphology, while the late 260ms component was highly significant. The reverse pattern was observed in ASD participants, who showed neurotypical-like early 170ms evoked activity but impaired late evoked 260ms signal. Conclusions: Our study reveals a dissociation between WS and ASD patients and points at different neural origins for their social impairments.
... This suggests that the basic requirements for detecting the eyes are met quite soon after birth. In adults (aged 24 to 28 years in the studies of Burra, Baker, & George (2017), George et al. (2001), Kampe et al. (2001), and Taylor, Itier, Allison, & Edmonds (2001)), the perception of gaze direction appears more nuanced, with behavioral responses to a direct gaze being faster than responses to an averted gaze. These behavioral responses also seem to be modulated by the orientation of the head, since the perception of gaze direction is more evident with a frontal view than with a 30°-deviated view (Coelho et al., 2006; Conty et al., 2007). ...
Article
The eyes and the gaze are important stimuli for social interaction in humans. Impaired recognition of facial identity, facial emotions, and inference of the intentions of others may result from difficulties in extracting information relevant to the eye region, mainly the direction of gaze. Therefore, a review of these data is of interest. Behavioral data demonstrating the importance of the eye region and how humans respond to gaze direction are reviewed narratively, and several theoretical models on how visual information on gaze is processed are discussed to propose a unified hypothesis. Several issues that have not yet been investigated are identified. The authors tentatively suggest experiments that might help progress research in this area. The neural aspects are subsequently reviewed to best describe the low-level and higher-level visual information processing stages in the targeted subcortical and cortical areas. A specific neural network is proposed on the basis of the literature. Various gray areas, such as the temporality of the processing of visual information, the question of salience priority, and the coordination between the two hemispheres, remain unclear and require further investigations. Finally, disordered gaze direction detection mechanisms and their consequences on social cognition and behavior are discussed as key deficiencies in several conditions, such as autism spectrum disorder, 22q11.2 deletion, schizophrenia, and social anxiety disorder. This narrative review provides significant additional data showing that the detection and perception of someone’s gaze is an essential part of the development of our social brain.
... Along the occipito-temporal visual stream, the STS plays a key role in the human face-perception system, but it is also one of the key components of the 'social brain'. Indeed, brain regions in and around the superior temporal sulcus of both hemispheres may be involved in the analysis of actual or implied facial movements and related cues that provide socially relevant information, such as emotional expression [29-32] and gaze direction [33]. Although establishing which cortical areas generate the N170 is problematic owing to source localization issues [34], electrophysiological studies suggest that the STS might be crucially involved in generating this ERP in response to face stimuli [35]. ...
Preprint
Full-text available
Williams syndrome (WS) and Autism Spectrum Disorders (ASD) are psychiatric conditions associated with atypical but opposite face-to-face interaction patterns: WS patients overly stare at others, ASD individuals escape eye contact. Whether these behaviors result from dissociable visual processes within the occipito-temporal pathways is unknown. Using high-density electroencephalography, multivariate pattern classification and group blind source separation, we searched for face-related neural signals that could best discriminate WS (N = 14), ASD (N = 14) and neurotypical populations (N = 14). We found two peaks in neurotypical participants: the first at 170ms, an early signal known to be implicated in low-level face features, the second at 260ms, a late component implicated in decoding salient face social cues. The late 260ms signal varied as a function of the distance of the eyes in the face stimulus with respect to the viewers’ fovea, meaning that it was strongest when the eyes were projected on the fovea and weakest when projected in the retinal periphery. Remarkably, both components were found distinctly impaired and preserved in WS and ASD. In WS, we could weakly decode the 170ms signal, probably due to their relatively poor ability to process faces’ morphology, while the late 260ms component, shown to be eye-sensitive, was highly significant. The reverse pattern was observed in ASD participants, who showed neurotypical-like early 170ms evoked activity but impaired late evoked 260ms signal. Our study reveals a dissociation between WS and ASD patients and points at different neural origins for their social impairments.
... Along the occipito-temporal visual stream, the STS plays a key role in the human face-perception system, but it is also one of the key components of the 'social brain'. Indeed, brain regions in and around the superior temporal sulcus of both hemispheres may be involved in the analysis of actual or implied facial movements and related cues that provide socially relevant information, such as emotional expression [29-32] and gaze direction [33]. Although establishing which cortical areas generate the N170 is problematic owing to source localization issues [34], electrophysiological studies suggest that the STS might be crucially involved in generating this ERP in response to face stimuli [35]. ...
Preprint
Full-text available
Williams syndrome (WS) and Autism Spectrum Disorders (ASD) are psychiatric conditions associated with atypical but opposite face-to-face interaction patterns: WS patients overly stare at others, ASD individuals escape eye contact. Whether these behaviors result from dissociable visual processes within the occipito-temporal pathways is unknown. Using high-density electroencephalography, multivariate signal processing algorithms and a protocol designed to identify and extract evoked activities sensitive to facial cues, we investigated how WS (N=14), ASD (N=14) and neurotypical subjects (N=14) decode the information content of a face stimulus. We found two neural components in neurotypical participants, both strongest when the eye region was projected onto the subject’s fovea, simulating a direct eye contact situation, and weakest over more distant regions, reaching a minimum when the focused region was outside the stimulus face. The first component peaks at 170ms, an early signal known to be implicated in low-level face features. The second is identified later, 260ms post-stimulus onset, and is implicated in decoding salient face social cues. Remarkably, both components were found distinctly impaired and preserved in WS and ASD. In WS, we could weakly decode the 170ms signal based on our regressor relative to facial features, probably due to their relatively poor ability to process faces’ morphology, while the late 260ms component was highly significant. The reverse pattern was observed in ASD participants, who showed neurotypical-like early 170ms evoked activity but impaired late evoked 260ms signal. Our study reveals a dissociation between WS and ASD patients and points at different neural origins for their social impairments.
... Some studies have shown a greater N170 elicited by observed gaze shifts away from participants compared with gaze shifts toward participants (Latinus et al., 2014), some the opposite effect (e.g., Conty, N'Diaye, Tijus, & George, 2007), and others no modulation at all (e.g., Myllyneva & Hietanen, 2016). More recently, researchers using combined EEG and magnetoencephalography (MEG) found that the N170 was greater for observed direct gaze than for averted gaze only in the right hemisphere, for both frontal and deviated head presentations (Burra, Baker, & George, 2017). Contextual factors such as task and stimulus types could explain these disparate findings, and because many N170 studies have employed only one task, it is not possible to disentangle stimulus effects from task effects. ...
Article
Full-text available
When two people look at the same object in the environment and are aware of each other's attentional state, they find themselves in a shared-attention episode. This can occur through intentional or incidental signaling and, in either case, causes an exchange of information between the two parties about the environment and each other's mental states. In this article, we give an overview of what is known about the building blocks of shared attention (gaze perception and joint attention) and focus on bringing to bear new findings on the initiation of shared attention that complement knowledge about gaze following and incorporate new insights from research into the sense of agency. We also present a neurocognitive model, incorporating first-, second-, and third-order social cognitive processes (the shared-attention system, or SAS), building on previous models and approaches. The SAS model aims to encompass perceptual, cognitive, and affective processes that contribute to and follow on from the establishment of shared attention. These processes include fundamental components of social cognition such as reward, affective evaluation, agency, empathy, and theory of mind.
... To maximize test power and resolve statistical activations in space, we performed the statistical analysis of the source-space data within the time windows where significant activations were detected in the sensor-space data. A similar approach has been used in recent MEG studies [67][68][69] . A paired spatiotemporal cluster-corrected permutation test with a cluster-inclusion threshold of p < 0.01 (uncorrected p value, two-tailed test) was performed to compare the conditions. ...
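The cluster-level logic behind such a permutation test can be sketched compactly. The following is a minimal, hypothetical illustration (not the authors' pipeline): a paired, sign-flip permutation test that forms clusters over a single time dimension and compares the observed maximum cluster mass against a permutation null. The function name and parameters are invented for illustration.

```python
import numpy as np

def paired_cluster_permutation_test(a, b, n_perm=1000, cluster_thresh=2.0, seed=0):
    """Paired cluster-based permutation test over one dimension (e.g. time).

    a, b: arrays of shape (n_subjects, n_times), one row per subject.
    Clusters are contiguous runs of time points whose paired t-value
    exceeds cluster_thresh (in absolute value); the maximum cluster mass
    is compared against a sign-flip null distribution.
    """
    rng = np.random.default_rng(seed)
    d = a - b                      # paired condition differences
    n = d.shape[0]

    def tvals(x):
        # One-sample t-values against zero at each time point.
        return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(n))

    def max_cluster_mass(t):
        # Sum |t| over each contiguous supra-threshold run; keep the max.
        above = np.abs(t) > cluster_thresh
        best = cur = 0.0
        for is_above, tv in zip(above, np.abs(t)):
            cur = cur + tv if is_above else 0.0
            best = max(best, cur)
        return best

    t_obs = tvals(d)
    obs_mass = max_cluster_mass(t_obs)

    # Null distribution: randomly flip the sign of each subject's difference,
    # which is valid under the null hypothesis of exchangeable conditions.
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))
        null[i] = max_cluster_mass(tvals(d * flips))

    p = (np.sum(null >= obs_mass) + 1) / (n_perm + 1)
    return t_obs, obs_mass, p
```

Real MEG analyses of the kind described above cluster jointly over time and sensor/source space, typically with dedicated tooling (e.g. MNE-Python's spatio-temporal cluster permutation functions) rather than hand-rolled code like this sketch.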
Article
Full-text available
People often change their beliefs by succumbing to an opinion of others. Such changes are often referred to as effects of social influence. While some previous studies have focused on the reinforcement learning mechanisms of social influence or on its internalization, others have reported evidence of changes in sensory processing evoked by social influence of peer groups. In this study, we used magnetoencephalographic (MEG) source imaging to further investigate the long-term effects of agreement and disagreement with the peer group. The study was composed of two sessions. During the first session, participants rated the trustworthiness of faces and subsequently learned group rating of each face. In the first session, a neural marker of an immediate mismatch between individual and group opinions was found in the posterior cingulate cortex, an area involved in conflict-monitoring and reinforcement learning. To identify the neural correlates of the long-lasting effect of the group opinion, we analysed MEG activity while participants rated faces during the second session. We found MEG traces of past disagreement or agreement with the peers at the parietal cortices 230 ms after the face onset. The neural activity of the superior parietal lobule, intraparietal sulcus, and precuneus was significantly stronger when the participant’s rating had previously differed from the ratings of the peers. The early MEG correlates of disagreement with the majority were followed by activity in the orbitofrontal cortex 320 ms after the face onset. Altogether, the results reveal the temporal dynamics of the neural mechanism of long-term effects of disagreement with the peer group: early signatures of modified face processing were followed by later markers of long-term social influence on the valuation process at the ventromedial prefrontal cortex.
... Stimuli consisted of 40 face stimuli (20 men and 20 women) which were selected from a database of digitized color portraits of young adult faces collected by George and colleagues (e.g. Burra, Baker, & George, 2017;Burra, Framorando, & Pegna, 2018;George, Driver, & Dolan, 2001;Latinus et al., 2015;Vuilleumier, George, Lister, Armony, & Driver, 2005). All faces had a neutral expression and were unknown to the participants. ...
Preprint
Full-text available
Direct gaze is an important and highly salient social signal with multiple effects on cognitive processes and behavior. It is disputed whether the effect of direct gaze is caused by attentional capture or increased arousal. Time estimation may provide an answer because attentional capture predicts an underestimation of time whereas arousal predicts an overestimation. In a temporal bisection task, observers were required to classify the duration of a stimulus as short or long. Stimulus duration was randomly selected between 988 and 1479 ms. When gaze was directed at the observer, participants underestimated stimulus duration. Critically, this effect was limited to dynamic stimuli where gaze appeared to move toward or away from the participant. The underestimation with direct gaze was present with stimuli showing a full face, but also with stimuli showing only the eye region. In contrast, the underestimation was absent with static pictures of full faces. The underestimation persisted with dynamic gaze shifts for inverted faces and eye-like stimuli. However, the effect vanished when nonfigurative stimuli were presented. Overall, we show that direct gaze biases temporal perception with dynamic gaze shifts, but not with static gaze. The underestimation is consistent with the idea that direct gaze captures attention but provides no support for increased arousal with direct gaze. Critically, effects of direct gaze depended on motion, which is common in naturalistic scenes. Therefore, the ecological validity of stimuli needs to be given more consideration in the study of social attention.
... Some found that static averted-gaze faces or dynamic averted gaze shifts elicited a larger N170 amplitude than direct-gaze faces or gaze shifts (Watanabe et al., 2002;Itier et al., 2007;Puce et al., 2000;Latinus et al., 2015;Rossi, Parada, Latinus, & Puce, 2015; see also Caruana et al., 2014 with intracranial recordings). Others, in contrast, found larger N170s for direct than averted gaze in static faces or gaze shifts (Conty et al., 2007;Pönkänen, Alhoniemi, Leppänen, & Hietanen, 2010;Burra, Baker, & George, 2017). Yet others found no gaze modulations of the N170 (Taylor, Itier, Allison, & Edmonds, 2001;Schweinberger, Kloth, & Jenkins, 2007;Brefczynski-Lewis, Berrebi, McNeely, Prostko, & Puce, 2011). ...
Article
Most face processing research has investigated how we perceive faces presented by themselves, but we view faces everyday within a rich social context. Recent ERP research has demonstrated that context cues, including self-relevance and valence, impact electrocortical and emotional responses to neutral faces. However, the time-course of these effects is still unclear, and it is unknown whether these effects interact with the face gaze direction, a cue that inherently contains self-referential information and triggers emotional responses. We primed direct and averted gaze neutral faces (gaze manipulation) with contextual sentences that contained positive or negative opinions (valence manipulation) about the participants or someone else (self-relevance manipulation). In each trial, participants rated how positive or negative, and how affectively aroused, the face made them feel. Eye-tracking ensured sentence reading and face fixation while ERPs were recorded to face presentations. Faces put into self-relevant contexts were more arousing than those in other-relevant contexts, and elicited ERP differences from 150-750 ms post-face, encompassing EPN and LPP components. Self-relevance interacted with valence at both the behavioural and ERP level starting 150 ms post-face. Finally, faces put into positive, self-referential contexts elicited different N170 ERP amplitudes depending on gaze direction. Behaviourally, direct gaze elicited more positive valence ratings than averted gaze during positive, self-referential contexts. Thus, self-relevance and valence contextual cues impact visual perception of neutral faces and interact with gaze direction during the earliest stages of face processing. The results highlight the importance of studying face processing within contexts mimicking the complexities of real world interactions.
Article
We are rarely exposed to faces in the absence of situational context. Previous ERP research using faces primed with contextual sentences suggests that valence and self-relevance modulate electrocortical responses to faces. However, the time-course of these effects is unclear and no studies have investigated whether they interact with another key sign of self-relevance: whether the face is looking at or away from the participant. We used emotional sentences to vary the context within which neutral faces were placed. These sentences referred to the face having a positive or negative opinion (valence manipulation) of the participant or of someone else (self-relevance manipulation). Participants read each sentence before viewing the face that the sentence referred to, and faces had either direct or averted gaze (gaze manipulation). Eye-tracking was used to ensure that participants read the sentences and enforced fixation to the face, while ERPs were recorded. In a preliminary sample of 15, mean amplitude analyses of 100ms time-windows from 150-750ms after face onset were performed on a subset of occipito-temporal electrodes to track the time-course of contextual modulation. Self-relevance elicited a larger amplitude response for other versus self-relevant context on the right hemisphere, consistently from 250 to 650ms. Gaze direction interacted with valence consistently from 250 to 750ms for faces in the other-relevant, but not self-relevant context. In the other-relevant positive context, direct gaze produced a smaller amplitude response than averted gaze. In contrast, direct gaze produced a larger amplitude response than averted gaze in the other-relevant negative context. Participants also rated faces in a self-relevant context as more arousing than faces in the other-relevant context, suggesting a potential arousal-based mechanism for self-relevance effects. 
These results indicate that the contextual cues of self-relevance and valence impact perception of neutral faces in a complex way depending on gaze direction. Meeting abstract presented at VSS 2016
Article
Background: Patients with schizophrenia show abnormal gaze processing, which is associated with social dysfunction. These abnormalities are linked to aberrant connectivity among brain regions involved in visual processing, social cognition, and cognitive control. In this study, we investigated 1) how effective connectivity during gaze processing is disrupted in schizophrenia, and 2) how this might contribute to social dysfunction and clinical symptoms. Methods: Thirty-nine patients with schizophrenia/schizoaffective disorder (SZ) and 33 healthy controls (HC) completed an eye gaze processing task during fMRI. Participants viewed faces with different gaze angles and performed explicit and implicit gaze processing. Four brain regions, the secondary visual cortex (Vis), posterior superior temporal sulcus (pSTS), inferior parietal lobule (IPL), and posterior medial frontal cortex (pMFC), were identified as nodes for dynamic causal modeling analysis. Results: SZ and HC showed similar model structures for general gaze processing. Explicit gaze discrimination led to changes in effective connectivity, including stronger excitatory, bottom-up connections from Vis to pSTS and IPL and inhibitory, top-down connections from pMFC to Vis. Group differences in top-down modulation from pMFC to pSTS and IPL were noted, such that these inhibitory connections were attenuated in HC while further strengthened in SZ. Connectivity was associated with social dysfunction and symptom severity. Discussion: SZ showed notably stronger top-down inhibition during explicit gaze discrimination, which was associated with more social dysfunction but less severe symptoms among patients. Findings help pinpoint neural mechanisms of aberrant gaze processing and may serve as future targets for interventions that combine neuromodulation with social-cognitive training.
Article
Direction of another person's eye gaze provides crucial information about their attention and intentions, which is essential for effective social interaction. Event-related potential (ERP) measures offer precise temporal tracking of neural processes related to gaze perception. While the sensitivity of the ERP component N170 to face processing is broadly agreed upon, research on the effect of gaze direction on this component has thus far been inconsistent. Here, we systematically reviewed the literature on the sensitivity of the N170 to gaze direction. We analysed whether four factors known to affect the face N170 (i.e., emotion, face orientation, task demand, and stimulus motion) modulated the effect of gaze direction. N170 sensitivity to gaze was reported most often in studies that involved deviated faces and dynamic stimuli, and that used explicit tasks directly related to gaze or face processing. The present review provides a much-needed summary of the literature to date, highlighting the complexity of the effect of gaze direction on the N170 component and the need for systematic studies investigating combinations of these factors.
Article
Looking at someone's eyes is thought to be important for affective theory of mind (aTOM), our ability to infer their emotional state. However, it is unknown whether an individual's gaze direction influences our aTOM judgements and what the time course of this influence might be. We presented participants with sentences describing individuals in positive, negative or neutral scenarios, followed by direct or averted gaze neutral face pictures of those individuals. Participants made aTOM judgements about each person's mental state, including their affective valence and arousal, and we investigated whether the face gaze direction impacted those judgements. Participants rated that gazers were feeling more positive when they displayed direct gaze as opposed to averted gaze, and that they were feeling more aroused during negative contexts when gaze was averted as opposed to direct. Event-related potentials associated with face perception and affective processing were examined using mass-univariate analyses to track the time-course of this eye-gaze and affective processing interaction at a neural level. Both positive and negative trials were differentiated from neutral trials at many stages of processing. This included the early N200 and EPN components, believed to reflect automatic activation of emotion-processing areas and attentional selection, respectively. This also included the later P300 and LPP components, thought to reflect elaborative cognitive appraisal of emotional content. Critically, sentence valence and gaze direction interacted over these later components, which may reflect the incorporation of eye-gaze in the cognitive evaluation of another's emotional state. The results suggest that gaze perception directly impacts aTOM processes, and that altered eye-gaze processing in clinical populations may contribute to associated aTOM impairments.
Article
Little is known about how perceived gaze direction and head orientation may influence human categorization of visual stimuli as faces. To address this question, a sequence of unsegmented natural images, each containing a random face or a non-face object, was presented in rapid succession (stimulus duration: 91.7 ms per image), during which human observers were instructed to respond immediately to every face presentation. Faces differing in gaze and head orientation in 7 combinations – full-front views with perceived gaze (1) directed at the observer, (2) averted to the left, or (3) averted to the right, left ¾ side views with (4) direct gaze or (5) averted gaze, and right ¾ side views with (6) direct gaze or (7) averted gaze – were presented randomly throughout the sequence. We found highly accurate and rapid behavioural responses to all kinds of faces. Crucially, perceived gaze direction and head orientation had comparable, non-interactive effects on response times: on average, direct gaze was responded to 48 ms faster than averted gaze, and full-front views 48 ms faster than ¾ side views. Full-front faces with direct gaze thus enjoyed an additive speed advantage of 96 ms over ¾ side-view faces with averted gaze. The results reveal that the effects of perceived gaze direction and head orientation on the speed of face categorization probably depend on the degree of social relevance of the face to the viewer.
Article
Full-text available
Gaze directed at the observer (direct gaze) is an important and highly salient social signal with multiple effects on cognitive processes and behavior. It is disputed whether the effect of direct gaze is caused by attentional capture or increased arousal. Time estimation may provide an answer because attentional capture predicts an underestimation of time whereas arousal predicts an overestimation. In a temporal bisection task, observers were required to classify the duration of a stimulus as short or long. Stimulus duration was selected randomly between 988 and 1479 ms. When gaze was directed at the observer, participants underestimated stimulus duration, suggesting that effects of direct gaze are caused by attentional capture, not increased arousal. Critically, this effect was limited to dynamic stimuli where gaze appeared to move toward the participant. The underestimation was present with stimuli showing a full face, but also with stimuli showing only the eye region, inverted faces and high-contrast eye-like stimuli. However, it was absent with static pictures of full faces and dynamic nonfigurative stimuli. Because the effect of direct gaze depended on motion, which is common in naturalistic scenes, more consideration needs to be given to the ecological validity of stimuli in the study of social attention.
Article
Full-text available
Looking at the eyes informs us about the thoughts and emotions of those around us, and impacts our own emotional state. However, it is unknown how perceiving direct and averted gaze impacts our ability to share the gazer's positive and negative emotions, abilities referred to as positive and negative affective empathy. We presented 44 participants with contextual sentences describing positive, negative and neutral events happening to other people (e.g. “Her newborn was saved/killed/fed yesterday afternoon.”). These were designed to elicit positive, negative, or little to no empathy, and were followed by direct or averted gaze images of the individuals described. Participants rated their affective empathy for the individual and their own emotional valence on each trial. Event-related potentials time-locked to face-onset and associated with empathy and emotional processing were recorded to investigate whether they were modulated by gaze direction. Relative to averted gaze, direct gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings. A similar pattern was found at the neural level, using robust mass-univariate statistics. The N100, thought to reflect an automatic activation of emotion areas, was modulated by gaze in the affective empathy conditions, with opposite effect directions in positive and negative conditions. The P200, an ERP component sensitive to positive stimuli, was modulated by gaze direction only in the positive empathy condition. Positive and negative trials were processed similarly at the early N200 processing stage, but later diverged, with only negative trials modulating the EPN, P300 and LPP components. These results suggest that positive and negative affective empathy are associated with distinct time-courses, and that perceived gaze direction uniquely modulates positive empathy, highlighting the importance of studying empathy with face stimuli.
Article
Background Abnormal eye gaze perception is related to symptoms and social functioning in schizophrenia. However, little is known about the brain network mechanisms underlying these abnormalities. Here, we employed dynamic causal modeling (DCM) of fMRI data to discover aberrant effective connectivity within networks associated with eye gaze processing in schizophrenia. Methods Twenty-seven patients (schizophrenia/schizoaffective disorder, SZ) and 22 healthy controls (HC) completed an eye gaze processing task during fMRI. Participants viewed faces with different gaze angles and performed explicit gaze discrimination (Gaze: “Looking at you?” yes/no) or implicit gaze processing (Gender: “male or female?”). Four brain regions, the secondary visual cortex (Vis), posterior superior temporal sulcus (pSTS), inferior parietal lobule (IPL), and posterior medial frontal cortex (pMFC), were identified as nodes for subsequent DCM analysis. Results SZ and HC showed similar generative model structure, but SZ showed altered connectivity for specific self-connections, inter-regional connections during all gaze processing (reduced excitatory bottom-up and enhanced inhibitory top-down connections), and modulation by explicit gaze discrimination (increased frontal inhibition of visual cortex). Altered effective connectivity was significantly associated with poorer social cognition and functioning. Conclusions General gaze processing in SZ is associated with distributed cortical dysfunctions and bidirectional connectivity between regions, while explicit gaze discrimination involves predominantly top-down abnormalities in the visual system. These results suggest plausible neural mechanisms underpinning gaze processing deficits and may serve as biomarkers for intervention.
Article
Full-text available
Paying attention to faces is of special interest for humans as well as for scientific research. The experimental manipulation of facial information offers an ecologically valid approach to investigate emotion, attention and social functioning. Humans are highly specialized in face perception, and event-related brain potentials (ERPs) provide insights into the temporal dynamics of the neuronal mechanisms involved. Here, we summarize ERP research from the last decade examining the processing of emotional compared to neutral facial expressions along the visual processing stream. A particular focus lies on exploring the impact of attention tasks on early (P1, N170), mid-latency (P2, EPN) and late (P3, LPP) stages of processing. This review systematizes facial emotion effects as a function of different attention tasks: 1) when faces serve as mere distractors, 2) during passive viewing designs, 3) when attention is directed at faces in general, and 4) when attention is paid to facial expressions. We find fearful and angry expressions to reliably modulate the N170, EPN, and LPP components, the latter benefiting from attention directed at the emotional facial expression.
Article
Full-text available
Gender categorisation of human faces is facilitated when gaze is directed toward the observer (i.e., a direct gaze), compared to situations where gaze is averted or the eyes are closed (Macrae, Hood, Milne, Rowe, & Mason, 2002). However, the temporal dynamics underlying this phenomenon remain to some extent unknown. Here, we used electroencephalography (EEG) to assess the neural correlates of this effect, focusing on the event-related potential (ERP) components known to be sensitive to gaze perception, i.e., P1, N170, and P3b. We first replicated the seminal findings of Macrae et al. (Experiment 1) regarding facilitated gender discrimination, and subsequently measured the underlying neural responses. Our data revealed an early preferential processing of direct gaze as compared to averted gaze and closed eyes at the P1, which reverberated at the P3b (Experiment 2). Critically, using the same material, we failed to reproduce these effects when gender categorisation was not required (Experiment 3). Taken together, our data confirm that direct gaze enhances both early and late cortical responses to face processing, although this effect appears to be task-dependent, especially at P1 level.
Article
Full-text available
Across three experiments, we examined the efficacy of three cues from the human body—body orientation, head turning, and eye-gaze direction—to shift an observer’s attention in space. Using a modified Posner cueing paradigm, we replicate the previous findings of gender differences in the gaze-cueing effect whereby female but not male participants responded significantly faster to validly cued than to invalidly cued targets. In contrast to the previous studies, we report a robust cueing effect for both male and female participants when head turning direction was used as the central cue, whereas oriented bodies proved ineffectual as cues to attention for both males and females. These results are discussed with reference to the time course of central cueing effects, gender differences in spatial attention, and current models of how cues from the human body are combined to judge another person’s direction of attention.
Article
Full-text available
This study aimed at investigating the conditions under which eyes with a straight gaze capture attention more than eyes with an averted gaze, a phenomenon called the stare-in-the-crowd effect. In Experiment 1, we measured attentional capture by distractor faces with either straight or averted gaze that were shown among faces with closed eyes. Gaze direction of the distractor face was irrelevant because participants searched for a tilted face and indicated its gender. The presence of the distractor face with open eyes resulted in slower reaction times, but gaze direction had no effect, suggesting that straight gaze does not result in more involuntary attentional capture than averted gaze. In three further experiments with the same stimuli, the gaze direction of the target, and not the distractor, was varied. Better performance with straight than averted gaze of the target face was observed when the gaze direction or gender of the target face had to be discriminated. However, no difference between straight and averted gaze was observed when only the presence of a face with open eyes had to be detected. Thus, the stare-in-the-crowd effect is only observed when eye gaze is selected as part of the target, and only when features of the face have to be discriminated. Our findings suggest that the preference for straight gaze bears on target-related processes rather than on attentional capture per se.
Article
Full-text available
Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces, e.g. the N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: transitions away from and toward the observer (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than to gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and ERP data indicate that performing social judgments enhances the saliency of gaze motion toward the observer, even motion that did not result in eye contact. These data and those of previous studies suggest two modes of processing visual information: a 'Default mode' that may focus on spatial information, and a 'Socially Aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified.
Article
Full-text available
Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards), while continuous electroencephalographic (EEG) activity was recorded. EEG data from two temporo-occipital clusters of 9 electrodes in each hemisphere, where N170 activity is known to be maximal, were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from Event-Related Spectral Perturbations (ERSPs), were measured. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to their respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150–350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.
Article
Full-text available
Contrast weights for factorial designs that include a group factor (e.g. 2 different groups of subjects) are often a cause of confusion because the contrast weights that have to be entered in SPM diverge from the simple scheme for a single group. The reason for this is that the group sizes have to be taken into account when deriving contrast weights in order to yield an estimable contrast. The situation is further complicated because the definition of factors in an experiment is dissociated from the configuration of the design matrix in SPM5. While the former are defined under the "Factors" section of the flexible factorial design, the actual regressors of the design matrix are configured under "Main Effects and Interactions". SPM5 does not impose any restriction on which main effects or interactions to include in the design matrix, but the decision affects the necessary contrast weights dramatically 1 . Although termed "Main Effects and Interactions", this section only configures the design matrix; it does not provide the contrast weights to test for these factorial effects. Understanding how to enter the contrast weights is the goal of this tutorial. This tutorial provides algorithms for computing the contrast weights for the main effects and interactions in experiments that include a group factor (e.g. patients and control subjects). We will demonstrate these for various design matrices that include regressors for different combinations of main effects and interactions (as configured in the above-mentioned section). Finally, we provide some preliminary advice regarding which effects should be included in the design matrix.
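The tutorial's core point, that group sizes must enter the contrast weights for the contrast to remain estimable, can be illustrated with a small sketch outside SPM. This is a hypothetical toy example (the function name and design are invented), not SPM code:

```python
import numpy as np

def group_main_effect_weights(n1, n2):
    """Contrast weights for a group1-minus-group2 main effect over
    per-subject regressors. Scaling each weight by 1/group size gives
    every group's cell mean unit weight, so the contrast still sums to
    zero when group sizes differ (the condition for estimability)."""
    return np.concatenate([np.full(n1, 1.0 / n1), np.full(n2, -1.0 / n2)])

# e.g. 12 patients vs. 15 controls
weights = group_main_effect_weights(12, 15)
```

With equal group sizes this reduces to the familiar symmetric +1/-1 pattern scaled by 1/n; with unequal groups, unscaled +1/-1 weights would sum to n1 - n2, which is nonzero, and the contrast would be biased toward the larger group.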
Article
Humans are highly sensitive to another's gaze direction, and use this information to support a range of social cognitive functions. Here we review recent studies that have begun to delineate a neural system for gaze perception. We focus in particular on a set of core gaze processes: perceptual coding of another's eye gaze direction, which may involve anterior superior temporal sulcus (STS); gaze-cued attentional orienting, which may be mediated by lateral parietal regions; and the experience of joint attention with another individual, which recruits medial prefrontal cortex. We conclude that understanding this gaze processing system will require a combination of multivariate pattern analysis approaches to characterise the role of individual nodes as well as connectivity-based methods to study interactions at the systems level.
Article
To understand social interactions we must decode dynamic social cues from seen faces. Here we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others.
Article
Human neuroimaging and event-related potential (ERP) studies suggest that ventral and lateral temporo-occipital cortex is sensitive to static faces and face parts. Recent fMRI data also show activation by facial movements. In this study we recorded from 22 posterior scalp locations in 20 normal right-handed males to assess ERPs evoked by viewing: (1) moving eyes and mouths in the context of a face; (2) moving and static eyes with and without facial context. N170 and P350 peak amplitude and latency data were analysed. N170 is an ERP previously shown to be preferentially responsive to face and eye stimuli, and P350 immediately follows N170. Major results were: (1) N170 was significantly larger over the bilateral temporal scalp to viewing opening mouths relative to closing mouths, and to eye aversion relative to eyes gazing at the observer; (2) at a focal region over the right inferior temporal scalp, N170 was significantly earlier to mouth opening relative to closing, and to eye aversion relative to eyes gazing at the observer; (3) the focal ERP effect of eye aversion occurred independent of facial context; (4) these differences cannot be attributed to movement per se, as they did not occur in a control condition in which checks moved in comparable areas of the visual field; (5) isolated static eyes produced N170s that were not significantly different from N170s to static full faces over the right inferior temporal scalp, unlike in the left hemisphere where face N170s were significantly larger than eye N170s; (6) unlike N170, P350 exhibited nonspecific changes as a function of stimulus movement. These results suggest that: (1) bilateral temporal cortex forms part of a system sensitive to biological motion, of which facial movements form an important subset; (2) there may be a specialised system for facial gesture analysis that provides input for neuronal circuitry dealing with social attention and the actions of others.
Article
Magnetoencephalography and electroencephalography measure, respectively, the magnetic field and the electric potential generated at the surface of the head by neuronal currents. The development of increasingly powerful methods, which rely essentially on the subject's anatomy as obtained from MRI scans, is a necessity in MEG/EEG if high spatial and temporal resolutions are to be achieved in functional brain imaging.
Article
Humans show a remarkable ability to discriminate others' gaze direction, even though a given direction can be conveyed by many physically dissimilar configurations of different eye positions and head views. For example, eye contact can be signaled by a rightward glance in a left-turned head or by direct gaze in a front-facing head. Such acute gaze discrimination implies considerable perceptual invariance. Previous human research found that superior temporal sulcus (STS) responds preferentially to gaze shifts [1], but the underlying representation that supports such general responsiveness remains poorly understood. Using multivariate pattern analysis (MVPA) of human functional magnetic resonance imaging (fMRI) data, we tested whether STS contains a higher-order, head view-invariant code for gaze direction. The results revealed a finely graded gaze direction code in right anterior STS that was invariant to head view and physical image features. Further analyses revealed similar gaze effects in left anterior STS and precuneus. Our results suggest that anterior STS codes the direction of another's attention regardless of how this information is conveyed and demonstrate how high-level face areas carry out fine-grained, perceptually relevant discrimination through invariance to other face features.
Article
Since the detection of the first biomagnetic signals in 1963, there has been continuous discussion of the properties and relative merits of bioelectric and biomagnetic measurements. This review article first briefly discusses the early history of this controversy. The theory of the independence and interdependence of bioelectric and biomagnetic signals is then explained, and a clinical study on ECG and MCG that strongly supports this theory is presented. The spatial resolutions of EEG and MEG are compared in detail, and the issue of the maximum number of electrodes in EEG is also discussed. Finally, some special properties of the EEG and MEG methods are described. In brief, the conclusion is that EEG and MEG are only partially independent and their spatial resolutions are about the same. Recording both of them brings some additional information on the bioelectric activity of the brain. These two methods have certain unique properties that make either of them more beneficial in certain applications.
Article
Face perception requires representation of invariant aspects that underlie identity recognition as well as representation of changeable aspects, such as eye gaze and expression, that facilitate social communication. Using functional magnetic resonance imaging (fMRI), we investigated the perception of face identity and eye gaze in the human brain. Perception of face identity was mediated more by regions in the inferior occipital and fusiform gyri, and perception of eye gaze was mediated more by regions in the superior temporal sulci. Eye-gaze perception also seemed to recruit the spatial cognition system in the intraparietal sulcus to encode the direction of another's gaze and to focus attention in that direction.
Article
Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).
Article
Event-related potentials (ERPs) associated with face perception were recorded with scalp electrodes from normal volunteers. Subjects performed a visual target detection task in which they mentally counted the number of occurrences of pictorial stimuli from a designated category, such as butterflies. In separate experiments, target stimuli were embedded within a series of other stimuli including unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces, and other nonface stimuli. Human faces evoked a negative potential at 172 msec (N170), which was absent from the ERPs elicited by other animate and inanimate nonface stimuli. N170 was largest over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was delayed when faces were presented upside-down, but its amplitude did not change. When presented in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces, while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human faces, in which the locations of inner face components were altered, elicited an N170 similar in amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to detect (as opposed to identify) human faces, similar to the "structural encoder" suggested by Bruce and Young (1986). A similar function has been proposed for the face-selective N200 ERP recorded from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans (Allison, McCarthy, Nobre, Puce, & Belger, 1994c). However, the differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex.
The voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.
Article
Several recent studies have begun to examine the neurocognitive mechanisms involved in perceiving and responding to eye contact, a salient social signal of interest and readiness for interaction. Laboratory experiments measuring observers' responses to pictorial instead of live eye gaze cues may, however, only vaguely approximate the real-life affective significance of gaze direction cues. To take this into account, we measured event-related brain potentials and subjective affective responses in healthy adults while they viewed live faces with a neutral expression through an electronic shutter, and faces as pictures on a computer screen. Direct gaze elicited greater face-sensitive N170 amplitudes and early posterior negativity potentials than averted gaze or closed eyes, but only in the live condition. The results show that early-stage processing of facial information is enhanced by another person's direct gaze when the person is faced live. We propose that seeing a live face with a direct gaze is processed more intensely than a face with averted gaze or closed eyes, as the direct gaze is capable of intensifying the feeling of being the target of the other's interest and intentions. These results may have implications for the use of pictorial stimuli in social cognition studies.
Article
An important difference between magnetoencephalography (MEG) and electroencephalography (EEG) is that MEG is insensitive to radially oriented sources. We quantified computationally the dependency of MEG and EEG on the source orientation using a forward model with realistic tissue boundaries. Similar to the simpler case of a spherical head model, in which MEG cannot see radial sources at all, for most cortical locations there was a source orientation to which MEG was insensitive. The median value for the ratio of the signal magnitude for the source orientation of the lowest and the highest sensitivity was 0.06 for MEG and 0.63 for EEG. The difference in the sensitivity to the source orientation is expected to contribute to systematic differences in the signal-to-noise ratio between MEG and EEG.
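The spherical-model result mentioned above — that MEG cannot see radial sources at all — follows from the Sarvas (1987) formula, in which the field depends on the cross product of the dipole moment with its position vector. Below is an illustrative sketch under the usual assumptions (spherically symmetric conductor centred at the origin, SI units); the sensor and source positions are arbitrary example values, not from the study.

```python
import numpy as np

def sarvas_b(r, r0, q):
    """Magnetic field B(r) of a current dipole q at position r0 inside a
    spherically symmetric conductor centred at the origin (Sarvas, 1987).

    r, r0, q: 3-vectors (sensor position, dipole position, dipole moment).
    """
    mu0 = 4 * np.pi * 1e-7
    a_vec = r - r0                    # vector from dipole to sensor
    a = np.linalg.norm(a_vec)
    rn = np.linalg.norm(r)
    F = a * (rn * a + rn ** 2 - np.dot(r0, r))
    grad_F = (a ** 2 / rn + np.dot(a_vec, r) / a + 2 * a + 2 * rn) * r \
             - (a + 2 * rn + np.dot(a_vec, r) / a) * r0
    qxr0 = np.cross(q, r0)
    return mu0 / (4 * np.pi * F ** 2) * (F * qxr0 - np.dot(qxr0, r) * grad_F)

# A radial dipole (moment parallel to its position vector) makes q x r0
# vanish, so B is identically zero outside the sphere; a tangential
# dipole of the same strength produces a measurable field.
sensor = np.array([0.0, 0.0, 0.12])   # sensor 12 cm above centre
src = np.array([0.0, 0.0, 0.07])      # source 7 cm from centre
b_radial = sarvas_b(sensor, src, np.array([0.0, 0.0, 1e-8]))      # q parallel to r0
b_tangential = sarvas_b(sensor, src, np.array([1e-8, 0.0, 0.0]))  # q perpendicular to r0
```

In a realistically shaped head model the cancellation is no longer exact, which is why the study reports a small but nonzero median sensitivity ratio (0.06) for MEG rather than zero.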
Article
Event-related potentials were recorded from adults and 4-month-old infants while they watched pictures of faces that varied in emotional expression (happy and fearful) and in gaze direction (direct or averted). Results indicate that emotional expression is processed temporally independently of gaze direction at early stages of processing, and the two only become integrated at later latencies. Facial expressions affected the face-sensitive ERP components in both adults (N170) and infants (N290 and P400), while gaze direction and the interaction between facial expression and gaze affected the posterior channels in adults and the frontocentral channels in infants. Specifically, in adults, this interaction reflected a greater responsiveness to fearful expressions with averted gaze (an avoidance-oriented emotion), and to happy faces with direct gaze (an approach-oriented emotion). In infants, a larger activation to a happy expression at the frontocentral negative component (Nc) was found, and planned comparisons showed that it was due to the direct gaze condition. Taken together, these results support the shared signal hypothesis in adults, but only to a lesser extent in infants, suggesting that experience could play an important role.
Article
This paper seeks to bring together two previously separate research traditions: research on spatial orienting within the visual cueing paradigm and research into social cognition, addressing our tendency to attend in the direction that another person looks. Cueing methodologies from mainstream attention research were adapted to test the automaticity of orienting in the direction of seen gaze. Three studies manipulated the direction of gaze in a computerized face, which appeared centrally in a frontal view during a peripheral letter-discrimination task. Experiments 1 and 2 found faster discrimination of peripheral target letters on the side the computerized face gazed towards, even though the seen gaze did not predict target side, and despite participants being asked to ignore the face. This suggests reflexive covert and/or overt orienting in the direction of seen gaze, arising even when the observer has no motivation to orient in this way. Experiment 3 found faster letter discrimination on the side the computerized face gazed towards even when participants knew that target letters were four times as likely on the opposite side. This suggests that orienting can arise in the direction of seen gaze even when counter to intentions. The experiments illustrate that methods from mainstream attention research can be usefully applied to social cognition, and that studies of spatial attention may profit from considering its social function.
Article
Editor’s Note: Science has always relied on reproducibility to build confidence in experimental results. Now, the most comprehensive investigation ever done about the rate and predictors of reproducibility in social and cognitive sciences has found that regardless of the analytic method or criteria used, fewer than half of the original findings were successfully replicated. While a failure to reproduce does not necessarily mean the original report was incorrect, the results suggest that more rigorous methods are long overdue.
Book
Magnetoencephalography (MEG) is an invaluable functional brain imaging technique that provides direct, real-time monitoring of neuronal activity necessary for gaining insight into dynamic cortical networks. Our intentions with this book are to cover the richness and transdisciplinary nature of the MEG field, make it more accessible to newcomers and experienced researchers and to stimulate growth in the MEG area. The book presents a comprehensive overview of MEG basics and the latest developments in methodological, empirical and clinical research, directed toward master and doctoral students, as well as researchers. There are three levels of contributions: 1) tutorials on instrumentation, measurements, modeling, and experimental design; 2) topical reviews providing extensive coverage of relevant research topics; and 3) short contributions on open, challenging issues, future developments and novel applications. The topics range from neuromagnetic measurements, signal processing and source localization techniques to dynamic functional networks underlying perception and cognition in both health and disease. Topical reviews cover, among others: development on SQUID-based and novel sensors, multi-modal integration (low field MRI and MEG; EEG and fMRI), Bayesian approaches to multi-modal integration, direct neuronal imaging, novel noise reduction methods, source-space functional analysis, decoding of brain states, dynamic brain connectivity, sensory-motor integration, MEG studies on perception and cognition, thalamocortical oscillations, fetal and neonatal MEG, pediatric MEG studies, cognitive development, clinical applications of MEG in epilepsy, pre-surgical mapping, stroke, schizophrenia, stuttering, traumatic brain injury, post-traumatic stress disorder, depression, autism, aging and neurodegeneration, MEG applications in cognitive neuropharmacology and an overview of the major open-source analysis tools. © 2014 Springer-Verlag Berlin Heidelberg. 
Article
In recent years, many new cortical areas have been identified in the macaque monkey. The number of identified connections between areas has increased even more dramatically. We report here on (1) a summary of the layout of cortical areas associated with vision and with other modalities, (2) a computerized database for storing and representing large amounts of information on connectivity patterns, and (3) the application of these data to the analysis of hierarchical organization of the cerebral cortex. Our analysis concentrates on the visual system, which includes 25 neocortical areas that are predominantly or exclusively visual in function, plus an additional 7 areas that we regard as visual-association areas on the basis of their extensive visual inputs. A total of 305 connections among these 32 visual and visual-association areas have been reported. This represents 31% of the possible number of pathways if each area were connected with all others. The actual degree of connectivity is likely to be closer to 40%. The great majority of pathways involve reciprocal connections between areas. There are also extensive connections with cortical areas outside the visual system proper, including the somatosensory cortex, as well as neocortical, transitional, and archicortical regions in the temporal and frontal lobes. In the somatosensory/motor system, there are 62 identified pathways linking 13 cortical areas, suggesting an overall connectivity of about 40%. Based on the laminar patterns of connections between areas, we propose a hierarchy of visual areas and of somatosensory/motor areas that is more comprehensive than those suggested in other recent studies. The current version of the visual hierarchy includes 10 levels of cortical processing. Altogether, it contains 14 levels if one includes the retina and lateral geniculate nucleus at the bottom as well as the entorhinal cortex and hippocampus at the top.
Within this hierarchy, there are multiple, intertwined processing streams, which, at a low level, are related to the compartmental organization of areas V1 and V2 and, at a high level, are related to the distinction between processing centers in the temporal and parietal lobes. However, there are some pathways and relationships (about 10% of the total) whose descriptions do not fit cleanly into this hierarchical scheme for one reason or another. In most instances, though, it is unclear whether these represent genuine exceptions to a strict hierarchy rather than inaccuracies or uncertainties in the reported assignment.
Article
Activation in or near the fusiform gyrus was estimated in response to faces and control stimuli. Activation peaked at 165 ms and was strongest to digitized photographs of human faces, regardless of whether they were presented in color or grayscale, suggesting that face- and color-specific areas are functionally separate. Schematic sketches evoked ~30% less activation than did face photographs. Scrambling the locations of facial features reduced the response by ~25% in either hemisphere, suggesting that configurational versus analytic processing is not lateralized at this latency. Animal faces evoked ~50% less activity, and common objects, animal bodies or sensory controls evoked ~80% less activity than human faces. The (small) responses evoked by meaningless control images were stronger when they included surfaces and shading, suggesting that the fusiform gyrus may use these features in constructing its face-specific response. Putative fusiform activation was not significantly related to stimulus repetition, gender or emotional expression. A midline occipital source significantly distinguished between faces and control images as early as 110 ms, but was more sensitive to sensory qualities. This source significantly distinguished happy and sad faces from those with neutral expressions. We conclude that the fusiform gyrus may selectively encode faces at 165 ms, transforming sensory input for further processing.
Article
Four experiments investigate the hypothesis that cues to the direction of another's social attention produce a reflexive orienting of an observer's visual attention. Participants were asked to make a simple detection response to a target letter which could appear at one of four locations on a visual display. Before the presentation of the target, one of these possible locations was cued by the orientation of a digitized head stimulus, which appeared at fixation in the centre of the display. Uninformative and to-be-ignored cueing stimuli produced faster target detection latencies at cued relative to uncued locations, but only when the cues appeared 100 msec before the onset of the target (Experiments 1 and 2). The effect was uninfluenced by the introduction of a to-be-attended and relatively informative cue (Experiment 3), but was disrupted by the inversion of the head cues (Experiment 4). It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven or exogenous orienting mechanism which can be engaged by social attention signals.
Article
This study investigated whether the direct gaze of others influences attentional disengagement from faces in an experimental situation. Participants were required to fixate on a centrally presented face with varying gaze directions and to detect the appearance of a peripheral target as quickly as possible. Results revealed that target detection was delayed when the preceding face was directly gazing at the subject (direct gaze), as compared with an averted gaze (averted gaze) or with closed eyes (closed eyes). This effect disappeared when a temporal gap was inserted between the offset of the centrally presented face and the onset of a peripheral target, suggesting that attentional disengagement contributed to the delayed response in the direct gaze condition. The response delay to direct gaze was not found when the contrast polarity of eyes in the facial stimuli was reversed, reinforcing the importance of gaze perception in delayed disengagement from direct gaze.
Article
In this chapter we shall try to determine how the human eye examines complex objects and what principles govern this process. For example, it may seem to some people that when we examine an object we must trace its outlines with our eye and, by analogy with tactile sensation, “palpate” the object. Others may consider that, when looking at a picture, we scan the whole of its surface more or less uniformly with our eyes.
Article
The stare-in-the-crowd effect refers to the finding that a visual search for a target of staring eyes among averted-eyes distracters is more efficient than the search for an averted-eyes target among staring distracters. This finding could indicate that staring eyes are prioritized in the processing of the search array, so that attention is more likely to be directed to their location than to any other. However, visual search is a complex process, which depends not only upon the properties of the target, but also on the similarity between the target of the search and the distractor items, and between the distractor items themselves. Across five experiments, we show that the search asymmetry diagnostic of the stare-in-the-crowd effect is more likely to be the result of a failure to control for the similarity among distracting items between the two critical search conditions than of any special attention-grabbing property of staring gazes. Our results suggest that, contrary to results reported in the literature, staring gazes are not prioritized by attention in visual search.
Article
Humans detect faces with direct gaze more rapidly than they do faces with averted gaze. Evidence suggests that the visual information of faces with direct gaze reaches conscious awareness faster than that of faces with averted gaze. This suggests that faces with direct gaze are effectively processed in the brain before they reach conscious awareness; however, it is unclear how the unconscious perception of faces with direct gaze is processed in the brain. To address this unanswered question, we recorded event-related potentials while observers viewed faces with direct or averted gaze that were either visible or rendered invisible during continuous flash suppression. We observed that invisible faces with direct gaze elicited significantly larger negative deflections than did invisible faces with averted gaze at 200, 250, and 350 ms over the parietofrontal electrodes, whereas we did not observe such effects when facial images were visible. Our results suggest that the visual information of faces with direct gaze is preferentially processed in the brain when it is presented unconsciously.
Article
A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
Article
It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, if researchers just make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to "confirm" true effects, and bias in reported (inflated) effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), which larger studies (n=100) do with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments needed for researchers to refute the claim that small low-powered studies have a higher degree of scientific evidence than large high-powered studies.
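The abstract's n=16 vs. n=100 comparison can be reproduced approximately with a short calculation. The sketch below uses the Fisher z-transform approximation (standard library only), so the exact p-values differ slightly from the t-based value the abstract quotes, but the contrast is the same: r=.50 is barely distinguishable from noise at n=16 and overwhelmingly significant at n=100.

```python
import math

def pearson_p_fisher(r, n):
    """Approximate two-tailed p-value for a Pearson correlation r with
    n observations, via the Fisher z-transform:
    z = atanh(r) * sqrt(n - 3), which is approximately standard normal
    under the null hypothesis of zero correlation."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2))  # == 2 * (1 - Phi(|z|))

p_small = pearson_p_fisher(0.50, 16)   # ~0.05: barely distinguishable from noise
p_large = pearson_p_fisher(0.50, 100)  # ~1e-7: decisive
```

The same r=.50 thus moves from a borderline result to a roughly million-fold smaller p-value purely through the larger sample, which is the precision argument the paper makes.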
Article
Various pathway models for emotional processing suggest early prefrontal contributions to affective stimulus evaluation. Yet, electrophysiological evidence for such rapid modulations is still sparse. In a series of four MEG/EEG studies which investigated associative learning in vision and audition using a novel MultiCS Conditioning paradigm, many different neutral stimuli (faces, tones) were paired with aversive and appetitive events in only two to three learning instances. Electrophysiological correlates of neural activity revealed highly significant amplified processing for conditioned stimuli within distributed prefrontal and sensory cortical networks. In both vision and audition, affect-specific responses occurred in two successive waves of rapid (vision: 50–80 ms, audition: 25–65 ms) and mid-latency (vision: >130 ms, audition: >100 ms) processing. Interestingly, behavioral measures indicated that MultiCS Conditioning successfully prevented contingency awareness. We conclude that affective processing rapidly recruits highly elaborate and widely distributed networks with substantial capacity for fast learning and excellent resolving power.
Article
Normal subjects were presented with a simple line drawing of a face looking left, right, or straight ahead. A target letter F or T then appeared to the left or the right of the face. All subjects participated in target detection, localization, and identification response conditions. Although subjects were told that the line drawing’s gaze direction (the cue) did not predict where the target would occur, response time in all three conditions was reliably faster when gaze was toward versus away from the target. This study provides evidence for covert, reflexive orienting to peripheral locations in response to uninformative gaze shifts presented at fixation. The implications for theories of social attention and visual orienting are discussed, and the brain mechanisms that may underlie this phenomenon are considered.
Article
As an expert reviewer, it is sometimes necessary to ensure a paper is rejected. This can sometimes be achieved by highlighting improper statistical practice. This technical note provides guidance on how to critique the statistical analysis of neuroimaging studies to maximise the chance that the paper will be declined. We will review a series of critiques that can be applied universally to any neuroimaging paper and consider responses to potential rebuttals that reviewers might encounter from authors or editors.
Article
We present an empirical Bayesian scheme for distributed multimodal inversion of electromagnetic forward models of EEG and MEG signals. We used a generative model with common source activity and separate error components for each modality. Under this scheme, the weightings of error for each modality, relative to source components, are estimated automatically from the data, by optimising the model-evidence. This obviates the need for arbitrary user-defined weightings. To evaluate the scheme, we acquired three types of data simultaneously from twelve participants: total magnetic flux (as recorded by 102 magnetometers), orthogonal in-plane gradients of the magnetic field (as recorded by 204 planar gradiometers) and voltage differences in the electrical field (recorded by 70 electrodes). We assessed the relative precision of each sensor-type in terms of signal-to-noise ratio (SNR); using empirical sample variances and optimised estimators from the generative model. We then compared the localisation of face-evoked responses, using each modality separately, with that obtained by their “fusion” under the common generative model. Finally, we quantified the conditional precisions of the source estimates using their posterior covariance, confirming that EEG can improve MEG-based source reconstructions.
Article
The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic vs. static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition.
Article
Recorded electric potentials and magnetic fields due to cortical electrical activity have spatial spread even if their underlying brain sources are focal. Consequently, as a result of source cancellation, loss in signal amplitude and reduction in the effective signal-to-noise ratio can be expected when distributed sources are active simultaneously. Here we investigate the cancellation effects of EEG and MEG through the use of an anatomically correct forward model based on structural MRI acquired from 7 healthy adults. A boundary element model (BEM) with four compartments (brain, cerebrospinal fluid, skull and scalp) and highly accurate cortical meshes (~300,000 vertices) were generated. Distributed source activations were simulated using contiguous patches of active dipoles. To investigate cancellation effects in both EEG and MEG, quantitative indices were defined (source enhancement, cortical orientation disparity) and computed for varying values of the patch radius as well as for automatically parcellated gyri and sulci. Results were calculated for each cortical location, averaged over all subjects using a probabilistic atlas, and quantitatively compared between MEG and EEG. As expected, MEG sensors were found to be maximally sensitive to signals due to sources tangential to the scalp, and minimally sensitive to radial sources. Compared to EEG, however, MEG was found to be much more sensitive to signals generated antero-medially, notably in the anterior cingulate gyrus. Given that sources of activation cancel each other according to the orientation disparity of the cortex, this study provides useful methods and results for quantifying the effect of source orientation disparity upon source cancellation.
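The cancellation effect described here — topographies of simultaneously active dipoles partially annihilating each other when their cortical orientations disagree — can be captured by a simple ratio index. The index below is a generic illustration of this idea (summed-field norm over summed norms), not the paper's exact "source enhancement" definition, and the patches are synthetic rather than BEM-derived:

```python
import numpy as np

def cancellation_index(topographies):
    """Norm of the summed sensor field divided by the sum of individual norms.
    1.0 means no cancellation (aligned sources); near 0 means strong cancellation."""
    summed = topographies.sum(axis=0)
    return np.linalg.norm(summed) / np.linalg.norm(topographies, axis=1).sum()

rng = np.random.default_rng(1)
base = rng.standard_normal(32)  # one dipole's sensor topography (toy, 32 sensors)

# "Gyral crown" patch: dipoles roughly aligned, so topographies are near-parallel
gyral = np.array([base + 0.05 * rng.standard_normal(32) for _ in range(10)])
# "Sulcal wall" patch: opposing banks flip the dipole orientation, so the
# topographies alternate in sign and largely cancel in the summed field
sulcal = np.array([((-1) ** k) * base + 0.05 * rng.standard_normal(32)
                   for k in range(10)])

print(cancellation_index(gyral), cancellation_index(sulcal))
```

With aligned orientations the index stays close to 1, while the alternating "sulcal" patch collapses towards 0 — the orientation-disparity-driven loss of signal amplitude the abstract quantifies over real cortical meshes.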
Article
Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than faces with averted gaze. Control experiments ruled out the influence of low-level stimulus differences and differential response criteria. These results indicate an enhanced unconscious representation of direct gaze, enabling the automatic and rapid detection of other individuals making eye contact with the observer.
Article
Perceived gaze contact in seen faces may convey important social signals. We examined whether gaze perception affects face processing during two tasks: Online gender judgement, and later incidental recognition memory. Individual faces were presented with eyes directed either straight towards the viewer or away, while these faces were seen in either frontal or three-quarters view. Participants were slower to make gender judgements for faces with direct versus averted eye gaze, but this effect was particularly pronounced for faces with opposite gender to the observer, and seen in three-quarters view. During subsequent surprise recognition-memory testing, recognition was better for faces previously seen with direct than averted gaze, again especially for the opposite gender to the observer. The effect of direct gaze was stronger in both tasks when the head was seen in three-quarters rather than in frontal view, consistent with the greater salience of perceived eye contact for deviated faces. However, in the memory test, face recognition was also relatively enhanced for faces of opposite gender in front views when their gaze was averted rather than direct. Together, these results indicate that perceived eye contact can interact with facial processing during gender judgements and recognition memory, even when gaze direction is task-irrelevant, and particularly for faces of opposite gender to the observer (an influence which controls for stimulus factors when considering observers of both genders). These findings appear consistent with recent neuroimaging evidence that social facial cues can modulate visual processing in cortical regions involved in face processing and memory, presumably via interconnections with brain systems specialized for gaze perception and social monitoring.
Article
The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the lookers' head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction, with different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimuli. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.
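One simple way to read the kind of cue-combination account sketched in this abstract is as a weighted additive rule: perceived gaze direction is a combination of the eye-turn cue (iris eccentricity) and the head-turn cue, and "direct gaze" is perceived when the combination nulls out. The weights and sign conventions below are hypothetical illustrations, not the paper's fitted model:

```python
def perceived_gaze(eye_turn_deg, head_turn_deg, w_eye=1.0, w_head=0.5):
    """Toy additive cue combination (hypothetical weights): perceived gaze
    direction relative to the observer, in degrees; 0 means direct gaze."""
    return w_eye * eye_turn_deg + w_head * head_turn_deg

# Formal condition for perceived direct gaze under this sketch: the
# eye-in-head turn must cancel the weighted head turn,
#   eye_turn = -(w_head / w_eye) * head_turn
head = 20.0
eye_needed = -(0.5 / 1.0) * head
print(perceived_gaze(eye_needed, head))   # 0.0 -> perceived as direct
print(perceived_gaze(0.0, head))          # head turn alone shifts perceived gaze
```

The point of the sketch is only the structure of such an account: a head-orientation cue (such as face eccentricity) contributes to perceived gaze even when iris eccentricity is held constant, and direct gaze corresponds to a nulling condition between the two cues.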
Article
fMRI studies have reported three regions in human ventral visual cortex that respond selectively to faces: the occipital face area (OFA), the fusiform face area (FFA), and a face-selective region in the superior temporal sulcus (fSTS). Here, we asked whether these areas respond to two first-order aspects of the face argued to be important for face perception, face parts (eyes, nose, and mouth), and the T-shaped spatial configuration of these parts. Specifically, we measured the magnitude of response in these areas to stimuli that (i) either contained real face parts, or did not, and (ii) either had veridical face configurations, or did not. The OFA and the fSTS were sensitive only to the presence of real face parts, not to the correct configuration of those parts, whereas the FFA was sensitive to both face parts and face configuration. Further, only in the FFA was the response to configuration and part information correlated across voxels, suggesting that the FFA contains a unified representation that includes both kinds of information. In combination with prior results from fMRI, TMS, MEG, and patient studies, our data illuminate the functional division of labor in the OFA, FFA, and fSTS.
Article
Social attention is conveyed primarily by gaze, but also by head and body orientation. These cues not only signal a seen agent's direction of attention but are also used to infer their current goals and intentions. Here, we review recent research showing that different gaze, head and body orientations are represented by distinct neural mechanisms, and show that a statistical summary of recent neuroimaging studies reveals a widespread neural network for gaze processing. We discuss how this network subserves visual analysis of social attention cues, and imitative attention shifts and mental state attributions from these cues. We also review new research indicating that the posterior superior temporal sulcus region responds to the inferred intentionality of social cues, and consider the development of the gaze perception system.
Article
To determine how emotional information modulates subsequent traces for repeated stimuli, we combined simultaneous electroencephalography (EEG) and magnetoencephalography (MEG) measures during long-lag incidental repetition of fearful, happy, and neutral faces. Repetition effects were modulated by facial expression in three different time windows, starting as early as 40-50 ms in both EEG and MEG, then arising at the time of the N170/M170, and finally between 280-320 ms in MEG only. The very early repetition effect, observed at 40-50 ms over occipito-temporo-parietal regions, showed a different MEG topography according to the facial expression. This differential response to fearful, happy and neutral faces suggests the existence of very early discriminative visual processing of expressive faces, possibly based on the low-level physical features typical of different emotions. The N170 and M170 face-selective components both showed repetition enhancement selective to neutral faces, with greater amplitude for emotional than neutral faces on the first but not the second presentation. These differential repetition effects may reflect valence acquisition for the neutral faces due to repetition, and suggest a combined influence of emotion- and experience-related factors on the early stage of face encoding. Finally, later repetition effects consisted of an enhanced M300 (MEG) between 280 and 320 ms for fearful relative to happy and neutral faces that occurred on the first presentation, but levelled out on the second presentation. This effect may correspond to the higher arousing value of fearful stimuli that might habituate with repetition. Our results reveal that multiple stages of face processing are affected by the repetition of emotional information.