Electrophysiological Studies of Face Perception in Humans
Shlomo Bentin,
Hebrew University, Israel
Truett Allison,
West Haven VA Medical Center and Yale University School of Medicine
Aina Puce,
West Haven VA Medical Center and Yale University School of Medicine
Erik Perez, and
Hebrew University, Israel
Gregory McCarthy
West Haven VA Medical Center and Yale University School of Medicine
Abstract
Event-related potentials (ERPs) associated with face perception were recorded with scalp electrodes
from normal volunteers. Subjects performed a visual target detection task in which they mentally
counted the number of occurrences of pictorial stimuli from a designated category such as butterflies.
In separate experiments, target stimuli were embedded within a series of other stimuli including
unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces,
and other nonface stimuli. Unman faces evoked a negative potential at 172 msec (N170), which was
absent from the ERPs elicited by other animate and inanimate nonface stimuli. N170 was largest
over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was
delayed when faces were presented upside-down, but its amplitude did not change. When presented
in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces,
while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human
faces, in which the locations of inner face components were altered, elicited an N170 similar in
amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items
of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to
detect (as opposed to identify) human faces, similar to the “structural encoder” suggested by Bruce
and Young (1986). A similar function has been proposed for the face-selective N200 ERP recorded
from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans
(Allison, McCarthy, Nobre, Puce, & Belger, 1994c). However, the differential sensitivity of N170
to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex.
The voltage distribution of N170 over the scalp is consistent with a neural generator located in the
occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.
INTRODUCTION
Face recognition has been investigated extensively in humans and monkeys using behavioral
(e.g., Bruce, 1988), neuroimaging (Haxby, Grady, Horwitz, Salerno, Ungerleider, Mishkin, &
Schapiro, 1993; Puce, Allison, Gore, & McCarthy, 1995; Sergent, Ohta, & MacDonald,
1992), and electrophysiological methods (Allison, Ginter, McCarthy, Nobre, Puce, Luby, &
© 1996 Massachusetts Institute of Technology
Reprint requests should be sent to Shlomo Bentin, Department of Psychology, Hebrew University, Jerusalem 91905, Israel.
Published in final edited form as:
J Cogn Neurosci. 1996 November; 8(6): 551–565. doi:10.1162/jocn.1996.8.6.551.
Spencer, 1994a; Allison, McCarthy, Belger, Puce, Luby, Spencer, & Bentin, 1994b; Allison
et al., 1994c; Desimone, 1991; Gross, Rodman, Gochin, & Colombo, 1993; Jeffreys, 1993;
Kendrick & Baldwin, 1987; Perrett, Mistlin, Chitty, Smith, Potter, Broennimann, & Harries,
1988; Seeck & Grüsser, 1992). That specialized brain areas contribute to face recognition is
suggested by occasional patients with posterior lesions who suffer a specific deficiency in
identifying familiar faces, a syndrome labeled prosopagnosia (Bodamer, 1947; see reviews by
Damasio, Tranel, & Damasio, 1990; Whitely & Warrington, 1977). Initial studies indicated
that lesions within parietooccipital cortical regions were responsible (Benton & Van Allen,
1972), but more recent studies have demonstrated that prosopagnosia is prevalent following
lesions in the inferior occipitotemporal region (Damasio, Damasio, & Van Hoesen, 1982;
Meadows, 1974). Although frequently observed with bilateral damage (e.g., Damasio et al.,
1982; Nardelli, Buananno, Coccia, Fiaschi, Terzian, & Rizzuto, 1982), there is evidence that
prosopagnosia may occur following unilateral damage to the right, but not the left, cerebral
hemisphere (De Renzi, 1986; Landis, Cummings, Christen, Bogen, & Imhof, 1986; Michel,
Poncet, & Signoret, 1989).
Additional information concerning the localization of neural mechanisms for face recognition
has been provided by neuroimaging studies using positron emission tomography (PET). Haxby
et al. (1993) found that face matching was associated with bilateral increases of blood flow in
occipital-temporal cortex. To evaluate blood flow changes specific to faces, Haxby et al.
(1993) compared face and location matching tasks and found that regions within the anterior
and posterior fusiform gyrus showed the greatest differential activity for faces. These regions
were activated bilaterally, but were larger in the right hemisphere. Sergent et al. (1992)
compared two active face tasks (face identity and gender discrimination) and found significant
differential activity bilaterally in the medial anterior temporal gyri, fusiform gyri, and temporal
pole. Again, activation was somewhat larger on the right side. The right lingual and
parahippocampal gyri also showed increased activity, as did the left middle temporal gyrus.
Bilateral, but right predominant activation of the fusiform gyri was common to both Haxby et
al. (1993) and Sergent et al. (1992). Recent studies using functional magnetic resonance
imaging (fMRI) have also shown activation of the fusiform gyri to unfamiliar faces (Clark,
Keil, Lalonde, Maisog, Courtney, Karni, Ungerleider, & Haxby, 1994; Puce et al., 1995).
One conclusion that can be drawn from the existing neuropsychological and neuroimaging
literature is that face recognition utilizes a specialized neural subsystem for processing
physiognomic information and relating the perceived input to prestored face representations.
This subsystem appears to be localized in posterior temporal and inferior occipitotemporal
regions, particularly in the right hemisphere (e.g., Kay & Levin, 1982; Overman & Doty,
1982). Such an interpretation is consistent with some cognitive models of face recognition
(e.g., Bruce & Young, 1986). However, other investigators have questioned whether face
recognition is performed by a specialized neural subsystem. Deficits in face recognition may
result from a mild form of a more general visual agnosia that appears specific simply because
faces are more complex than other objects (Gloning, Gloning, Jellinger, & Quatember,
1970). Alternatively, deficits in face recognition may reflect a general difficulty in discriminating
within-category items, with preserved ability to discriminate between categories (Damasio et
al., 1982). However, these interpretations are inconsistent with the double dissociation found
among studies of those (relatively rare) patients who exhibit object agnosia with intact face
recognition (e.g., McCarthy & Warrington, 1986), and patients who suffer exclusively from
an inability to recognize familiar faces (De Renzi, 1986).
Data concerning the neural mechanisms of face recognition have also been obtained in
electrophysiological studies in monkeys. Single unit recordings have revealed cells in the
inferotemporal cortex that respond to monkey and human faces (Bruce, Desimone, & Gross,
1981; Desimone, Albright, Gross, & Bruce, 1984; Young & Yamane, 1992) and to face
components (Perrett, Rolls, & Caan, 1982), but not to other complex stimuli such as snakes,
spiders, or food (Baylis, Rolls, & Leonard, 1985; Desimone et al., 1984; Rolls & Baylis,
1986; Saito, Yukie, Tanaka, Hikosaka, Fukada, & Iwai, 1986). This pattern supports the
existence of a neural circuit specialized for face recognition that includes the inferior temporal
gyrus and the banks of the superior temporal sulcus (Baylis, Rolls, & Leonard, 1987). Face-
specific cells are highly sensitive to the natural appearance of the stimuli; line drawings or
schematic representations of faces elicit only weak responses (Bruce et al., 1981; Perrett et al.,
1982). Although most face-specific cells respond to rotated or inverted faces (Bruce, 1982;
Hasselmo, Rolls, Baylis, & Nalwa, 1989; Overman & Doty, 1982), those responses are weaker
and longer in latency compared to those evoked by upright faces (Perrett et al., 1988). Some
units respond differentially depending upon the angle at which the face is viewed (e.g.,
Desimone et al., 1984). Some cells in inferotemporal cortex respond selectively to face
components such as eyes, mouth, or hair (Perrett, Mistlin, & Chitty, 1987), but none has
responded to pictures of faces in which these components were spatially rearranged (Desimone
et al., 1984; Perrett et al., 1988). Other authors reported that small spatial distortions in the
distances between the eyes, or between the eyes and mouth, reduced the overall probability of
a cell’s response (Yamane, Kaji, & Kawano, 1988).
Recording brain electrical activity elicited by faces in humans may identify the
neuroanatomical organization and the functional properties of the putative face recognition
subsystem. In a recent study Allison et al. (1994a) recorded evoked field potentials directly
from the surface of the occipitotemporal cortex and found that faces evoked a negative
component with a mean latency of 192 msec (N200). Face-specific N200s were recorded from
discrete regions that were not activated by other complex stimuli. However, nearby regions
were selectively activated by other stimulus categories, such as letterstrings or colored
checkerboards (Allison et al., 1994c). These results suggest a considerable degree of functional
specialization within the ventral visual pathway.
In the present studies, we have recorded event-related potentials (ERPs) from scalp electrodes
in normal volunteer subjects. ERPs recorded from electrodes over the lateral posterior scalp
were elicited by faces and some face components, but not by other complex stimuli. The
sensitivity of these ERPs to manipulations of face stimuli provides additional information
regarding the functional properties of human neuronal subsystems related to face recognition.
EXPERIMENT 1
Experiment 1 was conducted to determine whether face-specific ERPs could be recorded from
scalp electrodes using the task of Allison et al. (1994a) in which face-specific ERPs were
recorded from subdural electrodes. Subjects were presented with five categories of visual
stimuli (FACES, SCRAMBLED FACES, CARS, SCRAMBLED CARS, and BUTTERFLIES) and asked to mentally count the number
of occurrences of a specified target category (BUTTERFLIES). ERPs were averaged separately for
each category.
Results
Figure 1A presents grand-averaged ERPs elicited by FACES (solid line) and SCRAMBLED FACES (dashed
line) from a 14-electrode montage. The ERPs in Figure 1 are arranged to approximate the
electrode locations on the scalp. Of particular interest is the large negative ERP with a peak
latency of 172 msec (N170) recorded from T5 and T6, which was largest for FACES and smaller
for the equally luminant SCRAMBLED FACES. An earlier negative ERP (N100) was recorded from Oz
(Fig. 1A), which was larger for scrambled than unscrambled stimuli [F(3,27) = 2.99, MSe =
6.54, p < 0.05]. A positive ERP at longer latency (P190) was evoked by FACES at frontocentral
scalp locations (Fig. 1A, Cz). ERPs elicited by FACES and by SCRAMBLED FACES also differed in the 250–
500 msec latency range; at frontal locations (e.g., Fz) a positivity evoked by FACES was not seen
to SCRAMBLED FACES, and at temporal locations (e.g., T6) SCRAMBLED FACES evoked a larger positivity than
did FACES. The target category BUTTERFLIES elicited a large P300, which was maximal over the
posterior scalp (Pz) at 490 msec (Fig. 1C). In this paper we will focus upon N170, the earliest
ERP related to face processing.
Figure 1B compares N170 amplitudes elicited by all nontarget stimulus categories. An
ANOVA showed that the amplitude of N170 at T5 (left hemisphere) and T6 (right hemisphere)
was significantly larger for FACES (3.55 µV) than SCRAMBLED FACES (0.20 µV), CARS (0.22 µV), and
SCRAMBLED CARS (1.38 µV) [F(1,9) = 11.43, MSe = 30.37, p < 0.0001], with no significant differences
among these latter three categories. The target category BUTTERFLIES also did not evoke an
appreciable N170 (not shown). N170 for FACES was larger (i.e., more negative) over the right
(4.07 µV) than the left hemisphere (3.02 µV), but this difference failed to reach statistical
significance (p = 0.3).
Discussion
Experiment 1 demonstrated that ERPs differentially sensitive to face stimuli can be recorded
with scalp electrodes. FACES elicited a negative ERP, N170, which was distributed focally over
the lateral posterior scalp. N170 was not elicited by stimuli of another complex stimulus
category, CARS, which shared some face-like characteristics by virtue of their grilles and
headlamps, nor by BUTTERFLIES, an animate category. This pattern of results for N170 is similar to
the subdural recordings of Allison et al. (1994a) in which faces (but not cars, butterflies, or
scrambled stimuli) elicited an N200 from the inferior surface of the temporal lobe. The
relationship of the N170 recorded in the present study to the N200 recorded by Allison et al.
(1994a) will be considered in the General Discussion. Scrambled stimuli did not elicit N170,
although scrambling increased the amplitude of the occipital N100. Allison et al. (1994c)
reported that scrambled stimuli elicited larger ERPs from peristriate cortex presumably due to
their many high-contrast edges. N170 was present at both T5 (left hemisphere) and T6 (right
hemisphere), and was (nonsignificantly) larger in amplitude over the right hemisphere.
A positive ERP (P190) was recorded from the fronto-central scalp. P190 was also face-specific,
and is similar to the vertex-positive potential recorded in previous ERP studies of face
perception (e.g., Jeffreys, 1993; Seeck & Grüsser, 1992). The relationship between N170 and
P190 is unclear. It is possible that N170 and P190 reflect a dipolar pattern due to a single neural
generator, but latency differences and the reported sensitivity of the vertex-positive potential
to animal faces and other complex stimuli (Jeffreys & Tukmachi, 1992) suggest that they may
reflect different neural activity. In this and the following experiments, target stimuli generated
a robust P300 (reviewed by Donchin & Coles, 1988). P300 was not elicited by faces or other
nontarget stimuli, demonstrating that the subjects were correctly performing the target
detection task.
All of the faces in the present study were unfamiliar to the subjects and their physiognomic
features were irrelevant to the task. It is unlikely that subjects were engaged in an extensive
process of face recognition, suggesting that the neural activity associated with N170 was
activated automatically, perhaps reflecting mandatory processing of facial information. Such
a mechanism might provide the neural basis for the “structural encoding” stage of face
processing suggested by Bruce and Young (1986). Experiment 2 sought to examine further
this hypothesis.
EXPERIMENT 2
Experiment 1 demonstrated that human faces evoke N170. In Experiment 2 we sought to
determine whether N170 is specific for faces per se, or could be evoked by any familiar body
part such as hands. The tuning of N170 for human faces was also tested by comparing N170s
elicited by human and animal faces.
In this experiment, CARS were designated as targets and subjects were required to keep a mental
count of the number presented. Nontarget categories included human FACES, ANIMAL FACES, and human
HANDS. All ANIMAL FACES had distinct eyes. Faces of nonhuman primates were excluded because of
their similarity to human faces. Individual items of FURNITURE were also presented to test further
the apparent lack of responsiveness of N170 to inanimate stimulus categories.
Results
As in Experiment 1, human FACES elicited a robust N170 (at 176 msec) that was slightly larger
at T6 than T5 (Fig. 2). The amplitude at T6 of N170 elicited by FACES was 2.18 µV, significantly
more negative than the negative-going ERPs elicited by ANIMAL FACES, human HANDS, and FURNITURE
(0.30, 0.11, and 0.71 µV, respectively). Repeated-measures ANOVA showed that these
amplitude differences were statistically significant [F(4,44) = 13.7, MSe = 7.3, p < 0.0001].
Post-hoc comparisons revealed that while the N170 elicited by FACES was significantly larger
than that of all other stimulus categories, ANIMAL FACES, human HANDS, and FURNITURE did not
significantly differ among themselves. The small variation between the peak latency of N170
across stimulus conditions was not significant [F(4,44) = 1.9, MSe = 136.3, p > 0.12].
Finally, it is interesting to note that despite the large difference in N170 amplitude between
human FACES and ANIMAL FACES, at longer latencies (300–600 msec) the ERPs elicited by both types
of faces were similar compared to ERPs elicited by nonface stimuli (Fig. 2).
Discussion
The present experiment demonstrated that the N170 elicited by human FACES was significantly
larger than that elicited by ANIMAL FACES and human HANDS. In the N170 latency range, the negative-
going ERPs elicited by ANIMAL FACES, HANDS, and FURNITURE were statistically indistinguishable. The
specificity of N170 to human FACES and its insensitivity to human HANDS suggest that it reflects
the activity of cells tuned to detect human faces and/or face components rather than being a
general detector of information about body parts.
EXPERIMENT 3
Normal subjects and prosopagnosic patients are worse at recognizing inverted faces compared
to upright faces (reviewed by Benton & Van Allen, 1972; Valentine, 1988). If N170 reflects
activity associated with face recognition, it should therefore be affected by face inversion.
However, if N170 reflects activity in a neural circuit tuned to detect facial features prior to face
recognition, it may not be sensitive to face inversion.
A 28-electrode montage was used in this experiment to provide a more complete description
of the voltage distribution of N170 and related ERPs. As in Experiment 1, subjects were
instructed to count mentally the number of target BUTTERFLIES. FACES and CARS were presented in both
upright and inverted positions.
Results
The ERPs elicited by upright FACES in the present experiment were similar to those observed for
FACES in Experiments 1 and 2. Figure 3A–D presents the distribution of voltage in color-coded
topographic maps for selected latencies for the ERPs elicited by upright FACES. The distribution
at 88 msec corresponds to N100 (Fig. 3A). The distribution at 128 msec (Fig. 3B) corresponds
to the positive peak recorded from the lateral posterior scalp that preceded N170. A broadly
distributed frontal midline negative region was also evident at the same latency. The
distribution at 172 msec (Fig. 3C) corresponds to the N170 elicited by FACES in Experiments 1
and 2. A focal negative region over the posterior lateral scalp was present bilaterally, but was
largest at T6. This hemispheric asymmetry was significant (p < 0.05). The positive focus over
the anterior midline corresponded to the onset of P190. The distribution at 230 msec (Fig. 3D)
corresponds to a positive peak in the ERPs recorded from the posterior lateral scalp. The
distribution at 230 msec was similar to that at 128 msec (Fig. 3B) except that the positive
voltage extended more posteriorly at this later latency.
The voltage distributions for the other stimulus categories of Experiment 3 were virtually
identical prior to 170 msec. The focal negative distribution maximal at T6 was obtained only
for upright FACES and INVERTED FACES. The N170 elicited by INVERTED FACES was distributed similarly to
that elicited by upright FACES and was significantly more negative at T6 (3.18 and 3.93 µV,
for upright FACES and INVERTED FACES, respectively) than at T5 (2.11 and 2.2 µV, respectively) [F
(1,11) = 5.32, MSe = 18.2, p < 0.05]. At T6, INVERTED FACES elicited a slightly larger N170 than
upright FACES, while at T5 the N170 elicited by FACES and INVERTED FACES was equal in amplitude (Fig.
4). At both T5 and T6, the N170 elicited by INVERTED FACES peaked 10 msec later than that elicited
by upright FACES. The latency difference was statistically significant [F(1,11) = 9.00, MSe = 147,
p < 0.012]. CARS and INVERTED CARS elicited identical ERPs with no N170.
Discussion
Both upright FACES and INVERTED FACES elicited an N170 while CARS, INVERTED CARS, and BUTTERFLIES did not.
The scalp voltage distribution for all nontarget stimuli was similar up until the latency of N170
for FACES and INVERTED FACES. At this point a focal negative region was observed bilaterally over the
lateral posterior scalp, which was significantly larger in amplitude over the right hemisphere.
A positive region over the frontocentral midline region corresponding to P190 made this
distribution appear dipolar as noted in the discussion of Experiment 1.
The overall similarity between the N170 elicited by FACES and INVERTED FACES suggested that N170
was not related to an attempt to recognize particular faces. It is instead congruent with a view
that the neural mechanism generating N170 is involved in the structural analysis of visual
stimuli leading to the categorization of a pictorial stimulus as “face.” N170 to INVERTED FACES was
delayed relative to FACES. A delayed N200 for inverted relative to upright faces was also recorded
subdurally in one patient by Allison et al. (1994b). In that patient, however, the difference was
found only in the right hemisphere. Allison et al. (1994b) also found an amplitude difference
in the right posterior fusiform gyrus where the amplitude of N200 elicited by inverted faces
was smaller than that elicited by upright faces. We did not find that the N170 elicited by INVERTED
FACES was smaller than that elicited by FACES; indeed, the N170 to INVERTED FACES was somewhat larger
in amplitude.
The similarities in the N170 elicited by INVERTED FACES and upright FACES raise the question of N170’s
sensitivity to the integrity of facial components. Experiment 4 examined this question by
comparing the N170 elicited by full-face stimuli with those elicited by isolated face
components.
EXPERIMENT 4
As in Experiments 1 and 3, subjects kept a mental count of the number of BUTTERFLIES appearing
among non-target categories including full human FACES and isolated EYES, LIPS, and NOSES. We
compared the ERPs elicited by FACES to the ERPs elicited by each of the face components.
Results
The ERPs elicited by FACES were similar in shape and scalp distribution to those elicited in the
previous experiments. Again N170 was elicited by FACES with a focal distribution at posterior-
lateral sites (T5 and T6). As evident in Figure 5, the peak latency of N170 elicited by EYES (186
msec) was later than that elicited by FACES (173 msec), and its amplitude was larger both at T5
(4.2 vs. 2.8 µV) and T6 (5.9 vs. 3.6 µV). The negative ERPs elicited by LIPS and NOSES were
considerably later (215 and 210 msec, respectively) and were smaller (3.1 and 2.0 µV,
respectively) than the N170s elicited by FACES and EYES. Repeated-measures ANOVA showed
that both the latency and the amplitude differences were significant [F(3,33) = 26.43, MSe =
293, p < 0.0001, and F(3,33) = 8.30, MSe = 17.3, p < 0.0001, for latencies and amplitudes,
respectively]. Post-hoc comparisons revealed that the N170 latencies elicited at T6 by FACES and
EYES were not significantly different, but both were significantly shorter than the latencies of the
negative ERPs elicited by LIPS and NOSES. EYES elicited a significantly larger N170 at T6 than FACES,
LIPS, or NOSES; FACES elicited a significantly larger N170 than the ERPs elicited by NOSES or by LIPS.
Figure 6A–D presents voltage distributions corresponding to the peak latencies of the negative
ERPs elicited by FACES, EYES, LIPS, and NOSES, respectively. The distribution for FACES (Fig. 6A) is
shown for 172 msec and is similar to that obtained for FACES in Experiment 3 (Fig. 3C) at the
same latency. The difference between the amplitude of N170 at T5 (2.8 µV) and T6 (3.6
µV) was statistically significant (p < 0.05). The distribution for EYES (Fig. 6B) at 188 msec was
similarly asymmetric (p = 0.02). The scalp distributions for LIPS (212 msec) and NOSES (232 msec)
were different from those obtained for FACES and EYES, but were similar to each other. For both
LIPS and NOSES, the negative focus at T6 was less extensive and the positive midline focus was
centered more posteriorly at Pz. The amplitude asymmetry between T6 and T5 was not
statistically significant for either LIPS or NOSES.
Discussion
The pattern of responses for faces and face components supports the conclusion of Experiment
2 that N170 reflects activity in a neural mechanism involved in the early detection of structural
features characterizing human faces. The delayed and attenuated ERPs to LIPS and NOSES relative
to the vigorous response to EYES are consistent with previous studies of the relative salience of
facial features (reviewed by Shepherd, Davies, & Ellis, 1981). Most of these studies were
concerned with the recognition of particular faces and showed that the facial outline was the
most important feature followed in decreasing order of importance by the eyes, mouth, and
nose (Davies, Ellis, & Shepherd, 1977; Fraser & Parker, 1986; Haig, 1986). The present results
complement those findings by suggesting that eyes are the most representative facial feature
even when the response does not include the recognition of particular faces (cf. Bruce, 1988).
The scalp distributions for FACES and EYES were similar to each other, but different from those
elicited by LIPS and NOSES. This may indicate similarly located and oriented neural generators for
the N170s elicited by whole FACES and isolated EYES, whereas the scalp distributions for LIPS and
NOSES indicate a different configuration of neural generators.
The significantly larger N170 evoked by isolated EYES compared to whole FACES argues against
the notion that face integrity is the critical factor in the appearance of N170. Indeed, Experiment
4 raises the possibility that the neural mechanism generating N170 may be specific to eyes—
whether present in isolation or in face context. If this is so, it is possible that the presence of
additional face components in the same display modulates the response to the eyes. Experiment
5 was designed to examine these questions further by presenting eyes in a distorted face context.
EXPERIMENT 5
This experiment addressed whether a normal face context is necessary to
elicit N170. Face components were presented as in Experiment 4, but DISTORTED FACES, created by
dislocating inner face components (Fig. 12), were substituted for normal FACES. If N170 requires
integrity of face components for its appearance, then dislocating those components should
diminish its amplitude. If, however, N170 primarily reflects the detection of eyes, then the
presence of eyes among the dislocated components should be sufficient to elicit N170.
Our finding in Experiment 4 that isolated eyes evoked a larger N170 than normal faces (which
also contained eyes) indicates that if N170 primarily reflects eye-specific processing, then the
context in which eyes are presented is important. Perhaps the presence of normally configured
face features diminishes the activity of the eye detector through priming or a related process.
If so, distorting the normal configuration of face features may eliminate this priming effect and
the resulting N170 may be equal in amplitude to that elicited by isolated EYES.
Results
ERPs elicited by each stimulus category are shown in Figure 7. The N170 amplitudes for
DISTORTED FACES (2.9 µV at T6, 2.0 µV at T5) and EYES (3.2 µV at T6, 2.3 µV at T5) were
significantly larger than those for LIPS (0.70 µV at T6, 1.0 µV at T5) and NOSES (1.3 µV at
T6, 1.0 µV at T5) [F(3,33) = 7.78, MSe = 10.7, p < 0.001]. While the N170s elicited by EYES
were slightly larger and later than those elicited by DISTORTED FACES, post-hoc comparisons revealed
no significant difference in either measure. For both categories, N170 amplitudes were greater
at T6 than T5, but this difference also did not reach statistical significance.
Discussion
Faces in which the inner components were dislocated elicited a robust N170. It seems,
therefore, that N170 is not dependent upon the spatial integrity of facial components as would
be predicted for a holistic face-processing mechanism. This result underscores our earlier
deduction that N170 is not related to face recognition per se, but to the detection of facial
features. EYES elicited a larger N170 than DISTORTED FACES, but unlike in Experiment 4, this difference
was not statistically reliable. As predicted, this may indicate that the response to eyes presented
within a distorted face was less influenced by the other face components than when presented
within a normally configured face. This conclusion is tempered by the overall smaller N170
amplitudes obtained in the present experiment than those obtained in Experiment 4, including
N170 amplitude to the identical category of isolated EYES. However, as the pattern of amplitudes
for N170 within experiments has been consistent despite variation in its absolute amplitude
across experiments, we believe that between-subject variability is the likely cause for these
differences.
GENERAL DISCUSSION
The present study was designed to examine, by non-invasive scalp recordings in normal
subjects, some of the functional characteristics of the face-specific perceptual mechanism
suggested by subdural recordings in patients. In a series of experiments we found a posterior-
lateral N170, which, like the subdural N200, was elicited by human faces but not by animal
faces, cars, scrambled faces, scrambled cars, items of furniture, or human hands.
N170 was larger over the right than over the left hemisphere in all experiments, although this
difference did not always reach statistical significance. When tested across all subjects, N170
at T6 (3.23 µV) was significantly larger than at T5 (2.41 µV) [t(47) = 2.845, p < 0.01]. The
peak latency at T6 (173 msec) was similar to that at T5 (171 msec) [t(47) = 1.028, p > 0.30].
N170 was as large for inverted as for upright faces. An even larger N170, similarly distributed
over the scalp, was elicited by isolated eyes. In contrast to eyes, the negative ERPs elicited by
isolated lips and noses were significantly smaller and delayed. Although still largest in
amplitude over the posterior-lateral scalp, the ERP distribution for isolated lips and noses
suggested a different configuration of neural generators than for faces and eyes.
In all experiments, stimuli were presented while subjects monitored the screen for the
appearance of a target unrelated to faces. These attended targets did not evoke an appreciable
N170. Since there were no task-related differences among the nontarget stimulus categories,
any specific response to faces or to face components was likely due to a neural mechanism
tuned to detect human physiognomic features automatically. The existence of such a
mechanism is suggested by data showing that newborn infants look at human faces more than
any other complex visual stimulus (Goren, Sarty, & Wu, 1975).
Behavioral studies of human face processing have focused on the identification of particular
faces either in regard to their semantic representation, as in studies in which famous faces had
to be distinguished from unfamiliar faces (e.g., Bruce, 1979; Bruce & Valentine, 1985, 1986;
Valentine & Bruce, 1986a, 1986b) or in regard to recently formed representations in episodic
memory (e.g., Shapiro & Penrod, 1986). Similarly, ERP studies of face recognition have
investigated the electrophysiological correlates of face familiarity and unfamiliarity (Barrett,
Rugg, & Perrett, 1988; Begleiter, Porjesz, & Wang, 1995; Hautecœur, Debruyne, Forzy,
Gallois, Hache, & Dereux, 1993; Renault, Signoret, Debruille, Breton, & Bolgert, 1989; Small,
1986; Smith & Halgren, 1987) and face priming and recognition (Barrett & Rugg, 1989; Hertz,
Porjesz, Begleiter, & Chorlian, 1994; Schweinberger & Sommer, 1991). Less attention has been
directed to the investigation of basic mechanisms for face detection and categorization among
other complex visual stimuli (for reviews see Bruce, 1990a,b). Perhaps one reason for this
emphasis is that even patients with severe prosopagnosia can distinguish faces from other visual
stimuli (Farah, 1990). Nevertheless, an influential model of face perception assumes the
existence of a “structural encoding” stage at which physiognomic information is detected and
initially processed independently of recognition of personal identity or facial expression (Bruce
& Young, 1986). It is possible that this function is carried out by a dedicated neural mechanism;
the present results suggest that N170 might reflect part of its operation.
Many studies have demonstrated that recognition of faces is impaired by inversion more than
recognition of other visual stimuli (reviewed by Valentine, 1988). In light of these findings the
similarity in amplitude and spatial distribution between the N170 elicited by upright and
inverted faces suggests that the function of the structural encoder is to detect and initially
encode face structural information without being directly involved in the process of face
recognition. Moreover the similarity in latency, amplitude, and spatial distribution between the
N170s elicited by faces, inverted faces, distorted faces, and isolated eyes suggests that they
may be fully activated by salient face characteristics. This pattern suggests that the structural
encoder is sensitive primarily to the presence of these characteristic features rather than to their
orientation or their interrelationship in space. Other findings suggested that spatial-configural
processing may be less useful when faces are inverted (Sergent, 1984; Young, Hellawell, &
Hay, 1987), yet the general similarity of the N170 for upright and inverted faces supports the
view that at this processing stage both stimulus types are processed similarly (see also
Valentine, 1988; Valentine & Bruce, 1988).
The relationship between the scalp-recorded N170 and the subdurally recorded N200 (Allison
et al., 1994a, 1994b) is uncertain. The approximate 20 msec longer latency of N200 in patients
with implanted electrodes could be attributed to differences between neurologically normal
subjects and epileptic patients (e.g., anticonvulsant medications). However, the fusiform and
inferior temporal region from which N200 is recorded is inferior to the T5 and T6 electrodes
(from which N170 is best recorded), which are usually positioned over the middle temporal
gyrus (Homan, Herman, & Purdy, 1987). It is therefore unlikely that the orientation of the
neurons producing a cortical surface negativity over the ventral brain surface could also
produce simultaneously a negative ERP over more superior scalp. Recent functional magnetic
resonance imaging studies of face perception (Clark et al., 1994; Puce et al., 1995)
demonstrated that faces often activate cortex within the occipitotemporal sulcus, which
separates the fusiform and inferior temporal gyri. This deep sulcus is oriented obliquely, and
neural generators within the bank of the sulcus would be oriented toward the middle temporal
scalp. These considerations suggest that the subdural N200 is due to radial generators located
mainly in the fusiform gyrus, whereas the scalp N170 is due to oblique generators located
mainly within the adjacent occipitotemporal sulcus. Inverse generator models (e.g., Probst,
Plendl, Paulus, Wist, & Scherg, 1993) to determine the locations and orientations of the neural
generators of N170 will be required to test this assumption quantitatively.
The proposed differences in location of the N170 and N200 generators, and differences in their
responsivity to faces and face components, suggest an extension of the “structural encoding”
model reviewed above. Specifically, these results suggest that a portion of cortex within the
occipitotemporal sulcus is not simply a lateral portion of the fusiform face-specific processor,
but instead constitutes a separate eye-specific processor. The hypothesis is summarized
schematically in Figure 8. In this model, N200 recorded from the fusiform/inferior temporal
region may reflect the operation of the “structural encoding” stage of face processing, whereas
N170 may reflect the operation of a putative eye processor whose function may be to analyze
the direction of gaze and other information conveyed by the eyes. It seems unlikely that a large
population of cells responsive to the eyes (judging by the amplitude of N170 to eyes alone) is
involved only in face identification, particularly since the eyes are less important in this task
than are other facial features such as contour, hair line, and hair style (Haig, 1984,
1986; Shepherd et al., 1981). In monkeys, cells in the superior temporal sulcus are sensitive to
the eyes and to the direction of gaze (Perrett, Hietanen, Oram, & Benson, 1992; Perrett et al.,
1988). Perrett et al. (1992) note that cells sensitive to face identity may be more frequent in
inferior temporal cortex, whereas cells sensitive to gaze direction are more frequent in the
superior temporal sulcus. It is plausible that a similar spatial differentiation of cell types exists
in the human temporal lobe. Indeed, in subdural recordings, locations lateral to locations from
which N200 is recorded often respond better to eyes alone than to the entire face (Allison et
al., 1994c). Whether this lateral region extends into the occipitotemporal sulcus or onto the
lateral surface of the temporal lobe is currently unknown.
If N170 primarily reflects eye-specific neural activity, then the context in which the eyes appear
modulates that activity. Isolated eyes evoked a significantly larger N170 than a normal full
face (Experiment 4) but not than a distorted face (Experiment 5). Inverted faces evoked a larger
N170 than upright faces, although this difference was not significant (Experiment 3). This
pattern suggests that the presence of other face features in appropriate spatial orientation
diminishes the N170 evoked by the eyes contained within the face. Increasing amounts of
feature dislocation or isolation causes an increase in N170. This electrophysiological result is
reminiscent of behavioral results demonstrating that processing of a target embedded within a
face is impeded, suggesting that a gestalt such as a face is first processed holistically and inhibits
lower-level feature processing (e.g., Mermelstein, Banks, & Prinzmetal, 1979; Suzuki &
Cavanaugh, 1995). This modulation may reflect priming of an eye detector by a holistic face
processor, or inhibitory interactions among face features. The lack of N170 to animal faces
(Experiment 2) is problematic because the animal faces contained distinct eyes. It may be that
N170 is tuned to process human eyes, perhaps by detecting the contrast between the iris and
white sclera. Alternatively, the configuration of other animal face features may inhibit this
putative eye processor. None of the present experiments was designed explicitly to test the eye-
processor hypothesis. Such experiments are now underway and may help to resolve these
issues.
METHODS
Subjects
Volunteer subjects were recruited from among the undergraduate and graduate students of the
Hebrew University. Twelve subjects participated in each experiment, each of which lasted
approximately 2 hr. Subjects were tested singly and none participated in more than one
experiment.
Stimuli
Original stimuli were photographs that were digitally scanned, processed by graphics software,
and presented as gray-scale images on a computer monitor. In each experiment, lists of 330
images were presented in three consecutive equal blocks. Each image was exposed for 250
msec with an interval of 1500 msec between the onsets of successive images. Stimulus timing
was controlled by the MEL software system (Psychology Software Tools, Pittsburgh, PA).
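As a concrete illustration of these timing parameters (250 msec exposure, 1500 msec between image onsets, 330 images in three equal blocks), the sketch below reimplements the presentation loop in Python using PsychoPy. This is an assumption for illustration only — the original experiments used MEL — and the file names are hypothetical placeholders.

```python
# Illustrative sketch only: the original study used the MEL system, not PsychoPy.
# Timing parameters follow the text: 250 msec exposure, 1500 msec between the
# onsets of successive images, 330 images presented in three equal blocks.
from psychopy import visual, core

EXPOSURE_S = 0.250   # each image shown for 250 msec
SOA_S = 1.500        # 1500 msec between successive image onsets
N_BLOCKS = 3

def run_list(image_paths, win):
    """Present one list of gray-scale images with fixed exposure and SOA."""
    block_size = len(image_paths) // N_BLOCKS
    for b in range(N_BLOCKS):
        for path in image_paths[b * block_size:(b + 1) * block_size]:
            stim = visual.ImageStim(win, image=path)
            onset = core.getTime()
            stim.draw()
            win.flip()                                   # image appears
            core.wait(EXPOSURE_S)
            win.flip()                                   # blank screen after 250 msec
            core.wait(max(0.0, SOA_S - (core.getTime() - onset)))
        # (a rest break between blocks would go here)

if __name__ == "__main__":
    window = visual.Window(color="gray", units="pix", fullscr=False)
    run_list(["face_001.png"] * 330, window)  # placeholder file names (hypothetical)
    window.close()
```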
Task
Subjects were engaged in a target detection task in which stimuli from one stimulus category
were designated as targets. The subjects were required to keep a mental count of the number
of targets presented in each run and report that count at the end. Targets were presented on
approximately 11% of trials.
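A minimal sketch of how such a trial sequence could be assembled is shown below. The exact randomization procedure is not described in the text, so the helper (build_sequence) and its defaults are assumptions that only reproduce the stated constraints: roughly 11% targets, randomized order, and no repeated stimuli.

```python
# Sketch of assembling a trial sequence with roughly 11% targets, randomized
# order, and no repeated stimuli. The function and its defaults are assumptions;
# the paper does not describe the randomization procedure itself.
import random

def build_sequence(nontarget_images, target_images, n_trials=330, target_rate=0.11):
    """Return a shuffled list of (image, is_target) pairs with no repeats.

    Both image pools must contain at least as many unique items as are drawn.
    """
    n_targets = round(n_trials * target_rate)        # round(330 * 0.11) = 36 trials
    n_nontargets = n_trials - n_targets
    trials = [(img, True) for img in random.sample(target_images, n_targets)]
    trials += [(img, False) for img in random.sample(nontarget_images, n_nontargets)]
    random.shuffle(trials)
    return trials

# The count the subject should report at the end of a run is simply
# sum(is_target for _, is_target in trials), i.e., about 11% of 330 trials.
```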
EEG Recording
EEG was recorded simultaneously from 14 (Experiment 1) or 28 scalp locations using tin
electrodes attached to electrode caps (Electro-Cap International, Ohio). Additional channels
recorded the electrooculogram (EOG) from the supraorbital ridge and outer canthus of the left
eye. All electrodes were referenced to an electrode placed on the tip of the nose. A ground
electrode was placed on the forehead. The EEG was amplified by battery-operated amplifiers
with a gain of 20,000 through a bandpass of 0.01–100 Hz. Electrode impedances were below
5 kΩ.
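The 0.01–100 Hz bandpass was implemented in the recording hardware. Purely for comparison, a rough digital equivalent is sketched below using SciPy (an assumption, not part of the original pipeline); it would be applied to continuous EEG digitized at the 250 Hz rate described in the next paragraph.

```python
# Rough digital analogue of the 0.01-100 Hz analog bandpass (an assumption;
# in the original study the filtering was done in the recording hardware).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250.0  # Hz

def bandpass_001_100(continuous_eeg):
    """Zero-phase 2nd-order Butterworth bandpass, 0.01-100 Hz.

    continuous_eeg: array of shape (n_channels, n_samples).
    """
    sos = butter(2, [0.01, 100.0], btype="bandpass", output="sos", fs=FS)
    return sosfiltfilt(sos, continuous_eeg, axis=-1)

# Example with synthetic data: 14 channels, one minute of recording.
fake_eeg = np.random.randn(14, int(FS * 60))
filtered = bandpass_001_100(fake_eeg)
```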
EEG epochs were acquired beginning 100 msec prior to stimulus onset and continuing for 1024
msec. These epochs were digitized at a rate of 250 Hz (4 msec/sample/channel) and stored on
disk for off-line averaging. Codes synchronized to stimulus delivery were used to selectively
average epochs associated with different stimulus types. During this averaging procedure,
epochs contaminated with EOG artifacts were eliminated using root mean square EOG values
as criteria.
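The epoching, selective averaging, and EOG-based rejection just described can be summarized in the following sketch. The array layouts, the baseline correction to the prestimulus interval, the reading of the epoch as −100 to +1024 msec around onset, and the 50 µV RMS criterion are assumptions; the paper specifies only the general procedure.

```python
# Minimal sketch of the epoching, EOG-based rejection, and selective averaging
# described above. The 50 uV RMS criterion, the baseline correction, and the
# interpretation of the epoch as -100 to +1024 msec around onset are assumptions.
import numpy as np

FS = 250                  # Hz, i.e., 4 msec per sample
PRE = 100 * FS // 1000    # 25 samples before stimulus onset
POST = 1024 * FS // 1000  # 256 samples after stimulus onset

def average_erps(eeg, eog, onsets, codes, eog_rms_criterion=50.0):
    """Average artifact-free epochs separately for each stimulus code.

    eeg: (n_channels, n_samples); eog: (n_samples,); onsets: sample indices of
    stimulus onsets; codes: stimulus-category code for each onset.
    """
    erps = {}
    for code in set(codes):
        kept = []
        for onset, c in zip(onsets, codes):
            if c != code:
                continue
            epoch = eeg[:, onset - PRE:onset + POST]
            eog_epoch = eog[onset - PRE:onset + POST]
            # Reject epochs whose EOG root-mean-square exceeds the criterion.
            if np.sqrt(np.mean(eog_epoch ** 2)) > eog_rms_criterion:
                continue
            # Baseline-correct to the 100 msec prestimulus interval.
            kept.append(epoch - epoch[:, :PRE].mean(axis=1, keepdims=True))
        erps[code] = np.mean(kept, axis=0) if kept else None
    return erps
```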
The mean and peak amplitudes and peak latencies of particular ERP components were obtained
within bounding intervals by computer program for each stimulus condition for each subject.
Significant differences among these measures were tested using repeated-measures analysis
of variance (ANOVA). Post-hoc comparisons were evaluated using Tukey’s test (Tukey A).
In addition, across-subjects (grand-averaged) mean ERPs were computed and used to make
comparison plots and topographic maps. The topographic maps represent the ERP voltage
distribution flattened onto two dimensions of space with the amplitude at any time represented
by color coding. Voltages between electrode locations were interpolated using a spherical
spline technique.
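Computing the grand averages themselves is straightforward, as sketched below; the spherical spline interpolation used for the topographic maps is not reproduced here.

```python
# Grand averaging across subjects, as described above. The color-coded maps
# additionally require interpolating voltages between electrode positions with
# a spherical spline, which is not reproduced in this sketch.
import numpy as np

def grand_average(subject_erps):
    """subject_erps: list of per-subject arrays, each (n_channels, n_samples)."""
    return np.mean(np.stack(subject_erps, axis=0), axis=0)

# e.g. (hypothetical names):
# grand_faces = grand_average([erps_by_subject[s]["FACES"] for s in subjects])
```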
EXPERIMENT 1
Five categories of stimuli used by Allison et al. (1994a) were presented (Fig. 9): (1) FACES, (2)
SCRAMBLED FACES, (3) CARS, (4) SCRAMBLED CARS, and (5) BUTTERFLIES. Equal numbers of male and female faces
were scanned from a college yearbook. None had eyeglasses or jewelry, and all males were
clean shaven. Front views of cars were scanned from automotive magazines. Images of cars
and faces were scrambled by digitally cutting the images into rectangular sections and then
rearranging the sections until the identity of the resulting image could not be discerned. This
procedure retained the luminance of the original image, but the random juxtaposition of
rectangular edges tended to increase local spatial contrast. BUTTERFLIES, the target category, were
scanned from a field guide. The order of presentation of images drawn from the different
stimulus categories was randomized and no stimulus was repeated.
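The scrambling procedure lends itself to a short sketch, shown below. The section size and the use of NumPy are assumptions (the paper does not specify how the rectangular sections were sized); the code simply cuts the image into rectangles and reassembles them in random order, which preserves mean luminance while introducing high-contrast edges.

```python
# Illustrative sketch of the scrambling procedure: cut the image into rectangular
# sections and reassemble them in random order. The 16 x 16 pixel section size
# and the use of NumPy are assumptions. Because the same pixels are merely
# rearranged, mean luminance is preserved, while the new section boundaries add
# the high-contrast edges noted in the text.
import numpy as np

def scramble(image, block=(16, 16), rng=None):
    """Return a block-scrambled copy of a 2-D gray-scale image array."""
    rng = rng if rng is not None else np.random.default_rng()
    bh, bw = block
    h, w = image.shape
    # Crop so the image tiles evenly into rectangular sections.
    img = image[: h - h % bh, : w - w % bw]
    rows, cols = img.shape[0] // bh, img.shape[1] // bw
    blocks = [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(rows) for c in range(cols)]
    order = rng.permutation(len(blocks))
    return np.vstack([np.hstack([blocks[order[r * cols + c]] for c in range(cols)])
                      for r in range(rows)])
```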
EXPERIMENT 2
Five categories of stimuli were presented (Fig. 10): (1) human FACES, (2) ANIMAL FACES, (3) human
HANDS, (4) FURNITURE, and (5) CARS. The human FACES were the same as in the previous experiments.
ANIMAL FACES were front views scanned from books and magazines. Animals without front-set
eyes and nonhuman primates were excluded. In this study, CARS were presented on 11% of the
trials and comprised the target category. All other categories were presented with equal
frequency.
EXPERIMENT 3
Five categories of stimuli were presented: (1) FACES, (2) INVERTED FACES, (3) CARS, (4) INVERTED CARS, and
(5) BUTTERFLIES. Stimuli were identical to those used in Experiment 1 except that the categories of
SCRAMBLED FACES and SCRAMBLED CARS were replaced by inverted (by 180°) representations of the same
faces and cars used in categories 1 and 3. BUTTERFLIES were the target category. Stimulus
proportions were as in Experiment 1.
EXPERIMENT 4
Five categories of stimuli were presented (Fig. 11): (1) FACES, (2) EYES, (3) MOUTHS, (4) NOSES, and (5)
BUTTERFLIES. FACES and BUTTERFLIES were as in the previous experiments. Face components (eyes,
mouths, and noses) were cropped from the images used in the FACES category and were centered
within a gray enclosing rectangle identical in size to the rectangle enclosing the complete faces.
All categories were presented in equal proportion except for the target BUTTERFLIES.
EXPERIMENT 5
The only difference between the stimuli used in Experiment 5 and those used in Experiment 4
was that a DISTORTED FACES category substituted for the FACE category. Distorted faces were
constructed using computer graphics to displace and dislocate inner facial components of the
regular faces used in the other experiments. The displacement of inner components was done
without changing their vertical orientation and without affecting the face contour (Fig. 12).
The subjects in this experiment were 12 undergraduates who did not participate in the previous
experiments. Their task and all other methodological details were identical to those used in
Experiment 4.
Acknowledgments
This work was supported by the United States-Israel Binational Science Foundation, the Department of Veterans
Affairs, and by NIMH Grant MH-05286.
REFERENCES
Allison T, Ginter H, McCarthy G, Nobre AC, Puce A, Luby M, Spencer DD. Face recognition in human
extrastriate cortex. Journal of Neurophysiology 1994a;71:821–825. [PubMed: 8176446]
Allison T, McCarthy G, Belger A, Puce A, Luby M, Spencer DD, Bentin S. What is a face?
Electrophysiological responsiveness of human extrastriate visual cortex to human faces, face
components, and animal faces. Society for Neuroscience Abstracts 1994b;20:316.
Allison T, McCarthy G, Nobre AC, Puce A, Belger A. Human extrastriate visual cortex and the perception
of faces, words, numbers, and colors. Cerebral Cortex 1994c;5:544–554.
Barrett SE, Rugg MD. Event-related potentials and the semantic matching of faces. Neuropsychologia
1989;27:913–922. [PubMed: 2771030]
Barrett SE, Rugg MD, Perrett DI. Event-related potentials and the matching of familiar and unfamiliar
faces. Neuropsychologia 1988;1:105–117. [PubMed: 3362336]
Baylis GC, Rolls ET, Leonard CM. Selectivity between faces in the response of a population of neurons
in the cortex in the superior temporal sulcus of the monkey. Brain Research 1985;342:91–102.
[PubMed: 4041820]
Baylis GC, Rolls ET, Leonard CM. Functional subdivisions of the temporal lobe neocortex. Journal of
Neuroscience 1987;7:330–342. [PubMed: 3819816]
Begleiter H, Porjesz B, Wang W. Event-related brain potentials differentiate priming and recognition to
familiar and unfamiliar faces. Electroencephalography and Clinical Neurophysiology 1995;94:41–49.
[PubMed: 7530638]
Benton AL, Van Allen MW. Prosopagnosia and facial discrimination. Journal of Neurological Sciences
1972;15:167–172.
Bodamer J. Die Prosop-Agnosie. Archiv für Psychiatrie und Nervenkrankheiten 1947;179:6–54.
Bruce CJ. Face recognition by monkeys: Absence of an inversion effect. Neuropsychologia 1982;20:515–
521. [PubMed: 7145077]
Bruce CJ, Desimone R, Gross CG. Visual properties of neurons in a polysensory area in superior temporal
sulcus of the macaque. Journal of Neurophysiology 1981;46:369–384. [PubMed: 6267219]
Bruce V. Searching for politicians: An information-processing approach to face recognition. Quarterly
Journal of Experimental Psychology 1979;31:373–395.
Bruce, V. Recognizing faces. London: Lawrence Erlbaum; 1988.
Bruce, V. Face recognition. In: Eysenck, M., editor. Cognitive psychology: An international review. New
York: Wiley; 1990a. p. 221-263.
Bruce V. Perceiving and recognising faces. Mind and Language 1990b;5:342–364.
Bruce V, Valentine T. Identity priming in the recognition of familiar faces. British Journal of Psychology
1985;76:363–383.
Bruce V, Valentine T. Semantic priming of familiar faces. Quarterly Journal of Experimental Psychology
1986;38A:125–150.
Bruce V, Young A. Understanding face recognition. British Journal of Psychology 1986;77:305–327.
[PubMed: 3756376]
Clark VP, Keil K, Lalonde F, Maisog JM, Courtney SM, Karni A, Ungerleider LG, Haxby JM.
Identification of cortical processing areas for the perception of faces and locations using fMRI.
Society for Neuroscience Abstracts 1994;20:839.
Damasio, H. Human brain anatomy in computerized images. New York: Oxford University Press; 1995.
Damasio AR, Damasio H, Van Hoesen GW. Prosopagnosia: Anatomic basis and behavioral mechanisms.
Neurology 1982;32:331–341. [PubMed: 7199655]
Damasio AR, Tranel D, Damasio H. Face agnosia and the neural substrates of memory. Annual Review
of Neurosciences 1990;13:89–109.
Davies G, Ellis H, Shepherd J. Cue saliency in faces assessed by the ‘Photofit’ technique. Perception
1977;6:263–269. [PubMed: 866082]
De Renzi, E. Current issues in prosopagnosia. In: Ellis, HD.; Jeeves, MA.; Newcombe, F.; Young, A.,
editors. Aspects of face processing. Dordecht: Martinus Nijhoff; 1986.
Desimone R. Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience
1991;3:1–8.
Desimone R, Albright TD, Gross CG, Bruce CJ. Stimulus selective properties of inferior temporal neurons
in the macaque. Journal of Neuroscience 1984;4:2051–2062. [PubMed: 6470767]
Donchin E, Coles MGH. Is the P300 component a manifestation of context updating? Behavioral and
Brain Sciences 1988;11:357–374.
Farah, MJ. Visual agnosia. Cambridge, MA: MIT Press; 1990.
Fraser, KI.; Parker, D. Reaction time measures of feature saliency in a perceptual integration task. In:
Ellis, HD.; Jeeves, MA.; Newcombe, F.; Young, A., editors. Aspects of face processing. Dordrecht:
Martinus Nijhoff; 1986. p. 45-52.
Gloning I, Gloning K, Jellinger K, Quatember R. A case of prosopagnosia with necropsy findings.
Neuropsychologia 1970;8:199–204. [PubMed: 5522555]
Goren CC, Sarty M, Wu P. Visual following and pattern discrimination of face-like stimuli by newborn
infants. Pediatrics 1975;56:544–545. [PubMed: 1165958]
Gross, CG.; Rodman, HR.; Gochin, PM.; Colombo, MW. Inferior temporal cortex as a pattern recognition
device. In: Baum, E., editor. Proceedings of the 3rd NEC conference research symposium,
computational learning and cognition; Philadelphia. SIAM; 1993. p. 44-73.
Haig ND. The effect of feature displacement on face recognition. Perception 1984;13:505–512. [PubMed:
6535975]
Haig ND. Exploring recognition with interchanged facial features. Perception 1986;15:235–247.
[PubMed: 3797198]
Hasselmo ME, Rolls ET, Baylis GC, Nalwa V. The role of expression and identity in the face-selective
responses of neurons in the temporal visual cortex of the monkey. Behavioral Brain Research
1989;32:203–218.
Hautecœur P, Debruyne P, Forzy G, Gallois P, Hache JC, Dereux J-F. Potentiels évoqués visuels et
reconnaissance des visages: Influence de la célébrité et de I'expression émotionnelle. Revue
Neurologique (Paris) 1993;149:207–212.
Haxby, JV.; Grady, CL.; Horwitz, B.; Salerno, J.; Ungerleider, LG.; Mishkin, M.; Schapiro, MB.
Dissociation of object and spatial visual processing pathways in human extrastriate cortex. In: Gulyas,
B.; Ottoson, D.; Roland, PE., editors. Functional organisation of the human visual cortex. Oxford:
Pergamon Press; 1993. p. 329-340.
Hertz S, Porjesz B, Begleiter H, Chorlian D. Event-related potentials to faces: the effects of priming and
recognition. Electroencephalography and Clinical Neurophysiology 1994;92:342–351. [PubMed:
7517856]
Homan RW, Herman J, Purdy P. Cerebral location of international 10–20 system electrode placements.
Electroencephalography and Clinical Neurophysiology 1987;66:376–382. [PubMed: 2435517]
Jeffreys DA. The influence of stimulus orientation on the vertex positive scalp potential evoked by faces.
Experimental Brain Research 1993;96:163–172.
Jeffreys DA, Tukmachi ESA. The vertex-positive scalp potential evoked by faces and by objects.
Experimental Brain Research 1992;91:340–350.
Kay MC, Levin HS. Prosopagnosia. American Journal of Ophthamology 1982;94:75–80.
Kendrick KM, Baldwin BA. Cells in the temporal cortex of conscious sheep can respond preferentially
to the sight of faces. Science 1987;236:448–450. [PubMed: 3563521]
Landis T, Cummings JL, Christen L, Bogen JE, Imhof HG. Are unilateral right posterior cerebral lesions
sufficient to cause prosopagnosia? Clinical and radiological findings in six additional patients. Cortex
1986;22:243–252. [PubMed: 3731794]
McCarthy R, Warrington EK. Visual associative agnosia: a clinico-anatomical study of a single case.
Journal of Neurology, Neurosurgery, and Psychiatry 1986;49:1233–1240.
Meadows JC. The anatomical basis of prosopagnosia. Journal of Neurology, Neurosurgery, and
Psychiatry 1974;37:489–501.
Mermelstein R, Banks W, Prinzmetal W. Figural goodness effects in perception and memory. Perception
& Psychophysics 1979;26:472–480.
Michel F, Poncet M, Signoret JL. Les lésions responsables de la prosopagnosie sont-elles toujours
bilatérales? [Are the lesions responsible for prosopagnosia always bilateral?] Revue Neurologique
(Paris) 1989;146:764–770.
Nardelli E, Buonanno F, Coccia G, Fiaschi H, Terzian H, Rizzuto N. Prosopagnosia: Report of four cases.
European Neurology 1982;21:289–297. [PubMed: 7117317]
Overman WM, Doty RW. Hemispheric specialisation displayed by man but not macaques for analysis
of faces. Neuropsychologia 1982;20:113–128. [PubMed: 7088270]
Perrett DI, Hietanen JK, Oram MW, Benson PJ. Organization and functions of cells responsive to faces
in the temporal cortex. Philosophical Transactions of the Royal Society of London 1992;335:23–30.
[PubMed: 1348133]
Perrett DI, Mistlin AJ, Chitty AJ. Visual neurones responsive to faces. Trends in Neurosciences
1987;10:358–364.
Perrett DI, Mistlin AJ, Chitty AJ, Smith PA, Potter DD, Broennimann R, Harries M. Specialized face
processing and hemispheric asymmetry in man and monkey: evidence from single unit and reaction
time studies. Behavioral Brain Research 1988;29:245–258.
Perrett D, Rolls ET, Caan W. Visual neurons responsive to faces in the monkey temporal cortex.
Experimental Brain Research 1982;47:329–342.
Probst T, Plendl H, Paulus W, Wist ER, Scherg M. Identification of the visual motion area (area V5) in
the human brain by dipole source analysis. Experimental Brain Research 1993;93:345–351.
Puce A, Allison T, Gore JC, McCarthy G. Face-sensitive regions in human extrastriate cortex studied by
functional MRI. Journal of Neurophysiology 1995;74:1192–1199. [PubMed: 7500143]
Renault B, Signoret J-L, Debruille B, Breton F, Bolger F. Brain potentials reveal covert facial recognition
in prosopagnosia. Neuropsychologia 1989;27:905–912. [PubMed: 2771029]
Rolls ET, Baylis GC. Size and contrast have only small effects on the responses to faces of neurons in
the cortex of the superior temporal sulcus of the monkey. Experimental Brain Research 1986;65:38–
48.
Saito H, Yukie M, Tanaka K, Hikosaka K, Fukada Y, Iwai E. Integration of direction signals of image
motion in the superior temporal sulcus of the macaque monkey. Journal of Neuroscience 1986;6:145–
157. [PubMed: 3944616]
Schweinberger SR, Sommer W. Contributions of stimulus encoding and memory search to right
hemisphere superiority in face recognition: Behavioural and electrophysiological evidence.
Neuropsychologia 1991;29:389–413. [PubMed: 1886682]
Seeck M, Grüsser OJ. Category-related components in visual evoked potentials: photographs of faces,
persons, flowers, and tools as stimuli. Experimental Brain Research 1992;92:338–349.
Sergent J. An investigation into component and configural processes underlying face perception. British
Journal of Psychology 1984;75:221–242. [PubMed: 6733396]
Sergent J, Ohta S, MacDonald B. Functional neuroanatomy of face and object processing: a positron
emission tomography study. Brain 1992;115:15–36. [PubMed: 1559150]
Shapiro PN, Penrod S. Meta-analysis of facial identification studies. Psychological Bulletin
1986;100:139–156.
Shepherd, JW.; Davies, GM.; Ellis, HD. Studies of cue saliency. In: Davies, G.; Ellis, H.; Shepherd, J.,
editors. Perceiving and remembering faces. London: Academic Press; 1981. p. 105-131.
Small, M. Hemispheric differences in the evoked potential to face stimuli. In: Ellis, H.; Jeeves, MA.;
Newcombe, F.; Young, A., editors. Aspects of face processing. Dordrecht: Martinus Nijhoff; 1986.
p. 228-233.
Smith ME, Halgren E. Event-related potentials elicited by familiar and unfamiliar faces. Current Trends
in Event-Related Potential Research (EEG Suppl) 1987;40:422–426.
Suzuki S, Cavanagh P. Facial organization blocks access to low-level features: An object inferiority
effect. Journal of Experimental Psychology: Human Perception and Performance 1995;21:901–913.
Valentine T. Upside-down faces: A review of the effect of inversion upon face recognition. British Journal
of Psychology 1988;79:471–491.
Valentine T, Bruce V. Recognising familiar faces: The role of distinctiveness and familiarity. Canadian
Journal of Psychology 1986a;40:300–305. [PubMed: 3768805]
Valentine T, Bruce V. The effects of distinctiveness in recognising and classifying faces. Perception
1986b;15:525–536. [PubMed: 3588212]
Valentine T, Bruce V. Mental rotation of faces. Memory & Cognition 1988;16:556–566.
Whiteley AM, Warrington EK. Prosopagnosia: A clinical, psychological, and anatomical study of three
patients. Journal of Neurology, Neurosurgery, and Psychiatry 1977;40:395–403.
Yamane S, Kaji S, Kawano K. What facial features activate face neurons in the inferotemporal cortex?
Experimental Brain Research 1988;73:209–214.
Young A, Hellawell D, Hay DC. Configurational information in face perception. Perception
1987;16:747–759. [PubMed: 3454432]
Young MP, Yamane S. Sparse population coding of faces in the inferotemporal cortex. Science
1992;256:1327–1331. [PubMed: 1598577]
Figure 1.
(A) ERPs elicited by FACES and SCRAMBLED FACES at 14 scalp locations. Note N170 at locations T5 and
T6. (B) ERPs elicited at T5 and T6 by the four nontarget stimulus categories. (C) P300 (largest
at Pz) elicited by target BUTTERFLIES. In this and the following figures, waveforms are grand
averages across all subjects of each experiment.
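The waveforms in these figures are grand averages, i.e., the mean across subjects of each subject's average ERP for a given stimulus category. As a rough illustration of that step only (not the authors' analysis code), the following Python/NumPy sketch assumes a hypothetical array subject_erps of shape (subjects x channels x samples), a 250 Hz sampling rate, and a channel list that includes T5 and T6; all of these names and values are illustrative assumptions.

import numpy as np

# Hypothetical per-subject average ERPs for one stimulus category,
# baseline-corrected, in microvolts: (n_subjects, n_channels, n_samples).
rng = np.random.default_rng(0)
n_subjects, n_channels, n_samples = 12, 14, 256
sfreq = 250.0                            # assumed sampling rate (Hz)
t = np.arange(n_samples) / sfreq - 0.1   # assumed 100 msec prestimulus baseline
channels = ["Fz", "Cz", "Pz", "T5", "T6"] + ["ch%d" % i for i in range(9)]
subject_erps = rng.normal(0.0, 1.0, (n_subjects, n_channels, n_samples))

# Grand average: mean across subjects of the per-subject averages.
grand_avg = subject_erps.mean(axis=0)    # (n_channels, n_samples)

# Read off the voltage at 172 msec post-stimulus at T5 and T6.
i172 = np.argmin(np.abs(t - 0.172))
for ch in ("T5", "T6"):
    print(ch, round(float(grand_avg[channels.index(ch), i172]), 2), "microvolts")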
Figure 2.
ERPs elicited by human FACES, ANIMAL FACES, human HANDS, and FURNITURE at T5 and T6.
Figure 3.
Voltage distribution of ERPs elicited by upright human FACES at 28 scalp locations at 88 (A),
128 (B), 172 (C), and 230 (D) msec from stimulus onset. Blue-purple hues represent negative
voltages, yellow-red hues represent positive voltages.
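Maps of this kind are typically produced by interpolating the electrode voltages at a fixed latency over a two-dimensional head layout and rendering them with a diverging color scale (negative toward blue/purple, positive toward yellow/red). A minimal sketch using standard SciPy/Matplotlib tools is shown below; the 28 electrode coordinates and voltages are randomly generated placeholders, not the montage or data of this study.

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hypothetical 2-D electrode positions (head seen from above) and the voltage
# at each electrode at a single latency (e.g., 172 msec), in microvolts.
rng = np.random.default_rng(1)
xy = rng.uniform(-0.9, 0.9, (28, 2))      # placeholder 28-electrode layout
volts = rng.normal(0.0, 4.0, 28)          # placeholder voltages at 172 msec

# Interpolate onto a regular grid covering the head.
grid_x, grid_y = np.mgrid[-1:1:200j, -1:1:200j]
grid_v = griddata(xy, volts, (grid_x, grid_y), method="cubic")

# Diverging colormap, symmetric about zero: negatives blue, positives red/yellow.
vmax = float(np.nanmax(np.abs(grid_v)))
plt.imshow(grid_v.T, extent=(-1, 1, -1, 1), origin="lower",
           cmap="RdYlBu_r", vmin=-vmax, vmax=vmax)
plt.scatter(xy[:, 0], xy[:, 1], c="k", s=10)   # electrode positions
plt.colorbar(label="microvolts")
plt.title("Interpolated scalp voltage at one latency (illustrative)")
plt.show()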
Figure 4.
ERPs elicited by upright FACES and INVERTED FACES at T5 and T6.
Figure 5.
ERPs elicited by whole FACES, isolated EYES, LIPS, and NOSES at T5 and T6.
Figure 6.
Voltage distribution of ERPs elicited by faces and face components, based on 28 scalp locations
and calculated at the peak of the negative potentials: (A) FACES (at 172 msec), (B) EYES (at 172
msec), (C) LIPS (at 212 msec), (D) NOSES (at 232 msec).
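The peak latencies used here (172 msec for FACES and EYES, 212 msec for LIPS, 232 msec for NOSES) correspond to the most negative point of the grand-average waveform at the posterior temporal sites within a post-stimulus search window. A schematic of that measurement is sketched below; the random waveforms, the 130–300 msec search window, and the 250 Hz sampling rate are illustrative assumptions, not the study's recording parameters.

import numpy as np

sfreq = 250.0                        # assumed sampling rate (Hz)
n = 256
t = np.arange(n) / sfreq - 0.1       # assumed 100 msec prestimulus baseline
rng = np.random.default_rng(2)

# Hypothetical grand-average waveforms (mean of T5 and T6), one per category.
grand_avg = {cat: rng.normal(0.0, 1.0, n)
             for cat in ("FACES", "EYES", "LIPS", "NOSES")}

def negative_peak(wave, t, tmin=0.13, tmax=0.30):
    # Latency and amplitude of the most negative point within [tmin, tmax].
    win = (t >= tmin) & (t <= tmax)
    idx = np.flatnonzero(win)[np.argmin(wave[win])]
    return t[idx], wave[idx]

for cat, wave in grand_avg.items():
    lat, amp = negative_peak(wave, t)
    print("%-6s peak %.2f microvolts at %.0f msec" % (cat, amp, lat * 1000))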
Figure 7.
ERPs elicited by DISTORTED FACES, EYES, LIPS, and NOSES at T5 and T6.
Figure 8.
Schematic illustration of a working hypothesis of regions of extrastriate cortex involved in face
perception. A face-sensitive region, located mainly in the fusiform gyrus and maximally
responsive to human faces, generates an N200 potential as recorded from the cortical surface
(Allison et al., 1994a,b,c). The dipole generator is approximately vertical. An eye-sensitive
region, located mainly in the medial wall of the occipitotemporal sulcus and maximally
responsive to human eyes (Fig. 5), generates an N170 potential as recorded from the temporal
scalp. The dipole generator is oblique and points approximately to the middle temporal gyrus,
where scalp electrodes T5 and T6 are typically located (Homan et al., 1987). In the anterior-
posterior dimension the eye-sensitive region is hypothesized to be located in the mid-fusiform
region anterior to the region of inferior temporal gyrus that responds to faces. Drawing adapted
from Figure 198 of Damasio (1995). The right hemisphere is illustrated, but both hemispheres
are activated by faces and eyes. Abbreviations: CS, collateral sulcus; FG, fusiform gyrus; ITG,
inferior temporal gyrus; MTG, middle temporal gyrus; OTS, occipitotemporal sulcus.
Figure 9.
Examples of stimuli used in Experiment 1.
Figure 10.
Examples of stimuli used in Experiment 2.
Figure 11.
Examples of stimuli used in Experiment 4.
Figure 12.
Examples of stimuli used in Experiment 5. The normal version of the distorted face on the left
is shown in Figure 11, upper left.