Effects of subconscious and conscious emotions on human cue–reward association learning

Noriya Watanabe^1,2,3 & Masahiko Haruno^1,4

^1 Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka 565-0871, Japan; ^2 Japan Society for the Promotion of Science; ^3 Graduate School of Environmental Studies, Nagoya University; ^4 Japan Science and Technology Agency.
Life demands that we adapt our behaviour continuously in situations in which much of our incoming information is emotional and unrelated to our immediate behavioural goals. Such information is often processed without our awareness. This poses an intriguing question: does subconscious exposure to irrelevant emotional information (e.g. the surrounding social atmosphere) affect the way we learn? Here, we addressed this issue by examining whether the learning of cue–reward associations changes when an emotional facial expression is shown subconsciously or consciously prior to the presentation of a reward-predicting cue. We found that both subconscious (0.027 s and 0.033 s) and conscious (0.047 s) emotional signals increased the rate of learning, and that this increase was smallest at the border of conscious duration (0.040 s). These data suggest not only that subconscious and conscious processing of emotional signals enhances value updating in cue–reward association learning, but also that the computational processes underlying the subconscious enhancement are at least partially dissociable from their conscious counterpart.
To achieve our behavioural goals, we must continuously adapt our behaviour and learn from changing circumstances. However, the great majority of incoming signals in real-life social situations are irrelevant to our immediate goals and may be processed unconsciously. An intriguing question is whether such irrelevant, subconsciously received information can affect behavioural adaptation.
Many studies report that emotional information not necessary for achieving an immediate task goal can affect aspects of human behaviour, including decision making^1, clarity of memory^2, and learning rates during cue–reward association learning^3, and that this is true even when people are aware that the information is irrelevant to the task goal. For instance, in a cue–reward association-learning study, presentation of a task-independent fearful face just before the reward-predicting cue accelerated learning rates compared with presentation of a neutral face; this enhancement effect was not found in a similarly designed short-term memory task^3. However, all of these experiments employed an emotional signal that subjects could consciously perceive, and did not account for incoming information that is processed subconsciously (e.g. the surrounding social atmosphere, such as the feeling of tension in a classroom). Although shorter stimulus durations generally induce smaller behavioural effects and neuronal responses, some studies report that subconscious presentation of information, or subconscious thought, produces larger effects than its conscious counterpart^4–7, and can affect human behaviour in daily life^8,9. Therefore, it is important to clarify whether and how subconscious emotional information influences human learning.
Here, we performed a computational model-based analysis of behaviour to examine how learning of a probabilistic cue–reward association is affected when emotional facial expressions are shown subconsciously or consciously before presentation of the reward-predicting cue. We previously found that learning was enhanced when the duration of face presentation was long (1.0 s)^3, and we thus focus here on how learning is affected by durations (0.027–0.047 s) that yield less recognisable faces.
[Subject areas: Classical Conditioning, Emotion, Human Behaviour, Consciousness. Received 25 September 2014; accepted 22 January 2015; published 16 February 2015. Correspondence and requests for materials should be addressed to M.H. (mharuno@nict.go.jp).]

SCIENTIFIC REPORTS | 5 : 8478 | DOI: 10.1038/srep08478

Results
Facial discrimination task. Before the main learning task, we conducted a discrimination task (n = 91) to estimate duration thresholds for conscious discrimination of facial expressions, based on objective (correct rate) and subjective (confidence scoring) measures (Figure 1a). We regarded a presentation as 'conscious' if it was delivered above both subjective and objective thresholds, and as 'subliminal' if it was below both thresholds. We define 'subconscious' presentation as a duration between subliminal and conscious presentations.
We conducted a series of t-tests to determine the threshold duration. Analysis showed that performance accuracy (the correct rate [CR]) at a duration of 0.040 s was higher than at 0.033 s (paired t-test, t(90) = −17.808, p < 0.001 with Bonferroni corrections [BC]), but no other adjacent comparison differed (0.020 s vs. 0.027 s: t(90) = −2.294, p = 0.360; 0.027 s vs. 0.033 s: t(90) = 0.982, p ≈ 1.000; 0.040 s vs. 0.047 s: t(90) = −2.470, p = 0.225 with BC) (Figure 1b, red; comparison among five durations). Additionally, although the CRs at 0.020 and 0.033 s did not differ from chance level (paired t-test, 0.020 s: t(90) = 0.156, p ≈ 1.000; 0.033 s: t(90) = 2.250, p = 0.405 with BC), the CR at 0.027 s was slightly but significantly higher than chance (paired t-test, t(90) = 3.551, p = 0.015 with BC) (Figure 1b, red).
Consistent with the CR analysis, the subjective confidence score index (CSI) showed that participants discriminated facial expressions presented for 0.040 s or longer significantly better than at shorter durations (0.033 s vs. 0.040 s: paired t-test, t(90) = −17.033, p < 0.0001 with BC) (Figure 1b, black). While CSIs did not differ significantly between 0.027 s and 0.033 s (t(90) = −2.347, p = 0.211 with BC) or between 0.040 s and 0.047 s (t(90) = −2.373, p = 0.199 with BC), they did differ significantly between 0.020 s and 0.027 s (t(90) = −19.632, p < 0.001 with BC).
We also sorted CSIs based on task performance to confirm that participants rated their correct trials as more certain. We found that although CSIs at the 0.020 s and 0.027 s stimulus durations did not differ between correct and error trials (paired t-test, 0.020 s: t(90) = 1.577, p ≈ 1.000; 0.027 s: t(90) = 1.549, p ≈ 1.000 with BC), they did differ at longer durations (paired t-test, 0.033 s: t(90) = 6.012, p < 0.0001; 0.040 s: t(90) = 8.981, p < 0.0001; 0.047 s: t(90) = 8.564, p < 0.0001 with BC) (Supplementary Fig. S1).
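The adjacent-duration comparisons above can be sketched in a few lines. This is an illustrative reconstruction on simulated correct rates (not the study's data), using scipy's `ttest_rel` with a manual Bonferroni correction over the four adjacent-duration comparisons:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Hypothetical correct rates for n = 91 participants at two adjacent durations:
# near chance at 0.033 s, well above chance at 0.040 s.
cr_033 = rng.normal(0.52, 0.05, 91)
cr_040 = rng.normal(0.85, 0.05, 91)

# Paired t-test between adjacent durations, as in the threshold analysis.
t, p = ttest_rel(cr_033, cr_040)

# Bonferroni correction over the four adjacent-duration comparisons:
# multiply the raw p-value by the number of tests, capping at 1.0.
n_tests = 4
p_corrected = min(p * n_tests, 1.0)
print(t, p_corrected)
```

With a 0.33 difference in mean CR, the paired statistic is strongly negative and the corrected p-value remains far below 0.001, mirroring the reported 0.033 s vs. 0.040 s contrast.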
These results showed that participants correctly discriminated facial expressions with high confidence at presentation durations of 0.040 s and 0.047 s, which thus represent conscious presentations as we defined them. In contrast, it was impossible to discriminate facial expressions either objectively or subjectively when faces were presented for only 0.020 s. The other two durations (0.027 s and 0.033 s) represent subconscious presentation: participants showed similar confidence levels in correct and error trials with better-than-chance CR at the 0.027-s duration, while at the 0.033-s duration they could not discriminate faces objectively despite high CSIs in correct trials. Based on these observations, we used 0.027 s or 0.033 s for the subconscious condition and 0.040 s or 0.047 s for the conscious condition in the learning task. This definition of the subconscious and conscious conditions is similar to that in other studies using facial expressions^10,11.
In the learning task, each participant was randomly assigned to one of these four durations. To rule out other possible factors affecting learning performance, we assessed several individual differences, including age, sex, the time of day at which the experiment was conducted, and intelligence level. We did not find any factor that was biased among the groups (Table 1; see Statistical analyses for sampling bias).
Learning task. To examine the computational processes behind the interaction between reward learning and subconscious/conscious emotional processing (Figure 2a and 2b), we analysed behaviour using a reinforcement learning model. More specifically, we estimated the following four parameters. The learning rate (ε) controls the impact of the reward prediction error in each trial. The exploration parameter (β) controls how deterministically the value function leads to advantageous behaviour, and reward sensitivity (δ) transforms the actual reward into a subjective reward, as the emotional stimulus can change subjective sensitivity to reward. The last parameter is a ¥100 choice bias (b), a value-independent bias for choosing the ¥100 option. This parameter captures the possibility that participants were biased towards one of the two rewards depending on the facial expression, regardless of cue–reward associations.
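As a sketch, the two core computations of the εβb model (the value update driven by the reward prediction error, and the softmax choice rule with a value-independent bias) can be written as follows. The parameter values are hypothetical, and reward sensitivity (δ) is omitted because the εβb model excludes it:

```python
import numpy as np

def softmax_choice_prob(q_values, beta, bias):
    """P(a|s) with exploration parameter beta and a value-independent
    bias added only to the ¥100 option (action index 0)."""
    logits = beta * q_values + np.array([bias, 0.0])
    logits -= logits.max()                 # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def update_value(q, reward, lr):
    """One-step value update in proportion to the reward prediction error."""
    return q + lr * (reward - q)

# Toy run with hypothetical parameters (not fitted values from the study).
lr, beta, bias = 0.1, 2.0, -0.5            # learning rate, exploration, ¥100 bias
q = np.zeros(2)                            # action values: choose ¥100 vs. ¥1
p = softmax_choice_prob(q, beta, bias)     # negative bias lowers P(¥100)
q[0] = update_value(q[0], reward=1.0, lr=lr)
```

With equal action values, the negative ¥100 bias alone makes the ¥1 option more probable, which is exactly how the model expresses a value-independent choice tendency.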
We estimated these parameters separately for the fearful and neutral conditions (see Reinforcement learning model-based analysis). Before the detailed analysis, we quantified the appropriateness of our statistical models using the Akaike information criterion (AIC) and Bayesian information criterion (BIC). As shown in Figure 2c, the εβb model, which includes the learning rate, exploration parameter, and ¥100 bias,
Figure 1 | Task design and behavioural results for the discrimination task. (A) Two facial expressions (happy or sad) were presented sequentially with masks. The duration of each presentation was 0.020 s, 0.027 s, 0.033 s, 0.040 s, or 0.047 s. Participants were required to determine whether the presented expressions were the "same" or "different", and to rate their confidence level ("low", "medium", or "high"). (B) Both the correct rate (CR: red) and confidence score index (CSI: black) (mean ± SEM) showed that the ability to discriminate facial expressions sharply increased at 0.040 s (CR, paired t-test, t(90) = −17.808, p < 0.001; CSI, paired t-test, t(90) = −17.033, p < 0.001 with BC). *p < 0.05, **p < 0.01, ***p < 0.005, and ****p < 0.001 throughout the figures. This image is not covered by the [CC licence]. Photographs are from the NimStim Face Stimulus Set. Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development (http://www.macbrain.org/resources.htm).
Figure 2 | Task design, behavioural results, and model-based analysis of the learning task. (A) Participants pressed a button to indicate which reward they expected, and eventually learned the association between particular rewards and particular cues. The duration of face presentations was randomly assigned as 0.027 s, 0.033 s, 0.040 s, or 0.047 s for each participant. (B) An example combination of facial expression, cue, and reward. Each of the four cues was associated probabilistically (65%) with one of the two reward amounts, and also with one of the two facial expressions (fearful or neutral, with 100% probability). (C) The results of model comparison by AIC and BIC. ε, β, δ, and b represent the learning rate, exploration, reward sensitivity, and ¥100 bias, respectively. (D) Learning curves. Each data point represents the average of five trials. (E) εβb model-based estimates of the learning rate, ¥100 bias, and exploration (mean ± SEM). Photographs are from the NimStim Face Stimulus Set. Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development (http://www.macbrain.org/resources.htm).
Table 1 | Descriptive statistics for participants in the four presentation conditions

                                   0.027 s    0.033 s    0.040 s    0.047 s    Statistic    df      p
Num. of participants               20         20         31         20         -            -       -
Sex ratio (male/all)               0.80       0.65       0.74       0.60       χ² = 0.424   3       0.935
Mean age (SD)                      22.20      21.20      21.48      21.20      F = 1.420    3, 87   0.242
                                   (2.66)     (1.47)     (1.41)     (1.17)
Mean clock time (hh:mm:ss) (SD)    12:51:00   12:42:00   12:32:54   13:18:00   F = 0.322    3, 87   0.810
                                   (2:50:00)  (2:50:52)  (2:34:33)  (2:27:06)
Mean university/department         54.63      56.50      56.21      54.38      F = 1.955    3, 87   0.127
academic score (SD)                (4.95)     (2.78)     (2.90)     (3.34)

Note: Mean clock time indicates the mean time at which a participant began the experiment. Mean university/department academic score was calculated as the mean academic ranking within the university department to which each participant belonged (the mean intelligence level across Japanese universities is standardised to 50).
was selected by AIC, and the εβ and εβb models were comparable under BIC (εβ was slightly better). The model comparison was highly consistent with our previous report^3, and we used the εβb model in subsequent analyses.
Learning curves averaged separately for each cue, irrespective of face presentation duration (n = 91), are shown in Figure 2d. Especially in the early stages, learning was faster for cues associated with fearful faces and the ¥100 reward than for other cues (solid red line).
To conduct a more quantitative analysis, we examined the effects of emotion (fear vs. neutral) on each parameter of the computational model (learning rate, ¥100 choice bias, and exploration). Consistent with our previous report with 1.0-s face presentations^3, we found that the learning rate was higher in the fearful condition than in the neutral condition (t(90) = 3.077, p = 0.003) (Figure 2e, left). Additionally, the ¥100 choice bias was negative in the fearful condition (t(90) = −4.687, p < 0.001 with BC), and no difference was found in the exploration parameter (t(90) = −1.552, p = 0.124) (Figure 2e, middle and right). The only notable difference from our previous study was that the ¥100 choice bias was also negative in the neutral condition (t(90) = −3.401, p = 0.002 with BC) (Figure 2e, middle).
Having seen that emotional face presentation modulates the learning rate and ¥100 choice bias, we then investigated how subconscious presentation of emotional faces affects them. To achieve this, we separately computed learning rates for each presentation duration (0.027 s: n = 20; 0.033 s: n = 20; 0.040 s: n = 31; 0.047 s: n = 20) (Figure 3a). A two-way ANOVA (2 emotions × 4 presentation durations) showed a significant main effect of emotion (F(1,87) = 13.306, p < 0.001) and no main effect of presentation duration (F(3,87) = 2.508, p = 0.064). Importantly, the interaction between emotion and presentation duration was significant (F(3,87) = 2.946, p = 0.037), suggesting that the learning enhancement provided by the fearful faces may disappear at some durations. Therefore, we examined the effect of emotion on the learning rate at each presentation duration.
The learning rate differences (εF − εN) were larger than zero in the 0.027 s (t(19) = 2.211, p = 0.040), 0.033 s (t(19) = 2.482, p = 0.023), and 0.047 s (t(19) = 2.194, p = 0.041) conditions, but not in the 0.040 s condition (t(30) = −0.560, p = 0.580). This targeted analysis revealed a trough in the emotional enhancement effect at around 40 ms. To interpret this result in terms of perception, we sorted the subjects by CR scores on the discrimination task (mean ± SE CRs, <60%: 0.478 ± 0.023; 60%–70%: 0.667 ± 0.007; 70%–80%: 0.757 ± 0.004; 80%–90%: 0.851 ± 0.006; 90%–100%: 0.960 ± 0.009) and found that the emotional face-induced increase in learning rate was strongest (n = 26, εF − εN = 0.028 ± 0.011) when the participants' CRs were 60%–70% (t(25) = 2.628, p = 0.014) (Figure 3b), and that the enhancement effect disappeared at CRs of 70%–80% (n = 17, εF − εN = 0.009 ± 0.005, t(16) = 1.655, p = 0.117) and 80%–90% (n = 23, εF − εN = 0.003 ± 0.009, t(22) = 0.341, p = 0.736). The increase in the learning rate caused by emotional faces started to be discernible again at CRs of 90%–100%, although this was not statistically significant (n = 11, εF − εN = 0.014 ± 0.011, t(10) = 1.203, p = 0.257).
We conducted the same analysis for the ¥100 choice bias. A two-way ANOVA (2 emotions × 4 presentation durations) revealed a significant main effect of presentation duration (F(3,87) = 3.287, p = 0.024), but not of emotion (F(1,87) = 0.973, p = 0.327), nor a significant interaction (F(3,87) = 0.356, p = 0.785). Importantly, these data demonstrate that the trough was observed for the learning rates but not for the ¥100 bias.
Discussion
In this paper, we used a computational model-based behavioural analysis of probabilistic cue–reward association learning to determine whether subconscious and task-independent emotional signals affect learning. We found that the learning rate for cues paired with a fearful face was larger than for cues paired with neutral faces, and that this enhancement effect was significant when the face was presented subconsciously (durations of 0.027 s or 0.033 s) and consciously (0.047 s). However, this effect disappeared at 0.040 s. Furthermore, not only does the effect of emotional signals on learning rates vanish at the presentation duration of 0.040 s, but this duration also corresponds to the 70%–90% CR level, validating the discontinuity of the learning-enhancement effect. Because we did not observe this effect in the discrimination task or in the ¥100 choice bias, it is likely to be specific to the associative learning paradigm.
The discontinuity of the learning-rate enhancement effect might have been caused by some malfunction in our experimental devices for stimulus presentation. However, if this were the case, we would expect the same problem to have occurred for the objective CRs in the discrimination task. As we did not observe any significant performance trough at 0.040 s in Figure 1b, and because almost all participants reported in a post-experimental questionnaire that they were conscious that two facial expressions were presented in the 0.040 s condition, we can rule out the possibility of a device-dependent problem. Another possibility is that the trough resulted from some sampling bias among the different groups. However, we examined sex, age, experiment time, and academic scores (as shown in Table 1) and did not find any difference among the groups (see Statistical analyses for sampling bias).

Figure 3 | Average learning rates sorted by presentation duration and correct rate. (A) Learning rates sorted by presentation duration revealed a behavioural trough in the fearful condition at the 0.040 s duration. Learning rate differences (εF − εN) for each duration were higher than zero (ts ≥ 2.194, ps < 0.05), except for the 0.040 s duration (t(30) = −0.560, p = 0.580). (B) Learning rates sorted by correct rates (CRs) in the discrimination test (upper panel) showed that participants with 60%–70% CRs were most affected in terms of their learning rates (t(25) = 2.628, p = 0.014). The lower panel shows the number of participants in each CR bin.
One plausible explanation for the disappearance of the enhancement effect is that there are two pathways for emotional signal processing in the brain^12,13. One system is the cortical pathway, which is routed through several visual stages (the retina, the lateral geniculate nucleus of the thalamus, the primary visual cortex, and higher-order brain areas) before finally extending to the amygdala. Information processing along this route results in precise perception in which we are conscious of the presented stimuli. The other system is the subcortical pathway, which is routed through the retina, superior colliculus, and pulvinar nucleus of the thalamus, and extends to the amygdala. Although information processing via this route is comparatively crude, it is thought to be an implicit system that works faster than the cortical pathway. Several behavioural and brain-imaging studies have shown that the subcortical pathway is sensitive to rapid presentation (faster than 0.033 s) of emotional facial expressions^11,14,15. Therefore, the subconscious presentations (0.027 s and 0.033 s) used here could have driven the subcortical pathway, whereas the 0.047 s presentation drove the cortical pathway. These two systems could have different effects on reward-based learning systems that include the substantia nigra, ventral striatum, and amygdala, as implicated in previous studies^3,11,14–17.
Similar discontinuities in behavioural responses to visual stimuli have been reported as the 'performance-dip effect'^5,6, defined as lowered accuracy in a main task when it is paired with the presentation of a para-threshold, task-irrelevant stimulus. Those experiments and our current observations are compatible in the sense that performance of the main task was affected when either a subconscious or a clearly visible task-irrelevant visual stimulus was presented. Importantly, however, while the previous experiments showed that the task-irrelevant stimuli reduced performance, our results showed the opposite effect: the subconscious emotional signal enhanced learning.
One might wonder which enhances learning more: conscious or subconscious perception of an emotional stimulus. Although the effects of subconscious stimulation tend in general to be weaker than those of conscious stimulation, some studies have reported that subconscious presentation of stimuli was more effective^4–6. Here, we showed that enhancement by emotion perception was significant in both the subconscious and conscious conditions, except when the stimulus duration was 0.040 s. However, as shown in Figure 3b, participants were most affected by the emotional signal when their accuracy was between 60% and 70%. This result suggests that the learning-enhancement effect is strongest when the emotional signal is presented obscurely. Figure 3b also indicates that overly brief stimulus presentation (<60% CRs: mean CR = 0.478 ± 0.023) does not enhance learning rates. These results may indicate that there is an optimal range of presentation durations for emotional signals that yields subconscious enhancement of learning.
Finally, while the ¥100 choice bias (which was independent of learning) was also affected by presentation duration, no trough in the effect was observed. Although the faces were unrelated to our main learning task, the subconsciously presented faces may have induced uncertainty^18 or anxiety concerning subjective perception, and this negative feeling may have led to negative choices (the smaller reward). Such a transfer of a task-independent feeling to the main task could well be linked with Pavlovian-instrumental transfer (PIT)^19,20. PIT is a phenomenon in which previously conditioned Pavlovian cues affect subjective prediction and motivation in subsequent instrumental conditioning from the outset, despite there being no explicit association between the Pavlovian cue and the new learning^19,20. In the current learning experiment, the subconscious presentation of facial expressions could have induced negative emotion, and this emotion may then have transferred to the subsequent associative learning from the very first trial. Such a negative bias might have been quantified as the negative ¥100 choice bias.
Methods
Participants. Participants in this study were undergraduate and graduate students who did not declare any history of psychiatric or neurological disorders. All experiments were conducted according to the principles of the Declaration of Helsinki and were approved by the ethics committee of the National Institute of Information and Communications Technology. All 130 participants gave informed consent prior to the experiments. Thirty-nine participants (30.0%) were unable to learn all four of the associations; we therefore analysed data from the remaining 91 participants (64 male; mean age 21.5 ± 1.7 years).
Experimental design. Stimuli were presented via a Dell Precision T7500 computer with a graphics accelerator (NVIDIA Quadro 4000) and a 19-inch CRT display (SONY CPD-G420) to achieve a 150 Hz refresh rate. Stimulus presentation and response acquisition were controlled using Psychtoolbox-3 software (www.psychtoolbox.org) with MATLAB. Stimuli were presented within an area subtending 4.49 × 6.16 degrees of visual angle.
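A side note on timing: the five presentation durations appear consistent with whole-frame multiples of the 150 Hz refresh rate (one frame lasts 1/150 s ≈ 6.67 ms), which a quick check illustrates:

```python
# At 150 Hz, one frame lasts 1/150 s; the five reported durations
# round to 3-7 whole frames (e.g. 0.020 s = 3 frames, 0.040 s = 6 frames).
frame = 1.0 / 150.0
durations = [0.020, 0.027, 0.033, 0.040, 0.047]
frames = [round(d / frame) for d in durations]

# Each reported duration is within 1 ms of its whole-frame equivalent.
assert all(abs(n * frame - d) < 0.001 for n, d in zip(frames, durations))
print(frames)  # [3, 4, 5, 6, 7]
```

This explains the otherwise odd-looking step sizes (0.020, 0.027, 0.033, ...): they are the nearest millisecond roundings of 3 to 7 refresh frames.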
Facial discrimination task. Prior to the learning task, all 91 participants performed the facial expression discrimination task (Figure 1a), which measured the presentation-time threshold for subconscious and conscious facial expression discrimination. We used 8 happy and 8 sad faces of the same 4 actors and 4 actresses (6 Caucasoid, 1 Negroid, and 1 Mongoloid) from the NimStim collection^21, which has high validity and reliability of expressions. Three masks (presented for 0.3 s each) and two emotional faces (displayed for 0.020 s, 0.027 s, 0.033 s, 0.040 s, or 0.047 s) were presented alternately on a screen (see Figure 1a). To maximise the effects in the main learning task, we did not use fearful or neutral faces in this task: we reasoned that prior knowledge of the facial expressions might affect participants' behaviour in the main learning task, and repetitive presentation of the same emotional pictures could reduce stimulus saliency.
Participants were required to discriminate the two expressions of an identical actor or actress by answering within 3.0 s whether the first expression was the "same" as the second. Participants indicated their answers by pressing a button with the right index (same) or ring (different) finger. Additionally, they were asked to indicate how confident they were in their answers ("low confidence", "medium confidence", or "high confidence") with the right index, middle, and ring fingers, respectively. As we used two different pictures of an identical actor or actress with forward and backward masks in each trial, participants could not judge the difference between expressions based on facial outlines or afterimages. This task included 80 trials (8 same and 8 different trials × 5 presentation conditions, in a pseudo-random order). As the participants were trained on several practice trials with another stimulus set (happy and sad faces), they executed this task flawlessly.
Learning task. For the main learning task, participants learned probabilistic associations (65% or 35%) between four visual cues and two rewards (¥100 or ¥1) through trial and error (Figure 2a). The design was similar to a previous experimental paradigm^3 except for the brief presentation of facial expressions. Each participant was randomly assigned to one of four face-presentation durations (0.027 s, 0.033 s, 0.040 s, or 0.047 s). We used a between-participants design for the four durations because of task difficulty and to avoid the effects of repetition, such as habituation to the task or meta-learning of the task structure^22.
Face stimuli were 20 fearful and 20 neutral faces of 10 actors and 10 actresses (10 Caucasoid, 7 Negroid, and 3 Mongoloid). Just before the visual cue (0.3 s), either a fearful or a neutral face interleaved with four masks (0.3 s) was presented three times on a screen for the individually and randomly assigned duration, in a pseudo-random order. Only one emotion was used within a given trial. Following the last face, one of the cues was presented, followed by a choice between ¥100 and ¥1. Participants then pressed a button within 1.5 s to indicate which of the two rewards they expected. The order of cue presentation and the assignment of the two buttons (left or right) to rewards were randomised across trials. After making their choice, the actual reward was shown in yellow letters for 1.0 s. Over time, participants could thus learn the association between each cue and the corresponding reward. Before the experiments, we confirmed that the participants fully understood the task. They were instructed that the face and noise presentations would signal the appearance of a cue. No participants reported noticing any associations between particular facial expressions and the cues. The combinations of the four visual cues, facial expressions, and rewards were counterbalanced across participants (Figure 2b). The total number of trials was 320 (80 × 4 conditions).
Statistical analyses for the perceptive discrimination task. Correct rates (CR) were calculated by dividing the sum of hits and correct rejections by the number of trials (16) (Figure 1b, red). We calculated the subjective confidence level for each judgment using the confidence score index (CSI). For this index, each raw rating was 1, 2, or 3, representing "low confidence", "medium confidence", or "high confidence", respectively. This rating was independent of the correctness of the judgment and was averaged for each duration (1 ≤ CSI ≤ 3) (Figure 1b, black). Additionally, we sorted the CSI data based on the CR (Supplementary Fig. S1).
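The CR and CSI computations can be illustrated on hypothetical responses for a single duration (the correctness flags and ratings below are made up for the example):

```python
import numpy as np

# Hypothetical responses for one duration: 16 trials, correctness flags
# and confidence ratings (1 = low, 2 = medium, 3 = high).
correct = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1])
confidence = np.array([3, 2, 1, 3, 3, 2, 1, 3, 2, 3, 3, 1, 2, 3, 3, 2])

# Correct rate: hits plus correct rejections over the 16 trials.
cr = correct.sum() / len(correct)

# Confidence score index: mean rating, independent of correctness (1 <= CSI <= 3).
csi = confidence.mean()

# CSI split by performance, as in the correct-vs-error comparison (Fig. S1).
csi_correct = confidence[correct == 1].mean()
csi_error = confidence[correct == 0].mean()
```

In this toy example the correct trials carry higher ratings than the error trials, the pattern the paper reports at the longer durations.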
Statistical analyses for sampling bias. The learning task was conducted using a between-participants design for the four presentation durations to avoid fatigue, habituation, and meta-learning of the task structure^22. However, this might have induced sampling bias. We therefore examined four possible biases: age, sex, the time of day the experiment started, and intelligence level based on university-department academic scores. The mean experimental start time was taken into account because experiments conducted in the early morning or late at night may be associated with different arousal levels, even though we reminded participants by email before participation to get enough sleep. The results are summarised in Table 1; there was no bias among the four groups.
Reinforcement learning model-based analysis. To conduct a trial-based analysis of
the learning process, we adopted a reinforcement learning model^3,23,24. This model
assumes that each participant assigns the value function Q_t(s_t, a_t) to action a_t for the
cue s_t at time t. Learning increases the accuracy of the value representation by updating
the value in proportion to the reward prediction error (RPE) R_t − Q_t(s_t, a_t), which is
the difference between the actual and expected reward at time t (Equation 1):

    Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + ε_f [R_t − Q_t(s_t, a_t)].   (1)
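The update in Equation 1 can be sketched as follows. This is a minimal illustration, not the authors' code; the learning rate and the reward sequence are arbitrary:

```python
# Sketch of the Rescorla-Wagner-style value update in Equation 1,
# assuming a scalar Q-value per (cue, action) pair.

def update_q(q, subjective_reward, learning_rate):
    """Return Q_{t+1} given Q_t, subjective reward R_t, and learning rate ε_f."""
    rpe = subjective_reward - q          # reward prediction error
    return q + learning_rate * rpe

# hypothetical reward sequence in yen for one cue-action pair
q = 0.0
for r in [100, 100, 0, 100]:
    q = update_q(q, r, learning_rate=0.3)
```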
Our learning model contains four free parameters: a learning rate (ε_f), reward
sensitivity (δ_f), a value-independent bias for the choice of ¥100 (b_f(a_t)), and an
exploration parameter (β_f). The learning rate controls the effect of the RPE, and
reward sensitivity transforms the actual reward (r_t) in yen into a subjective reward
(R_t) for each participant (Equation 2):

    R_t = δ_f r_t.   (2)
In relation to behavioural choice (Equation 3), the bias term represents a value-
independent inclination towards the choice of ¥100, and the exploration
parameter controls how deterministically the value function leads to advantageous
behaviour:

    P(a_t | s_t) = exp(β_f Q_t(s_t, a_t) + b_f(a_t)) / Σ_a exp(β_f Q_t(s_t, a) + b_f(a)).   (3)
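The softmax choice rule of Equation 3 can be sketched for the two actions as follows; the action labels, Q-values, and parameter values are hypothetical:

```python
# Sketch of the softmax choice rule in Equation 3 for the two actions
# (choosing ¥100 vs ¥1), with the value-independent bias applied to ¥100.
import math

def choice_probability(q_values, bias_100, beta):
    """P(a|s) with exploration parameter β_f and ¥100 bias b_f.
    q_values: dict mapping action -> Q_t(s_t, a)."""
    logits = {a: beta * q + (bias_100 if a == "100" else 0.0)
              for a, q in q_values.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {a: math.exp(v) / z for a, v in logits.items()}

p = choice_probability({"100": 0.6, "1": 0.2}, bias_100=0.1, beta=2.0)
```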
We estimated each participant's free parameters (denoted as the vector θ) from their
trial-by-trial learning using maximum likelihood estimation, which
minimises the negative log-likelihood of the participant's behaviour (D), as shown in
Equations 4 and 5. This non-linear minimisation of Equation 4 was conducted using
the MATLAB function "fmincon":

    min_θ −log P(D | θ)   (4)

    P(D | θ) = Π_t P(a_t | s_t)   (5)

The probability of choosing an action a_t (¥100 or ¥1) given a visual cue s_t was
computed based on Equation 3.
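A minimal sketch of the likelihood in Equations 4 and 5 follows, minimised here by a crude grid search purely for illustration (the authors used MATLAB's fmincon). The simplified model omits reward sensitivity and the ¥100 bias, and the trial data are hypothetical:

```python
# Sketch of the negative log-likelihood of one participant's choices
# (Equations 4-5) under a two-parameter model (learning rate, beta).
import math

def negative_log_likelihood(params, trials):
    """params = (learning_rate, beta); trials = [(cue, action, reward)]."""
    eps, beta = params
    q = {}                                   # Q_t(s, a), initialised to 0
    nll = 0.0
    for cue, action, reward in trials:
        qs = {a: q.get((cue, a), 0.0) for a in ("100", "1")}
        z = sum(math.exp(beta * v) for v in qs.values())
        p = math.exp(beta * qs[action]) / z  # softmax choice probability
        nll -= math.log(p)
        # value update (Equation 1)
        q[(cue, action)] = qs[action] + eps * (reward - qs[action])
    return nll

# hypothetical choice data and a coarse grid search over (eps, beta)
trials = [("A", "100", 1.0), ("A", "100", 1.0), ("A", "1", 0.0)]
best = min(((negative_log_likelihood((e / 10, b / 2), trials), (e / 10, b / 2))
            for e in range(1, 10) for b in range(1, 10)))
```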
We evaluated the significance of each parameter using the Akaike information criterion
(AIC) and the Bayesian information criterion (BIC) by comparing four models: the
learning rate and exploration parameter only (eb), eb with reward sensitivity (ebd), eb with
the ¥100 bias (ebb), and eb with both reward sensitivity and the ¥100 bias (ebdb). We
calculated these information criteria for each participant and compared the mean
scores (n = 91).
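The AIC/BIC comparison can be sketched as follows; the per-participant fit values and trial count are hypothetical, and the scoring functions use the standard definitions rather than the authors' code:

```python
# Sketch of the AIC/BIC model comparison: each candidate model is scored
# from its minimised negative log-likelihood (NLL) and parameter count.
import math

def aic(nll, k):
    """Akaike information criterion: 2k + 2·NLL."""
    return 2 * k + 2 * nll

def bic(nll, k, n_obs):
    """Bayesian information criterion: k·ln(n_obs) + 2·NLL."""
    return k * math.log(n_obs) + 2 * nll

# hypothetical fits: model name -> (minimised NLL, number of free parameters)
fits = {"eb": (52.0, 2), "ebd": (49.5, 3), "ebb": (50.1, 3), "ebdb": (48.9, 4)}
n_trials = 80
scores = {m: (aic(nll, k), bic(nll, k, n_trials)) for m, (nll, k) in fits.items()}
```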
1. Kahneman, D. & Tversky, A. Prospect theory: An analysis of decision under risk.
Econometrica 47, 263–291 (1979).
2. McGaugh, J. L. Memory consolidation and the amygdala: a systems perspective.
Trends Neurosci 25, 456–461 (2002).
3. Watanabe, N., Sakagami, M. & Haruno, M. Reward prediction error signal
enhanced by striatum-amygdala interaction explains the acceleration of
probabilistic reward learning by emotion. J Neurosci 33, 4487–4493 (2013).
4. Murphy, S. T. & Zajonc, R. B. Affect, cognition, and awareness: affective priming
with optimal and suboptimal stimulus exposures. J Pers Soc Psychol 64, 723–739
(1993).
5. Tsushima, Y., Sasaki, Y. & Watanabe, T. Greater disruption due to failure of
inhibitory control on an ambiguous distractor. Science 314, 1786–1788 (2006).
6. Yotsumoto, Y. et al. Performance Dip in motor response induced by task-
irrelevant weaker coherent visual motion signals. Cereb Cortex 22, 1887–1893
(2012).
7. Newell, B. R. & Shanks, D. R. Unconscious influences on decision making: a
critical review. Behav Brain Sci 37, 1–19 (2014).
8. Karremans, J. C., Stroebe, W. & Claus, J. Beyond Vicary’s fantasies: The impact of
subliminal priming and brand choice. J Exp Soc Psychol 42, 792–798 (2006).
9. Hassin, R. R., Ferguson, M. J., Shidlovski, D. & Gross, T. Subliminal exposure to
national flags affects political thought and behavior. Proc Natl Acad Sci U S A 104,
19757–19761 (2007).
10. Whalen, P. J. et al. Masked presentations of emotional facial expressions modulate
amygdala activity without explicit knowledge. J Neurosci 18, 411–418 (1998).
11. Morris, J. S., Ohman, A. & Dolan, R. J. A subcortical pathway to the right amygdala
mediating ‘‘unseen’’ fear. Proc Natl Acad Sci U S A 96, 1680–1685 (1999).
12. Hannula, D. E., Simons, D. J. & Cohen, N. J. Imaging implicit perception: promise
and pitfalls. Nat Rev Neurosci 6, 247–255 (2005).
13. Tamietto, M. & de Gelder, B. Neural bases of the non-conscious perception of
emotional signals. Nat Rev Neurosci 11, 697–709 (2010).
14. Liddell, B. J. et al. A direct brainstem-amygdala-cortical ‘alarm’ system for
subliminal signals of fear. Neuroimage 24, 235–243 (2005).
15. Williams, L. M. et al. Mode of functional connectivity in amygdala pathways
dissociates level of awareness for signals of fear. J Neurosci 26, 9264–9271 (2006).
16. Pessiglione, M. et al. Subliminal instrumental conditioning demonstrated in the
human brain. Neuron 59, 561–567 (2008).
17. Haruno, M., Kimura, M. & Frith, C. D. Activity in the nucleus accumbens and
amygdala underlies individual differences in prosocial and individualistic
economic choices. J Cogn Neurosci 26, 1861–1870 (2014).
18. Epstein, L. G. A definition of uncertainty aversion. Rev Econ Stud 66, 579–608
(1999).
19. Talmi, D., Seymour, B., Dayan, P. & Dolan, R. J. Human pavlovian-instrumental
transfer. J Neurosci 28, 360–368 (2008).
20. Bray, S., Rangel, A., Shimojo, S., Balleine, B. & O’Doherty, J. P. The neural
mechanisms underlying the influence of pavlovian cues on human decision
making. J Neurosci 28, 5861–5866 (2008).
21. Tottenham, N. et al. The NimStim set of facial expressions: judgments from
untrained research participants. Psychiatry Res 168, 242–249 (2009).
22. Fleming, S. M. & Frith, C. D. The Cognitive Neuroscience of Metacognition
(Springer, 2014).
23. Sutton, R. S. & Barto, A. G. Reinforcement Learning (MIT Press, 1998).
24. Daw, N. D. Trial-by-trial data analysis using computational models. In Delgado,
M. R., Phelps, E. A. & Robbins, T. W. (eds.) Decision Making, Affect, and Learning:
Attention and Performance XXIII, 3–38 (Oxford Univ Press, 2011).
Author contributions
N.W. and M.H. designed the experiments. N.W. performed the experiments. N.W. and
M.H. analysed the data, and wrote and reviewed the manuscript.
Additional information
Supplementary information accompanies this paper at
http://www.nature.com/scientificreports
Competing financial interests: The authors declare no competing financial interests.
How to cite this article: Watanabe, N. & Haruno, M. Effects of subconscious and conscious
emotions on human cue–reward association learning. Sci. Rep. 5, 8478; DOI:10.1038/
srep08478 (2015).
This work is licensed under a Creative Commons Attribution 4.0 International
License. The images or other third party material in this article are included in the
article’s Creative Commons license, unless indicated otherwise in the credit line; if
the material is not included under the Creative Commons license, users will need
to obtain permission from the license holder in order to reproduce the material. To
view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/