Placebo effects in cognitive training

Cyrus K. Foroughi, Samuel S. Monfort, Martin Paczynski, Patrick E. McKnight, and P. M. Greenwood

Department of Psychology, George Mason University, Fairfax, VA 22030

Edited by Michael S. Gazzaniga, University of California, Santa Barbara, CA, and approved May 17, 2016 (received for review January 22, 2016)

Author contributions: C.K.F. designed research; C.K.F. and S.S.M. analyzed data; and C.K.F., S.S.M., M.P., P.E.M., and P.M.G. wrote the paper. The authors declare no conflict of interest. This article is a PNAS Direct Submission. Data deposition: The data have been archived on Figshare, https://figshare.com/articles/Placebo_csv/2062479. To whom correspondence should be addressed. Email: cyrus.foroughi@gmail.com. This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1601243113/-/DCSupplemental.
Although a large body of research shows that general cognitive ability is heritable and stable in young adults, there is recent evidence that fluid intelligence can be heightened with cognitive training. Many researchers, however, have questioned the methodology of the cognitive-training studies reporting improvements in fluid intelligence: specifically, the role of placebo effects. We designed a procedure to intentionally induce a placebo effect via overt recruitment in an effort to evaluate the role of placebo effects in fluid intelligence gains from cognitive training. Individuals who self-selected into the placebo group by responding to a suggestive flyer showed improvements after a single, 1-h session of cognitive training that equates to a 5- to 10-point increase on a standard IQ test. Controls responding to a nonsuggestive flyer showed no improvement. These findings provide an alternative explanation for effects observed in the cognitive-training literature and the brain-training industry, revealing the need to account for confounds in future research.
placebo effects | cognitive training | brain training | fluid intelligence

Significance

Placebo effects pose problems for some intervention studies, particularly those with no clearly identified mechanism. Cognitive training falls into that category, and yet the role of placebos in cognitive interventions has not yet been critically evaluated. Here, we show clear evidence of placebo effects after a brief cognitive training routine that led to significant fluid intelligence gains. Our goal is to emphasize the importance of ruling out alternative explanations before attributing the effect to interventions. Based on our findings, we recommend that researchers account for placebo effects before claiming treatment effects.
"What's more, working memory is directly related to intelligence: the more you train, the smarter you can be."

NeuroNation (www.neuronation.com/, May 8, 2016)
The above quotation, like many others from the billion-dollar brain-training industry (1), suggests that cognitive training can make you smarter. However, the desire to become smarter may blind us to the role of placebo effects. Placebo effects are well known in the context of drug and surgical interventions (2, 3), but the specter of a placebo may arise in any intervention when the desired outcome is known to the participant, and cognitive training is such an intervention. Although a large body of research shows that general cognitive ability, g, is heritable (4, 5) and stable in young adults (6), recent research stands in contrast to this, indicating that intelligence can be heightened by cognitive training (7–12). General cognitive ability and IQ are related to many important life outcomes, including academic success (13, 14), job performance (15), health (16, 17), morbidity (18), mortality (18, 19), income (20, 21), and crime (13). In addition, the growing population of older people seeks ways to stave off devastating cognitive decline (22). Thus, becoming smarter or maintaining cognitive abilities via cognitive training is a powerful lure, raising important questions about the role of placebo effects in training studies.
The question of whether intelligence can be increased through training has generated a lively scientific debate. Recent research claims that it is possible to improve fluid intelligence (Gf: a core component of general cognitive ability, g) by means of working memory training (7–12, 23, 24); even meta-analyses support these claims (25, 26), concluding that improvements from cognitive training equate to an increase "... of 3–4 points on a standardized IQ test" (ref. 25; but cf. ref. 27). However, researchers have yet to identify, test, and confirm a clear mechanism underlying fluid intelligence gains after cognitive training (28). One potential mechanism that has yet to be tested is that the observed effects are partially due to positive expectancy, or placebo, effects.
Researchers now recognize that placebo effects may potentially confound cognitive-training [i.e., "brain training" (29)] outcomes and may underlie some of the posttraining fluid intelligence gains (24, 27, 29–32). Specifically, it has been argued that overt recruitment methods in which the expected benefits of training are stated (or implied) may lead to a sampling bias in the form of self-selection, such that individuals who expect positive results will be overrepresented in any sample of participants (29, 33). If an individual volunteers to participate in a study entitled "Brain Training and Cognitive Enhancement" because he or she thinks the training will be effective, any effect of the intervention may be partially or fully explained by participant expectations.
Expectations regarding the efficacy of cognitive training may be rooted in beliefs regarding the malleability of intelligence (34). Dweck's (34) work showed that people tend to hold strong implicit beliefs regarding whether or not intelligence is malleable and that these beliefs predict a number of learning and academic outcomes. Consistent with that work, there is evidence that individuals with stronger beliefs in the malleability of intelligence show greater improvements in fluid intelligence tasks after working-memory training (10). If individuals who believe that intelligence is malleable are overrepresented in a sample, the apparent effect of training may be related to the belief in malleability, rather than to the training itself.
The present study was motivated by concerns about overt recruitment and self-selection bias (29, 33), as well as our own observation that few published articles on cognitive training provide details regarding participant recruitment. In fact, of the primary studies included in the meta-analysis of Au et al. (25), only two provided sufficient detail to determine whether participants were recruited overtly [e.g., "sign up for a brain training study" (10)] or covertly [e.g., "did not inform subjects that they were participating in a training study" (24)]. (We were able to assess 18 of the 20 studies.) We later emailed the corresponding authors from all of the studies in the Au et al. (25) meta-analysis for more detailed recruitment information. (This step was done at the suggestion of a reviewer and occurred after data collection
was complete. We chose to place this information here instead of in the discussion to accurately portray the current recruitment standards within the field.) All but one author responded. We determined that 17 (of 19) studies used overt recruitment methods that could have introduced a self-selection bias. Specifically, 17 studies explicitly mentioned "cognitive" or "brain" training. Of those 17, we found that 11 studies further suggested the potential for improvement or enhancement. Only two studies mentioned neither (Table S1). A comparison of effect sizes listed in the Au et al. (25) meta-analysis by these three methods of recruitment (i.e., overt; overt and suggestive; and covert) lends further credence to the possibility of a confounded placebo effect. For all of the studies that overtly recruited, Hedges' g = 0.27; for all of the studies that overtly recruited and suggested improvement, Hedges' g = 0.28; and for the studies that covertly recruited, Hedges' g = 0.11. Lastly, we searched the internet (via Google) for the terms "participate in a brain training study" and "brain training participate." The top 10 results for both searches revealed six separate laboratories that are actively and overtly recruiting individuals to participate in either a "brain training study" or a "cognitive training study." Taken together, these findings provide clear evidence that suggestive recruitment methods are common and that such recruitment may contribute to the positive outcomes reported in the cognitive-training literature. We therefore hypothesized that overt and suggestive recruitment would be sufficient to induce positive posttraining outcomes.
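For readers who want to check comparisons like the one above, Hedges' g for a single two-group study can be computed as follows. This is a minimal sketch of the standard bias-corrected effect size, not the aggregation procedure used in the Au et al. (25) meta-analysis, and the input arrays are hypothetical gain scores:

```python
import numpy as np

def hedges_g(treatment: np.ndarray, control: np.ndarray) -> float:
    """Bias-corrected standardized mean difference (Hedges' g)
    for two independent groups."""
    n1, n2 = len(treatment), len(control)
    # Pooled standard deviation of the two groups
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1)
                         + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (treatment.mean() - control.mean()) / pooled_sd
    # Small-sample bias-correction factor J
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return cohens_d * j

# Hypothetical pre-to-post gain scores for a trained and an untrained group
trained = np.array([2.1, 1.4, 0.8, 2.6, 1.9, 1.2])
untrained = np.array([0.3, 0.9, -0.4, 0.6, 0.1, 0.5])
print(round(hedges_g(trained, untrained), 2))
```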
Materials and Methods
We designed a procedure to intentionally induce a placebo effect via overt recruitment. Our recruitment targeted two populations of participants using different advertisements varying in the degree to which they evoked an expectation of cognitive improvement (Fig. 1). Once participants self-selected into the two groups, they completed two pretraining fluid intelligence tests, followed by 1 h of cognitive training, and then completed two posttraining fluid intelligence tests on the following day. Two individual-difference metrics regarding beliefs about cognition and intelligence were also collected as potential moderators. The researchers who interacted with participants were blind to the goal of the experiment and to the experimental condition. Aside from their means of recruitment, all participants completed identical cognitive-training experiments. All participants read and signed an informed consent form before beginning the experiment. The George Mason University Institutional Review Board approved this research.
We recruited the placebo group (n = 25) with flyers overtly advertising a study for brain training and cognitive enhancement (Fig. 1, Left). The text "Numerous studies have shown working memory training can increase fluid intelligence" was clearly visible on the flyer. We recruited the control group (n = 25) with a visually similar flyer containing generic content that did not mention brain training or cognitive enhancement. We determined the sample sizes for both groups based upon two a priori criteria: (i) Previous, significant training studies had sample sizes of 25 or fewer (7, 8); and (ii) statistical power analyses (power = 0.7) on between-group designs dictated a sample size of 25 per group for a moderate to large effect size (d = 0.7). Our rationale for the first criterion was that we were trying to replicate previous training studies, but with the additional manipulation of a placebo that had been omitted in those studies. The second criterion simply allowed us a good chance to find a reasonably large and important effect with the sample size we selected. In sum, we felt that the sample size allowed for a good replication of prior studies, but restricted us to finding only worthwhile results to report. The final sample of participants consisted of 19 males and 31 females, with an average age of 21.5 y (SD = 2.3). The groups (n = 50; 25 for each condition) did not differ by age [t(48) = 0.18, P = 0.856] or by gender composition [χ²(1) = 0.76, P = 0.382].
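The a priori power criterion can be reproduced approximately with off-the-shelf software. The sketch below assumes a two-tailed independent-samples t test at α = 0.05, which matches the between-group design described above; the exact procedure the authors used is not stated:

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Achieved power for d = 0.7 with n = 25 per group (two-tailed, alpha = .05)
achieved = power_analysis.power(effect_size=0.7, nobs1=25, alpha=0.05, ratio=1.0)
print(f"power at n = 25 per group: {achieved:.2f}")  # roughly 0.67

# Conversely, the n per group needed for power = 0.7 at d = 0.7
needed = power_analysis.solve_power(effect_size=0.7, alpha=0.05, power=0.7, ratio=1.0)
print(f"n per group for power = 0.7: {needed:.1f}")  # roughly 26
```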
After the pretests (Gf assessments described below), participants completed 1 h of cognitive training with an adaptive dual n-back task (SI Materials and Methods). We chose this task for two reasons: First, it is commonly used in cognitive-training research, and, second, a high-face-validity task was required to maintain the credibility of the training regimen [compare placebo pain medication appearing identical to the real medication (35)]. In this task, participants were presented with two streams of information: auditory and visuospatial. There were eight stimuli per modality, presented at a rate of 3 s per stimulus. For each stream, participants decided whether the current stimulus matched the stimulus that was presented n items ago. Our n-back task was an adaptive version in which the level of n changed as performance increased or decreased within each block.

Fig. 1. Recruitment flyers for placebo (Left) and control (Right) groups.
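To make the adaptive rule concrete, the sketch below implements one common staircase for an adaptive n-back: raise n after an accurate block and lower it after an inaccurate one. The accuracy thresholds, block count, and simulated responses are our illustrative assumptions, not the parameters of the actual task (those are given in SI Materials and Methods):

```python
import random

def simulate_block_accuracy(n: int, trials: int = 20) -> float:
    """Stand-in for a real block of auditory + visuospatial match judgments;
    higher n is assumed to yield lower accuracy."""
    p_correct = max(0.5, 0.95 - 0.10 * (n - 2))
    return sum(random.random() < p_correct for _ in range(trials)) / trials

def run_adaptive_dual_nback(n_blocks: int = 20) -> int:
    n = 2  # all participants began at 2-back
    for _ in range(n_blocks):
        accuracy = simulate_block_accuracy(n)
        if accuracy >= 0.90:
            n += 1                      # high accuracy: increase difficulty
        elif accuracy < 0.70 and n > 1:
            n -= 1                      # low accuracy: decrease difficulty
    return n

print(run_adaptive_dual_nback())        # final n-back level reached
```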
Results
All analyses were conducted by using mixed-effects linear regression with restricted maximum likelihood. As expected, both groups' training performance improved over time [B = 0.016, SE = 0.002, t(48) = 10.5, P < 0.001]. All participants began at 2-back; 18% did not advance beyond a 2-back, 68% finished training at a 3-back, and 14% at a 4-back. Training performance did not differ by group [B = 0.002, SE = 0.002, t(48) = 1.00, P = 0.321]: Both the placebo and control groups completed training with a similar degree of success. A placebo effect can occur in the absence of training differences between groups.
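Analyses of this form can be sketched with standard mixed-model software. The snippet below uses simulated data in place of the study's deposited dataset, with column names (subject, group, time, score) that are our assumptions for illustration; it fits a random intercept per participant by REML, so the time × group coefficient plays the role of the interaction terms reported below:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the design: 25 placebo and
# 25 control participants, each measured pre (time = 0) and post (time = 1).
rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(50), 2)
group = np.repeat(["placebo"] * 25 + ["control"] * 25, 2)
time = np.tile([0, 1], 50)
ability = np.repeat(rng.normal(10, 2, 50), 2)            # per-person intercepts
gain = np.where((group == "placebo") & (time == 1), 1.0, 0.0)
score = ability + gain + rng.normal(0, 0.5, 100)
df = pd.DataFrame({"subject": subjects, "group": group,
                   "time": time, "score": score})

# Random intercept per participant; REML estimation (statsmodels' default).
model = smf.mixedlm("score ~ time * group", data=df, groups=df["subject"])
result = model.fit(reml=True)
print(result.summary())  # the time:group coefficient estimates the interaction
```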
The placebo effect does, however, necessitate an effect on the outcome of interest. Pretraining and posttraining fluid intelligence was measured with Raven's Advanced Progressive Matrices (RAPM) and the Bochumer Matrices Test (BOMAT), two tests of inductive reasoning widely used to assess Gf (36–38). No baseline differences were found between groups on either test [t(48) = 0.063, P = 0.939 and t(48) = 0.123, P = 0.938, respectively]. We observed a main effect of time on test performance in which scores on both intelligence tests increased from pretraining to posttraining. These main effects of time on both intelligence measures, however, were qualified by an interaction with group [RAPM: B = 0.65, SE = 0.19, t(48) = 3.41, P = 0.0013, d = 0.98; and BOMAT: B = 0.82, SE = 0.18, t(48) = 4.63, P < 0.0001, d = 1.34]. Specific contrasts showed that these moderation effects were entirely driven by the participants in the placebo group, the only individuals in the study to score significantly higher on posttraining compared with pretraining sessions for both the RAPM [B = 1.04, SE = 0.19, t(48) = 5.46, P < 0.0001, d = 0.50] and the BOMAT [B = 1.28, SE = 0.18, t(48) = 7.22, P < 0.0001, d = 0.39]. Extrapolating RAPM to IQ (25, 39, 40), these improvements equate to a 5- to 10-point increase on a standardized 100-point IQ test (SI Materials and Methods). In contrast, the pretraining and posttraining scores for participants in the control group were statistically indistinguishable, both for the RAPM [B = 0.12, SE = 0.19, t(48) = 0.63, P = 0.922] and for the BOMAT [B = 0.12, SE = 0.18, t(48) = 0.68, P = 0.905]. The results are summarized in Tables S2–S4 and depicted in Fig. 2. Interestingly, pooling the data across groups to form one sample (combining the self-selection and control groups) revealed significant posttraining outcomes [B = 0.41, SE = 0.11, t(49) = 3.90, P = 0.0003, d = 0.28 (RAPM); and B = 0.50, SE = 0.15, t(49) = 4.69, P < 0.0001, d = 0.21 (BOMAT)]. That is, the effect from the placebo group was strong enough to overcome the null effect from the control group (when pooled).

Fig. 2. Estimated marginal means of the RAPM (Left) and BOMAT (Right) scores by time and group; error bars represent SEs.
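In outline, extrapolations of this kind rescale a raw matrix-test gain by the test's normative standard deviation onto the IQ metric (mean 100, SD 15). The formula below is a generic sketch of that rescaling, not necessarily the exact mapping used in SI Materials and Methods:

```latex
% Generic rescaling of a raw score gain onto the IQ scale (mean 100, SD 15)
\Delta\mathrm{IQ} \approx
  \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{SD_{\mathrm{norm}}}
  \times 15
```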
We also observed differences between groups for scores on the Theories of Intelligence scale, which measures beliefs regarding the malleability of intelligence (34). The participants in the placebo group reported substantially higher scores on this index compared with controls [B = 14.96, SE = 1.93, t(48) = 7.75, P < 0.0001, d = 2.15], indicating a greater confidence that intelligence is malleable. These findings indicate that our manipulation via recruitment flyer produced significantly different groups with regard to expectancy. We did not detect differences in Need for Cognition scores (41) [B = 0.56, SE = 5.67, t(48) = 0.10, P = 0.922] (Fig. 3). Together, these results support the interpretation that participants self-selected into groups based on differing expectations.

Fig. 3. Estimated marginal means of the Theories of Intelligence and Need for Cognition scales by group; error bars represent SEs.
We also tested whether the response time to volunteer for our study influenced the aforementioned findings. Specifically, we noticed that the placebo condition appeared to fill faster than the control condition did (366 vs. 488 h). It is possible that speed of signup might represent another measure for, or perhaps gradations within, the strength of the placebo effect. The volunteer response time differences by group failed to produce a significant effect on either the RAPM [B = 0.04, SE = 0.17, t(46) = 0.23, P = 0.819] or the BOMAT [B = 0.20, SE = 0.16, t(46) = 1.28, P = 0.201]. Volunteer response time also failed to explain the improvement observed within the placebo group alone, on the RAPM [B = 0.20, SE = 0.20, t(23) = 0.95, P = 0.341] and the BOMAT [B = 0.26, SE = 0.22, t(23) = 1.22, P = 0.237] (Fig. 4).

Fig. 4. Improvement in test scores from pretraining to posttraining by group and speed of participant signup, split into fast and slow (z = −1 and z = +1 of minutes since experiment onset, respectively). Error bars represent SE.
Researchers have hypothesized that a training dosage effect may exist, such that the quality of performance on a training task is associated with the degree of subsequent skill transfer (7). However, as discussed previously, no pre–post improvements occurred within the control group, even though all participants performed equally well on the training task. Consequently, training performance did not predict subsequent performance improvement on its own [B = 0.017, SE = 0.20, t(46) = 0.09, P = 0.930], nor did it moderate the effect of group on the observed test performance improvements [B = 0.16, SE = 0.28, t(46) = 0.58, P = 0.567] (Fig. 5). Therefore, our data do not support the dosage-effect hypothesis.

Fig. 5. Improvement in test scores from pretraining to posttraining by group and performance on training task (z = −1 and z = +1 of training performance, respectively). Error bars represent SE.
Discussion
We provide strong evidence that placebo effects from overt and suggestive recruitment can affect cognitive-training outcomes. These findings support the concerns of many researchers (24, 27, 29–32), who suggest that placebo effects may underlie positive outcomes seen in the cognitive-training literature. By capitalizing on the self-selecting tendencies of participants with strong positive beliefs about the malleability of intelligence, we were able to induce an improvement in Gf after 1 h of working memory training. We acknowledge that the flyer itself could have induced the positive beliefs about the malleability of intelligence. Either way, these findings present an alternative explanation for effects reported in the cognitive-training literature and in the brain-training industry, demonstrating the need to account for placebo effects in future research.
Importantly, we do not claim that our study revealed a population of individuals whose intelligence was truly changed by the training that they received in our study. It is extremely unlikely that individuals in the placebo group increased their IQ by 5–10 points with 1 h of cognitive training. Three elements of our design and results support this position. First, a single, 1-h training session is far less than the traditional 15 or more hours spread across weeks commonly used in training studies (8, 10, 23). We argue that the use of a very short training period was sufficient to avoid a true training effect. Second, we observed similar baseline scores on both of the fluid intelligence tests between groups, suggesting that both groups were equally engaged in the experiment. Thus, initial nonequivalence between groups or regression artifacts are likely absent from our design. Third, equivalent performance on the training task between groups suggests that the differences in posttraining intelligence were not the (direct) result of training. If the groups had shown dramatically different training effects on the dual n-back task, it might follow that one group would show higher posttraining scores on the test of general cognitive ability.
Therefore, our study, to our knowledge, is the first to explicitly model the main effect of expectancy while controlling for the effect of training. That is, because our design was unlikely to have produced true training effects, our positive effects on Gf are solely the result of overt and suggestive recruitment. Although posttraining gains in fluid intelligence are typically discussed in terms of a main effect of training (7, 8, 10, 11), we argue that such studies cannot rule out an interaction between training and effects from overt and suggestive recruitment. Furthermore, based on the evidence we reviewed above, we are unaware of any previous studies that obtained a positive main effect of training in the absence of expectation or self-selection. Indeed, to our knowledge, the rigor of double-blind randomized clinical trials is nonexistent in this research area.
Moving forward, we suggest that researchers exercise care in their design of cognitive-training studies. Our findings raise philosophical concerns and questions that merit discussion within the scientific field so that this area of inquiry can advance. We discuss two different schools of thought about how to recruit participants and design training studies. We hope that this work can begin a conversation leading to a consensus on how to best design future research in this field.
First, following in the tradition of randomized controlled trials used in medicine, one approach suggests that recruitment and study design should be as covert as possible (29, 32). Specifically, several research groups have argued for the need to remove study-specific information from the recruitment and briefing procedures, avoid providing the goals of the research to participants, and omit mention of any anticipated outcomes (29, 32, 33). The purpose of such a design would be to minimize any confounding effects (e.g., placebo or expectation). Our earlier review of the Au et al. (25) meta-analysis revealed two studies that followed this approach.
Alternatively, the second approach suggests that we should recruit only participants who believe that the training will work, and that we should do so using overt methods. Such a screening process would eliminate participants whose prior beliefs would prevent an otherwise effective treatment from having an effect. That is, if participants do not care about the training, put little effort in, or are motivated solely by something else (e.g., money), they are not likely to improve with any intervention, including cognitive training. Although positive expectancies would be overrepresented in such an overtly recruited sample, proper use of active controls should allow training effects to be isolated from expectation. This view is in line with some in the medical domain who argue that researchers can make use of participant expectation to better test treatment effects in randomized controlled trials (42). It is also in line with some in the psychotherapy domain who argue that motivation is important for treatment effectiveness (43).
One interesting consideration is the likelihood that these two
design approaches recruit from different subpopulations. Dweck
(34) has shown that individuals hold implicit beliefs regarding
whether or not intelligence is malleable and that these beliefs
predict a number of learning and academic outcomes. Thus, it is
possible that the benefits from cognitive training occur only in
individuals who believe the training will be effective. That being
said, this possibility is not applicable to our data because our
design eliminated a main effect of training. It will be important
in future work to investigate the relation between expectation
and processes of learning during cognitive training.
Our data do not allow us to understand the field as a whole; instead, they allow us to understand existing limitations in current research that require further exploration. To wit, we identified expectancy as a major factor that needs to be considered for a fuller understanding of training effects. More rigorous designs, such as double-blind, block-randomized controlled trials that measure multiple outcomes, may offer a better test of these cognitive-training effects. Blinding subjects to cognitive training may be the biggest obstacle in these designs, as pointed out by Boot et al. (29), because participants become aware of the goals of the study. Furthermore, assessing expectancy and personal theories of intelligence malleability (cf. ref. 34) before randomization to ensure adequate representation in all groups would allow us to better assess the true training effects and the potential for expectancy to produce effects alone or in interaction with training. Finally, researchers should use more measures of Gf to determine whether positive outcomes are the result of latent changes or changes in test-specific performance. We are aware of no study to date, including the present one, that uses these rigorous methods. (We include the present one by design. Our goal was to determine whether a main effect of expectation existed using methods similar to published research.) By using such methods, we can begin to understand whether true training effects exist and are generalizable to samples (and perhaps populations) beyond those who expect to improve.
Conclusion
Our findings have important implications for cognitive-training research and the brain-training industry at large. Previous cognitive-training results may have been inadvertently influenced by placebo effects arising from recruitment or design. For the field of cognitive training to advance, it is important that future work report recruitment information and include the Theories of Intelligence Scale (34) to determine the relation between observed effects of training and of expectancy. The brain-training industry may be advised to temper its claims until the role of placebo effects is better understood. Many commercial brain-training websites make explicit claims about the effectiveness of their training that are not currently supported by many in the scientific community (ref. 44; cf. ref. 45). Consistent with that concern, one of the largest brain-training companies in the world agreed in January 2016 to pay a $2 million fine to the Federal Trade Commission for deceptive advertising about the benefits of its programs (46). The deception (exaggerated claims of training efficacy) may be fueling a placebo effect that could contaminate actual brain-training effects.
We argue that our findings also have broad implications for the advancement of the science of human cognition. In a recent replication effort published in Science, only 36% (35 of 97) of the psychological science studies (including those that fall under the broad category of neuroscience) were successfully replicated (47). Failure to control or account for placebo effects could have contributed to some of these failed replications. Our goal in any experiment should be to take every step possible to ensure that the effects we seek are the result of manipulated interventions, not confounds that go unreported or undetected.

ACKNOWLEDGMENTS. This work was supported by George Mason University; the George Mason University Provost PhD Awards; Office of Naval Research Grant N00014-14-1-0201; and Air Force Office of Scientific Research Grant FA9550-10-1-0385.
1. Selk J (2013) Amidst billion-dollar brain fitness industry, a free way to train your brain. Forbes. Available at www.forbes.com/sites/jasonselk/2013/08/13/amidst-billion-dollar-brain-fitness-industry-a-free-way-to-train-your-brain/#6b3457647d41. Accessed May 8, 2016.
2. Shapiro AK (1971) Handbook of Psychotherapy and Behavior Change (Wiley, New York).
3. Turner JA, Deyo RA, Loeser JD, Von Korff M, Fordyce WE (1994) The importance of placebo effects in pain treatment and research. JAMA 271(20):1609–1614.
4. Bouchard TJ, Jr, Lykken DT, McGue M, Segal NL, Tellegen A (1990) Sources of human psychological differences: The Minnesota Study of Twins Reared Apart. Science 250(4978):223–228.
5. Plomin R, Pedersen NL, Lichtenstein P, McClearn GE (1994) Variability and stability in cognitive abilities are largely genetic later in life. Behav Genet 24(3):207–215.
6. Larsen L, Hartmann P, Nyborg H (2008) The stability of general intelligence from early adulthood to middle-age. Intelligence 36(1):29–34.
7. Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ (2008) Improving fluid intelligence with training on working memory. Proc Natl Acad Sci USA 105(19):6829–6833.
8. Jaeggi SM, et al. (2010) The relationship between n-back performance and matrix reasoning: Implications for training and transfer. Intelligence 38(6):625–635.
9. Jaeggi SM, Buschkuehl M, Jonides J, Shah P (2011) Short- and long-term benefits of cognitive training. Proc Natl Acad Sci USA 108(25):10081–10086.
10. Jaeggi SM, Buschkuehl M, Shah P, Jonides J (2014) The role of individual differences in cognitive training and transfer. Mem Cognit 42(3):464–480.
11. Rudebeck SR, Bor D, Ormond A, O'Reilly JX, Lee AC (2012) A potential spatial working memory training task to improve both episodic memory and fluid intelligence. PLoS One 7(11):e50431.
12. Strenziok M, et al. (2014) Neurocognitive enhancement in older adults: Comparison of three cognitive training tasks to test a hypothesis of training transfer in brain connectivity. Neuroimage 85(Pt 3):1027–1039.
13. Neisser U, et al. (1996) Intelligence: Knowns and unknowns. Am Psychol 51(2):77–101.
14. Watkins MW, Lei P-W, Canivez GL (2007) Psychometric intelligence and achievement: A cross-lagged panel analysis. Intelligence 35(1):59–68.
15. Schmidt FL, Hunter JE (1998) The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychol Bull 124(2):262–275.
16. Gottfredson LS (2004) Intelligence: Is it the epidemiologists' elusive "fundamental cause" of social class inequalities in health? J Pers Soc Psychol 86(1):174–199.
17. Gottfredson LS, Deary IJ (2004) Intelligence predicts health and longevity, but why? Curr Dir Psychol Sci 13(1):1–4.
18. Whalley LJ, Deary IJ (2001) Longitudinal cohort study of childhood IQ and survival up to age 76. BMJ 322(7290):819.
19. O'Toole BI, Stankov L (1992) Ultimate validity of psychological tests. Pers Individ Dif 13(6):699–716.
20. Ceci SJ, Williams WM (1997) Schooling, intelligence, and income. Am Psychol 52(10):1051–1058.
21. Strenze T (2007) Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence 35(5):401–426.
22. Willis SL, et al.; ACTIVE Study Group (2006) Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA 296(23):2805–2814.
23. Harrison TL, et al. (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychol Sci 24(12):2409–2419.
24. Redick TS, et al. (2013) No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. J Exp Psychol Gen 142(2):359–379.
25. Au J, et al. (2015) Improving fluid intelligence with training on working memory: A meta-analysis. Psychon Bull Rev 22(2):366–377.
26. Karbach J, Verhaeghen P (2014) Making working memory work: A meta-analysis of executive-control and working memory training in older adults. Psychol Sci 25(11):2027–2037.
27. Dougherty MR, Hamovitz T, Tidwell JW (2016) Reevaluating the effectiveness of n-back training on transfer through the Bayesian lens: Support for the null. Psychon Bull Rev 23(1):306–316.
28. Greenwood PM, Parasuraman R (November 16, 2015) The mechanisms of far transfer from cognitive training: Review and hypothesis. Neuropsychology, 10.1037/neu0000235.
29. Boot WR, Simons DJ, Stothart C, Stutts C (2013) The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspect Psychol Sci 8(4):445–454.
30. Melby-Lervåg M, Hulme C (2013) Is working memory training effective? A meta-analytic review. Dev Psychol 49(2):270–291.
31. Melby-Lervåg M, Hulme C (2016) There is no convincing evidence that working memory training is effective: A reply to Au et al. (2014) and Karbach and Verhaeghen (2014). Psychon Bull Rev 23(1):324–330.
32. Shipstead Z, Redick TS, Engle RW (2012) Is working memory training effective? Psychol Bull 138(4):628–654.
33. Boot WR, Blakely DP, Simons DJ (2011) Do action video games improve perception and cognition? Front Psychol 2:226.
34. Dweck C (2000) Self-Theories: Their Role in Motivation, Personality, and Development (Psychology Press, New York).
35. Evans FJ (1974) The placebo response in pain reduction. Adv Neurol 4:289–296.
36. Hossiep R, Turck D, Hasella M (1999) Bochumer Matrizentest (BOMAT) Advanced (Hogrefe, Goettingen, Germany).
37. Raven JC, Court JH, Raven J, Kratzmeier H (1994) Advanced Progressive Matrices (Oxford Psychologists Press, Oxford).
38. Raven JC, Court JH (1998) Raven's Progressive Matrices and Vocabulary Scales (Oxford Psychologists Press, Oxford).
39. Frey MC, Detterman DK (2004) Scholastic assessment or g? The relationship between the Scholastic Assessment Test and general cognitive ability. Psychol Sci 15(6):373–378.
40. Raven J, Raven JC, Court JH (1998) Manual for the Raven's Progressive Matrices and Vocabulary Scales (Oxford Psychologists Press, Oxford).
41. Cacioppo JT, Petty RE (1982) The need for cognition. J Pers Soc Psychol 42(1):805–818.
42. Torgerson DJ, Klaber-Moffett J, Russell IT (1996) Patient preferences in randomised trials: Threat or opportunity? J Health Serv Res Policy 1(4):194–197.
43. Ryan RM, Lynch MF, Vansteenkiste M, Deci EL (2010) Motivation and autonomy in counseling, psychotherapy, and behavior change: A look at theory and practice. Couns Psychol 39(2):193–260.
44. Stanford Center on Longevity (2014) A consensus on the brain training industry from the scientific community. Available at longevity3.stanford.edu/blog/2014/10/15/the-consensus-on-the-brain-training-industry-from-the-scientific-community-2/. Accessed May 8, 2016.
45. Cognitive Training Data (2015) Open letter response to the Stanford Center on Longevity. Available at www.cognitivetrainingdata.org/. Accessed May 8, 2016.
46. Federal Trade Commission (2016) Lumosity to pay $2 million to settle FTC deceptive advertising charges for its "brain training" program. Available at https://www.ftc.gov/news-events/press-releases/2016/01/lumosity-pay-2-million-settle-ftc-deceptive-advertising-charges. Accessed May 8, 2016.
47. Open Science Collaboration (2015) Estimating the reproducibility of psychological science. Science 349(6251):aac4716.