What is the “science of science communication”?
Dan M. Kahan
Abstract

This essay seeks to explain what the “science of science communication” is
by doing it. Surveying studies of cultural cognition and related dynamics, it
demonstrates how the form of disciplined observation, measurement, and
inference distinctive of scientific inquiry can be used to test rival hypotheses
on the nature of persistent public conflict over societal risks; indeed, it
argues that satisfactory insight into this phenomenon can be achieved only
by these means, as opposed to the ad hoc story-telling dominant in popular
and even some forms of scholarly discourse. Synthesizing the evidence,
the essay proposes that conflict over what is known by science arises from
the very conditions of individual freedom and cultural pluralism that make
liberal democratic societies distinctively congenial to science. This tension,
however, is not an “inherent contradiction”; it is a problem to be solved —
by the science of science communication understood as a “new political
science” for perfecting enlightened self-government.
Keywords

Risk communication
Introduction

Public opinion on societal risks presents a disorienting spectacle. Is the earth
warming up as a result of human activity? Can nuclear wastes be safely stored in
deep underground rock formations? Can natural gas be safely extracted by
hydraulic fracturing of bedrock? Will inoculating adolescent girls against the
human papilloma virus — an extremely common sexually transmitted disease
responsible for cervical cancer — lull them into engaging in unprotected sex,
thereby increasing their risk of pregnancy or of other STDs? Does allowing citizens
to carry concealed handguns in public increase crime — or reduce it by deterring
violent predation?
Never have human societies known so much about mitigating the dangers they face
but agreed so little about what they collectively know. Because this disjunction
features the persistence of divisive conflict in the face of compelling scientific
evidence, we can refer to it as the “science communication paradox” (Figure 1).
Resolving this paradox is the central aim of a new science of science communication.
Its central findings suggest that intensifying popular conflict over collective
knowledge is in fact a predictable byproduct of the very conditions that make free,
democratic societies so hospitable to the advancement of science. But just as science
has equipped society to repel myriad other threats, so the science of science
communication can be used to fashion tools specifically suited to dispelling the
science communication paradox.
Figure 1. Polarization over risk. Scatterplots relate risk perceptions to political outlooks for
members of a nationally representative sample (N = 1800), April–May 2014 [Kahan, 2015].
The “public irrationality thesis”
What is the “science of science communication”? One could easily define it with
reference to some set of signature methods and aims [Fischhoff and Scheufele,
2013]. But more compelling is simply to do the science of science communication —
to show what it means to approach the science communication paradox scientifically.
The most popular explanation for the science communication paradox can be called
the “public irrationality thesis” or “PIT.” Members of the public, PIT stresses, are
not very science literate. In addition, they do not think like scientists. Scientists
assess risk in a conscious, deliberate fashion, employing the analytical reasoning
necessary to make sense of empirical evidence. Members of the public, in contrast,
appraise hazards intuitively, on the basis of fast-acting unconscious emotions. As a
result, members of the public overestimate dramatic or sensational risks like
terrorism and discount more remote but more consequential ones — like climate
change [Weber, 2006; Marx et al., 2007; Sunstein, 2007; Sunstein, 2005].
PIT features genuine cognitive mechanisms known to be important in various
settings [Kahneman, 2003; Frederick, 2005]. It therefore supplies a very plausible
explanation of the science communication paradox.
But there will inevitably be a greater number of plausible accounts of any complex
social phenomenon than can actually be true [Watts, 2011]. Cognitive psychology
supplies a rich inventory of dynamics — “dissonance avoidance”, “availability
cascades”, “tipping points”, “emotional numbing”, “fast vs. slow cognition”, and
the like. Treating these as a grab bag of argument templates, any imaginative op-ed
writer can construct a seemingly “scientific” account of public conflict over risk.
Conjectures of this sort are not a bad thing. But those who offer them should
acknowledge that they are only hypotheses, in need of empirical testing, and not
hold them forth as genuine empirical “explanations.” Otherwise, our understanding
of the science communication paradox will drown in a sea of just-so stories.
So does PIT withstand empirical testing? If the reason members of the public fail to
take climate change as seriously as scientists think they should is that the public
lacks the knowledge and capacity necessary to understand empirical information,
then we would expect the gap between public and expert perceptions to narrow as
members of the public become more science literate and more proficient in critical
reasoning.
But that does not happen (Figure 2). Members of the public who score highest in
one or another measure of science comprehension, studies show, are no more
concerned about global warming than those who score the lowest [Kahan, 2015;
Kahan et al., 2012]. The same pattern, moreover, characterizes multiple other
contested risks, such as the ones posed by nuclear power, fracking, and private
possession of firearms [Kahan, 2015].
Figure 2. Impact of science comprehension on climate change polarization. Error bars are
0.95 confidence intervals (N = 1540) [Kahan et al., 2012].
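The logic of this test can be made concrete. The sketch below is purely illustrative, not the analysis behind Figure 2: it simulates survey responses and fits the kind of interaction model that discriminates the rival predictions. Under PIT, the gap between opposing political outlooks should shrink as science comprehension rises, so the interaction term should be near zero or work against the outlook gap; a reliably gap-widening interaction is evidence of something else. The variable names, the simulation, and the effect sizes are all assumptions.

```python
# Purely illustrative: simulated data and assumed variable names; not the
# study's actual code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1800
df = pd.DataFrame({
    "conservrepub": rng.standard_normal(n),  # right-leaning outlook, z-scored
    "scicomp": rng.standard_normal(n),       # science comprehension, z-scored
})
# Simulate the observed pattern: outlook matters more, not less, as
# comprehension rises (a gap-widening interaction).
df["risk"] = (-0.4 * df["conservrepub"]
              - 0.3 * df["conservrepub"] * df["scicomp"]
              + rng.standard_normal(n))

# PIT predicts an interaction coefficient near zero (groups converge as
# scicomp rises); a significant coefficient that widens the outlook gap
# contradicts PIT.
fit = smf.ols("risk ~ conservrepub * scicomp", data=df).fit()
print(fit.summary().tables[1])
```

On data exhibiting the observed pattern, the outlook-by-comprehension interaction comes back large and reliable: polarization grows, rather than shrinks, with comprehension.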
The “cultural cognition thesis”
Another plausible conjecture — another hypothesis about the science
communication paradox — is the “cultural cognition thesis” (CCT). CCT posits that
certain types of group affinities are integral to the mental processes ordinary
members of the public use to assess risk [Kahan et al., 2010].
“Motivated reasoning” refers to the tendency of people to conform their
assessments of all sorts of evidence to some goal unrelated to accuracy [Sood, 2013;
Kunda, 1990]. Students from rival colleges, for example, can be expected to form
opposing perceptions when viewing a film of a disputed officiating call in a football
game between their schools, consistent with their stake in experiencing emotional
solidarity with their peers [Hastorf and Cantril, 1954].
CCT says this same thing occurs when members of the public access information
about contested societal risks. When positions on facts become associated with
opposing social groups — not universities but rather everyday networks of people
linked by common moral values, political outlooks, and social norms —
individuals selectively assess evidence in patterns that reflect their group
identities [Kahan, 2011].
Numerous studies support CCT. In one, my colleagues and I examined the impact
of cultural cognition on perceptions of scientific consensus [Kahan, Jenkins-Smith
and Braman, 2013]. We asked our subjects — a large, nationally representative
sample of U.S. adults — to indicate whether they regarded particular scientists as
“experts” whose views an ordinary citizen ought to take into account on climate
change, nuclear waste disposal, and gun control. We picked these issues precisely
because they feature disputes over empirical, factual issues among opposing
cultural groups.
The scientists were depicted as possessing eminent qualifications, including
degrees from, and faculty appointments at, prestigious universities. However, half
the study subjects saw a book excerpt in which the featured scientist took the “high
risk” position (global warming is occurring; underground disposal of nuclear waste
is unsafe; permitting carrying of concealed handguns increases crime) and half an
excerpt in which the same scientist took the “low risk” position (there’s no clear
evidence of human-caused global warming; underground disposal of nuclear wastes
is safe; permitting concealed carry reduces crime).
The subjects’ assessments of the scientists’ expertise, we found, depended on the fit
between the position attributed to the expert and the position held by most of the
subjects’ cultural peers. If the featured scientist was depicted as endorsing the
dominant position in a subject’s cultural group, the subject was highly likely to
classify that scientist as an “expert” on that issue; if not, then not (Figure 3). Like
sports fans motivated to see the officiating replay as supporting their team, the
subjects selectively credited or discredited the evidence we showed them — the
position of a highly qualified scientist — in a manner supportive of their group’s
position.
Figure 3. Biased perceptions of scientific expertise. Colored bars reflect 0.95 confidence
intervals (N = 1336) [Kahan, Jenkins-Smith and Braman, 2013].
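The inferential logic of this experiment can be sketched in a few lines. The following is a hypothetical reconstruction on simulated data, not the study’s analysis; “fit” stands for agreement between the scientist’s attributed position and the dominant position in the subject’s group, and all names and magnitudes are assumptions.

```python
# Hypothetical reconstruction: simulated subjects, assumed coding. CCT
# predicts "fit" (the position matches one's group), not the position
# itself, drives who counts as an "expert".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1336
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),     # subject's cultural group (binary)
    "highrisk": rng.integers(0, 2, n),  # scientist shown taking high-risk view
})
# fit = 1 when the attributed position matches the group's dominant position
df["fit"] = (df["group"] == df["highrisk"]).astype(int)

# Simulate motivated crediting: fit sharply raises the odds of an
# "expert" classification.
logit_p = -0.5 + 2.0 * df["fit"]
df["expert"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# A large positive coefficient on fit, with little effect of the position
# per se, is the CCT signature.
print(smf.logit("expert ~ fit + highrisk", data=df).fit(disp=0).summary())
```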
If this is how members of the public assess evidence of “expert consensus” outside
the lab, we should expect members of diverse cultural groups to be polarized not
just on particular risks but also on the weight of scientific opinion on those risks. In
a survey component of the study, we found exactly that: subjects of diverse
affiliations all strongly believed that the position that predominated in their group
was consistent with “scientific consensus.” In relation to National Academy of
Sciences “expert consensus reports”, all the groups were as likely to be right as
wrong across the run of issues.
Science comprehension and polarization
PIT and CCT have also squared off face-to-face. Under PIT, one should expect
individuals who are high in science comprehension to use their knowledge and
reasoning proficiency to form risk perceptions supported by the best available
scientific evidence. Individuals who lack such knowledge and reasoning
proficiencies must “go with their gut”, relying on intuitive heuristics like “what do
people like me believe?” [Weber and Stern, 2011; Sunstein, 2006]. Accordingly,
under PIT one would predict that as members of opposing cultural groups become
more science literate and more adept at analytical reasoning — and thus less
dependent on heuristic substitutes for science comprehension — they should
converge in beliefs on climate change.
But the evidence refutes this prediction. In fact, the most science-comprehending
members of opposing cultural groups, my colleagues and other researchers [Kahan
et al., 2012; Hamilton, Cutler and Schaefer, 2012] have found, are the most polarized
(Figure 4).
This is the outcome CCT predicts. If people can be expected to fit their assessments
of evidence to the dominant position within their cultural groups, then those
individuals most adept in reasoning about scientific data should be even “better” at
forming culturally congenial beliefs than their less adept peers. This hypothesis is
borne out by experiments showing that individuals who score highest on tests of
one or another reasoning disposition opportunistically use that disposition to
search out evidence supportive of their cultural predispositions and explain away
the rest.
Figure 4. Polarizing impact of science comprehension on climate-change risk perceptions.
Nationally representative sample (N = 1540). Shaded areas represent 0.95 confidence
intervals [Kahan et al., 2012].
Pathological vs. normal cases
Scientific investigation of the science communication paradox, then, suggests that
CCT furnishes a more satisfactory explanation than PIT. But it also reveals
something else: such conflict — including the magnification of it by science
comprehension — is not the norm. From the dangers of consuming artificially
sweetened beverages to the safety of medical x-rays to the carcinogenic effect of
exposure to power-line magnetic fields, the number of issues that do not culturally
polarize the public is orders of magnitude larger than the number that do (Figure 5
and Figure 6).
Figure 5. “Polarized” vs. “unpolarized” risk perceptions. Scatterplots relate risk perceptions
to political outlooks for members of a nationally representative sample (N = 1800)
[Kahan, 2015].
Members of the public definitely do not have a better grasp of the science on the
myriad issues that don’t polarize them than they have of the few that do. In order
simply to live — much less live well — individuals need to accept as known by
science much more than they could comprehend or verify on their own. They do
this by becoming experts at figuring out who knows what about what. It does not
matter, for example, that half the U.S. population (science literacy tests show)
believes “antibiotics kill viruses as well as bacteria” [National Science Foundation,
2014]: they know they should go to the doctor and take the medicine she prescribes
when they are sick.
The place in which people are best at exercising this knowledge-recognition skill,
moreover, is inside of identity-defining affinity groups. Individuals spend most of
their time with people who share their basic outlooks, and thus get most of their
information from them. They can also read people “like them” better — figuring
out who genuinely knows what’s known by science and who is merely pretending
to [Watson, Kumar and Michaelsen, 1993].
This strategy is admittedly insular. But that is not usually a problem either: all the
major cultural groups with which people identify are amply stocked with highly
science-comprehending members and all enjoy operational mechanisms for
transmitting scientific knowledge to their members. Any group that consistently
misled its members on matters known to science and of consequence to their
well-being would soon die out. Thus, ordinary members of diverse groups
ordinarily converge on what is known by science.

Figure 6. Science comprehension and polarization. Nationally representative sample
(N = 1800), April–May 2014. Shaded areas represent 0.95 confidence intervals [Kahan, 2015].
Persistent nonconvergence — polarization — is in fact pathological. It occurs when
factual issues become entangled in antagonistic cultural meanings that transform
positions on them into badges of loyalty to opposing groups. In that circumstance,
the same process that usually guides ordinary members of the public to what’s
known by science will systematically deceive them.
Popper’s revenge. . .
It’s no accident that the best philosophical exposition of science’s distinctive way of
knowing — The Logic of Scientific Discovery [Popper, 1959] — and one of, if not
the, best philosophical expositions of liberal democracy — The Open Society and its
Enemies [Popper, 1966] — were both written by Karl Popper. Only in a society that
denies any institution the authority to stipulate what must be accepted as true,
Popper recognized, can individuals be expected to develop the inquisitive and
disputatious habits of mind that fuel the scientific engine of conjecture and
refutation.
But as Popper understood, removing this barrier to knowledge does not dispense
with the need for reliable mechanisms for certifying what science knows. What’s
distinctive of the Popperian “liberal republic of science” is not the absence of a
social process for certifying valid knowledge but the multiplication of potential
certifiers in the form of the pluralistic communities entered into by freely reasoning
citizens.
Again, these communities typically will converge on what’s known to science. But
as the volume of knowledge and number of cultural certifiers both continue to
grow, the occasions for disagreement among cultural groups necessarily increase.
An expanding number of conflicts is therefore guaranteed by sheer fortuity alone,
although such conflicts can no doubt also be instigated for strategic gain.
Thus, the science communication paradox — the simultaneous increase in
knowledge and conflict over what’s known — is built into the constitution of the
liberal republic of science. The science communication paradox is Popper’s
revenge.
The disentanglement principle
But as Popper also taught, there are no immutable forces at work in human history.
The same tools used to fashion a scientific account of the source of the science
communication paradox can be used to dispel it. The fundamental source of the
paradox, empirical study suggests, is the entanglement of opposing factual beliefs
with people’s identities as members of one or another cultural group. It’s logical to
surmise, then, that the solution is to disentangle knowledge and identity when
communicating scientific information [Kahan, 2015].
Lab experiments have been used to model this dynamic. In one, my research group
tested U.S. and U.K. subjects’ assessments of valid evidence on global
warming [Kahan et al., 2015]. As expected, those we had first exposed to
information on carbon-emission reductions were even more polarized on the
validity of the global-warming evidence than were members of a control group.
The images and language used to advocate carbon-emission limits triggered
cultural cognition by accentuating the symbolic association between belief in
climate change and conflict between groups defined by their opposing moral
attitudes toward commerce, industry, and free markets.
Polarization dissipated, however, among subjects who had first been exposed to
information on plans to study geoengineering. This technology resonates with the
values of cultural groups whose members prize the use of human ingenuity to
overcome environmental limits. By affirming rather than denigrating their cultural
identities, the information on geoengineering dissolved the conflict those
individuals experienced between crediting human-caused global warming and
forming stances that express their defining commitments.
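The design lends itself to a simple comparison, sketched below on simulated data. This is not the published analysis; the condition labels, slopes, and model are assumptions chosen only to show how one would test whether the preceding vignette changes how strongly outlook predicts assessments of the same climate evidence.

```python
# Illustrative only: simulated responses, assumed condition labels and effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
cond = rng.choice(["control", "emission_limits", "geoengineering"], n)
outlook = rng.standard_normal(n)  # right-leaning outlook, z-scored

# Simulate the reported pattern: the emission-limits vignette widens the
# outlook gap in evidence assessments; the geoengineering vignette narrows it.
slope = np.select(
    [cond == "control", cond == "emission_limits", cond == "geoengineering"],
    [-0.5, -0.8, -0.2],
)
validity = slope * outlook + rng.standard_normal(n)
df = pd.DataFrame({"cond": cond, "outlook": outlook, "validity": validity})

# The condition-by-outlook interactions measure how each vignette shifts
# polarization relative to the control group.
model = smf.ols("validity ~ C(cond, Treatment('control')) * outlook", data=df)
print(model.fit().summary().tables[1])
```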
This lab-study insight comports with studies of “disentanglement” strategies in
real-world settings. For example, research shows that standardized test questions
that assess “belief” in evolution don’t genuinely measure knowledge of either
evolutionary science or science generally. Instead, they measure commitment to a
form of cultural identity that features religiosity (Figure 7) [Kahan, 2015; Roos,
2012; Bishop and Anderson, 1990].
Figure 7. Disentangling identity from knowledge. Colored bars are 0.95 confidence intervals.
Standardized test items on evolution generate biased results when administered to highly
religious persons, but the effect can be erased by “disentangling” identity and knowledge in
the item wording [Kahan, 2015].
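The measurement point admits a compact sketch. On this account, a test item functions as an identity measure if religiosity still predicts responses after conditioning on overall science comprehension, and a disentangled wording should erase that residual split. Everything below (item codings, coefficients, data) is simulated and assumed for illustration.

```python
# Illustrative only: simulated respondents. "belief_item" mimics a standard
# evolution item; "reworded_item" mimics a disentangled wording.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "scicomp": rng.standard_normal(n),      # overall science comprehension
    "religiosity": rng.standard_normal(n),  # composite religiosity score
})

def simulate_item(beta_knowledge, beta_identity):
    """Simulate a binary test item loading on knowledge and/or identity."""
    logit = beta_knowledge * df["scicomp"] + beta_identity * df["religiosity"]
    return (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

df["belief_item"] = simulate_item(0.2, -1.5)   # identity-expressive wording
df["reworded_item"] = simulate_item(1.2, 0.0)  # knowledge-only wording

# A large religiosity coefficient net of scicomp flags the identity-expressive
# item; it should collapse toward zero for the disentangled wording.
for y in ["belief_item", "reworded_item"]:
    print(y, smf.logit(f"{y} ~ scicomp + religiosity", data=df)
             .fit(disp=0).params, sep="\n")
```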
Consistent with this finding, education researchers have devised instructional
protocols that avoid conflating students’ knowledge of evolutionary science with
their professions of “belief in” it. By disentangling acquisition of knowledge from
the obligation to make an affirmation that denigrates religious students’ identities,
these instructional methods enable students who say they “don’t disbelieve in”
evolution to learn the elements of the modern synthesis — natural selection,
random mutation, and genetic variance — just as readily as nonreligious students
who say they “do believe in” it [Lawson and Worsnop, 1992; Lawson, 1999].
Real-world communicators have also successfully used disentanglement to
promote public engagement with climate science. Members of the Southeast
Florida Regional Climate Compact — a coalition of local governments in Broward,
Miami-Dade, Monroe, and Palm Beach Counties — have adopted a “Regional
Climate Action Plan” containing over 100 distinct mitigation and adaptation
measures.
As it happens, the residents of Southeast Florida are as polarized on whether
human activity is causing global warming as are those in the rest of the U.S. But the
deliberative process that generated the Regional Climate Action Plan didn’t put that
question; instead, officials, guided by evidence-based methods, focused relentlessly
on how communities could use scientific knowledge to address the region’s
practical, everyday needs.
The highly participatory process that led to adoption of the Regional Climate
Action Plan enveloped residents with vivid, genuine examples of diverse local
stakeholders — including businesses and local homeowner associations — evincing
confidence in climate science through their words and actions. That process
disentangled “what should we do with what we know”, a question that unifies
Southeast Floridians, from “whose side are you on”, the divisive question that
shapes the national climate science debate [Kahan, 2015].
These examples teach a common lesson — the science communication
disentanglement principle. To negotiate the dynamics that form Popper’s Revenge,
science communication professionals must protect citizens from having to choose
between knowing what’s known by science and being who they are as members of
diverse cultural communities.
A “new political science. . . ”
But like other forms of scientific insight geared to protecting human societies from
danger, the disentanglement principle cannot be expected to implement itself.
Government regulatory procedures will need to be revised, programs of education
reorganized, and professional norms updated to refine and exploit the knowledge
generated by the science of science communication.
Identifying the precise nature of these reforms and the means for implementing
them, moreover, will likewise require empirical study and not mere imaginative
story-telling. These were the central themes of a pair of historic colloquia on the
science of science communication sponsored by the National Academy of
Sciences in 2012 and 2013.
As aristocratic forms of government yielded to modern democratic ones in the
early 19th century, Tocqueville famously called for a “new political science for a
world itself quite new” [Tocqueville, Reeve and Spencer, 1838]. Today, mature
liberal democracies require a “new political science”, too, one suited to the
distinctive challenge of enabling citizens to reliably recognize the enormous stock
of knowledge that their freedom and diversity make possible.
The science of science communication is that new political science.
References

Bishop, B. A. and Anderson, C. W. (1990). ‘Student conceptions of natural selection
and its role in evolution’. Journal of Research in Science Teaching 27 (5),
pp. 415–427.
Fischhoff, B. and Scheufele, D. A. (2013). ‘The science of science communication’.
Proceedings of the National Academy of Sciences 110 (Supplement 3),
pp. 14031–14032.
Frederick, S. (2005). ‘Cognitive Reflection and Decision Making’. Journal of Economic
Perspectives 19 (4), pp. 25–42.
Hamilton, L. C., Cutler, M. J. and Schaefer, A. (2012). ‘Public knowledge and
concern about polar-region warming’. Polar Geography 35 (2), pp. 155–168.
Hastorf, A. H. and Cantril, H. (1954). ‘They saw a game: A case study’. The Journal
of Abnormal and Social Psychology 49 (1), pp. 129–134.
Kahan, D. M., Jenkins-Smith, H. and Braman, D. (2013). ‘Cultural Cognition of
Scientific Consensus’. Journal of Risk Research 14, pp. 147–174.
Kahan, D. M. (2011). ‘Fixing the Communications Failure’. Nature 463, pp. 296–297.
Kahan, D. M. (2015). ‘Climate-Science Communication and the Measurement Problem’.
Advances in Political Psychology 36, pp. 1–43.
Kahan, D. M., Braman, D., Cohen, G. L., Slovic, P. and Gastil, J. (2010). ‘Who Fears
the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the
Mechanisms of Cultural Cognition’. Law and Human Behavior 34 (6), pp. 501–516. DOI:
10.1007/s10979-009-9201-0.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D. and
Mandel, G. (2012). ‘The polarizing impact of science literacy and numeracy on
perceived climate change risks’. Nature Climate Change 2, pp. 732–735. DOI:
10.1038/nclimate1547.
Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C. L. and Braman, D. (2015).
‘Geoengineering and Climate Change Polarization: Testing a Two-Channel
Model of Science Communication’. Annals of the American Academy of Political
and Social Science 658, pp. 192–222. DOI: 10.1177/0002716214559002.
Kahneman, D. (2003). ‘Maps of Bounded Rationality: Psychology for Behavioral
Economics’. American Economic Review 93 (5), pp. 1449–1475.
Kunda, Z. (1990). ‘The Case for Motivated Reasoning’. Psychological Bulletin 108,
pp. 480–498.
Lawson, A. E. (1999). ‘A scientific approach to teaching about evolution & special
creation’. The American Biology Teacher 61 (4), pp. 266–274. DOI:
10.2307/4450669.
Lawson, A. E. and Worsnop, W. A. (1992). ‘Learning about evolution and rejecting a
belief in special creation: Effects of reflective reasoning skill, prior knowledge,
prior belief and religious commitment’. Journal of Research in Science Teaching 29
(2), pp. 143–166.
Marx, S. M., Weber, E. U., Orlove, B. S., Leiserowitz, A., Krantz, D. H., Roncoli, C.
and Phillips, J. (2007). ‘Communication and mental processes: Experiential and
analytic processing of uncertain climate information’. Global Environmental
Change 17 (1), pp. 47–58. DOI: 10.1016/j.gloenvcha.2006.10.004.
National Science Foundation (2014). Science and Engineering Indicators. Arlington,
VA, U.S.A.: National Science Foundation.
Popper, K. R. (1959). The logic of scientific discovery. New York, U.S.A.: Basic
Books.
Popper, K. R. (1966). The open society and its enemies. 5th ed. London, U.K.: Routledge and
K. Paul.
Roos, J. M. (2012). ‘Measuring science or religion? A measurement analysis of the
National Science Foundation sponsored science literacy scale 2006–2010’. Public
Understanding of Science 23 (7), pp. 797–813. DOI: 10.1177/0963662512464318.
Sood, A. M. (2013). ‘Motivated Cognition in Legal Judgments — An Analytic Review’.
Annual Review of Law and Social Science 9, pp. 307–325.
Sunstein, C. R. (2005). Laws of Fear: Beyond the Precautionary Principle.
Cambridge, U.K.; New York, U.S.A.: Cambridge University Press.
Sunstein, C. R. (2006). ‘Misfearing: A reply’. Harvard Law Review 119 (4), pp. 1110–1125.
Sunstein, C. R. (2007). ‘On the Divergent American Reactions to Terrorism and Climate
Change’. Columbia Law Review 107 (2), pp. 503–557.
Tocqueville, A. de, Reeve, H. and Spencer, J. C. (1838). Democracy in America. New
York, U.S.A.: G. Dearborn & Co.
Watson, W. E., Kumar, K. and Michaelsen, L. K. (1993). ‘Cultural Diversity’s Impact
on Interaction Process and Performance: Comparing Homogeneous and
Diverse Task Groups’. The Academy of Management Journal 36 (3), pp. 590–602.
Watts, D. J. (2011). Everything Is Obvious: *Once You Know the Answer. How
Common Sense Fails. London, U.K.: Atlantic Books.
Weber, E. U. (2006). ‘Experience-Based and Description-Based Perceptions of
Long-Term Risk: Why Global Warming does not Scare us (Yet)’. Climatic Change
77 (1), pp. 103–120.
Weber, E. U. and Stern, P. C. (2011). ‘Public Understanding of Climate Change in the
United States’. American Psychologist 66, pp. 315–328.
Author

Dan Kahan is the Elizabeth K. Dollard Professor of Law and Professor of
Psychology at Yale Law School. He is a member of the Cultural Cognition Project
(www.culturalcognition.net), an interdisciplinary team of scholars who use
empirical methods to examine the impact of group values on perceptions of risk
and science communication. E-mail: dan.kahan@yale.edu.
How to cite: Dan M. Kahan (2015). ‘What is the “science of science communication”?’.
JCOM 14 (03), Y04.
This article is licensed under the terms of the Creative Commons Attribution - NonCommercial -
NoDerivativeWorks 4.0 License.
ISSN 1824 – 2049. Published by SISSA Medialab. http://jcom.sissa.it/.
Article
Seeming public apathy over climate change is often attributed to a deficit in comprehension. The public knows too little science, it is claimed, to understand the evidence or avoid being misled. Widespread limits on technical reasoning aggravate the problem by forcing citizens to use unreliable cognitive heuristics to assess risk. An empirical study found no support for this position. Members of the public with the highest degrees of science literacy and technical reasoning capacity were not the most concerned about climate change. Rather, they were the ones among whom cultural polarization was greatest. This result suggests that public divisions over climate change stem not from the public’s incomprehension of science but from a distinctive conflict of interest: between the personal interest individuals have in forming beliefs in line with those held by others with whom they share close ties and the collective one they all share in making use of the best available science to promote common welfare.