
Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All

Patricia Moravec, Randall K. Minas, and Alan R. Dennis
Kelley School of Business, Indiana University
Abstract
Fake news (i.e., misinformation) on social media has sharply increased over the past few years.
We conducted an experiment collecting behavioral and EEG data from 83 social media users to
understand whether they could detect fake news on social media, and whether the presence of a
fake news flag affected their cognition and judgment. We found that the presence of a fake news
flag triggered increased cognitive activity and users spent more time considering the headline.
However, the flag had no effect on judgments about truth; flagging headlines as false did not
influence users’ beliefs. A post-hoc analysis shows that confirmation bias is pervasive, with
users more likely to believe news headlines that align with their political opinions. Headlines that
challenge their opinions receive little cognitive attention (i.e., they are ignored) and users are less
likely to believe them.
“There is today a special need for propaganda analysis. America is beset by
a confusion of conflicting propagandas, a Babel of voices, warnings, charges,
counter-charges, assertions, and contradictions assailing us continually through
press, radio and newsreel…” (Institute for Propaganda Analysis 1938, p. 1).
“We are facing nothing less than a crisis in our democracy based on the
systematic manipulation of data to support the relentless targeting of citizens,
without their consent, by campaigns of disinformation and messages of hate.”
(House of Commons 2018)
INTRODUCTION
In the early days of the Internet, people argued that the Internet would enable greater
transparency of information, which would increase the quality of democracies (Abramson et al.
1990; Tewksbury 2003). The availability of information from various news sources would
enable people to find their own information from non-traditional news outlets, and this decreased
reliance on a narrow set of traditional news sources would improve democracy (defined as “the
belief in freedom and equality between people, or a system of government based on this belief”
(Cambridge Dictionary 2018)). This vision has been realized with the rise of non-traditional news on social
media, but some might argue that the prevalence of fake news on social media has harmed
democracy, rather than improved it. An editorial in Science calls on the scientific community to
help reporters and the general public better identify and avoid fake news (Weiss 2017).
Social media has become a common source for news; more than 50% of American adults
read news on social media (Gottfried and Shearer 2016). Social media is different from other
media providing news (e.g., TV news, news websites, and mobile phone news apps) because
users do not choose the source of all of the articles they see on social media. Instead, proprietary
algorithms provide targeted information with little transparency. With other news media, users
pick the source first, and do so with a familiarity of the nature of the source (Rice et al. 2018).
With social media such as Facebook, articles from a wide variety of sources appear on
users’ newsfeeds. News articles are intermixed with sponsored articles (i.e., paid advertisements)
and posts from family and friends. Any of these may be true or false, whether intentionally or
not, and some are explicitly designed to influence (Shane 2017). For example, Cambridge
Analytica developed tools to influence users after it gained access to more than 50 million users’
data (Granville 2018). About 23% of social media users report that they have accidentally or
intentionally shared fake news (Barthel et al. 2016). Over 60% say that fake news leaves them
confused about what to believe (Barthel et al. 2016).
Social media has moved quality control for detecting fake news from trained journalists
to regular users (Kim and Dennis 2018). About 84% of Americans believe that they can detect
fake news (Barthel et al. 2016), but how do users detect fake news when most users have no
direct knowledge of the facts (i.e., they have not witnessed the events)?
In this study, we examine the effect of a Facebook “fake news” flag and how social
media users respond to it. Simply put, we examine the question: are fake news flags effective in
altering users’ beliefs? We use electroencephalography (EEG) to examine cognitive processes
(Dimoka et al. 2012; Vance et al. 2018). We found that flagging articles as fake triggered more
cognitive activity, but it did not change users’ beliefs in them. We further found that articles that
aligned with the user’s a priori opinions triggered increased cognitive activity, with users more
likely to believe them; articles that challenged users’ opinions were less thoroughly considered
and were less likely to be believed. Our findings triangulate around one explanation:
confirmation bias; users believe what matches their prior opinions, undeterred by the actual truth
of an article or a fake news flag. As John Mellencamp said in his 2004 song, Walk Tall, “People
believe what they want to believe/when it makes no sense at all” (Mellencamp 2004).
PRIOR THEORY AND RESEARCH
News has always been questionable in its reliability (McGrath 1986). Even before the rise
of the Internet, certain newspapers were known for their biases and potentially distorted news
(McGrath 1986). The Internet enables people to access thousands of different news sources,
rather than being bound by traditional sources, increasing their exposure to biased and distorted
news. The 2016 US presidential election was rife with news, both true and false. Compounding
this difficulty, social media users had to contend with the intentional production and distribution
of fake news, whether created to generate revenue from advertisements (Kirby 2016) or to
influence the election (Fisher 2016; Shane 2017; Sydell 2016). Social media platforms have been
criticized for not taking sufficient action to prevent the spread of fake news (Maheshwari 2017).
In the sections below, we examine why assessing the truthfulness of news on social
media is difficult, and how flagging fake articles as “disputed by third-party fact-checkers” and
confirmation bias may influence users’ beliefs. We focus on Facebook as a leader in social
media; Facebook has over 2 billion active users and is a popular source for news (Gottfried and
Shearer 2016; Statista 2018).
Assessing the Truthfulness of News on Social Media
Context matters (Johns 2006; Johns 2017). “One way to develop richer theories that
provide actionable advice is to take the context into greater consideration” (Hong et al. 2014,
p.2). Much past IS research has examined work contexts, but our focus is on social media. Most
individuals use social media for hedonic purposes (Chauhan and Pillai 2013), such as seeking
entertainment or connecting with friends, rather than utilitarian purposes (Johnson and Kaye
2015). Following steps 1-4 in Table 1 of Hong et al., we ground our research in two general
theories (confirmation bias and cognitive dissonance) and examine how they influence behavior
in this use context. We identified three context-specific factors that make consuming news on
social media different from other contexts in which users view information on the Internet.
First, the user’s mindset is different, which affects how information is processed. The
consumption of news on social media is different than the consumption of information elsewhere
on the Internet. For example, it is well known that some product reviews are fake (Dwoskin and
Shaban 2018; Dwoskin and Timberg 2018; Roberts 2013). A key difference between fake
reviews and fake news is that users do not read product reviews for entertainment; they read
reviews for information to make a decision, knowing there is a monetary incentive to make the
best, most well-informed decision. Thus, users reading fake reviews are in a utilitarian mindset;
their goal is to understand the meaning of the information in the review and to decide which
reviews should be considered in making a decision. Minas et al. (2014) examined a utilitarian
mindset in virtual team interactions in a decision-making context and found that confirmation
bias was present as individuals processed information in team-based decision-making chat.
In contrast, the hedonic mindset when reading social media news means the user’s goal is not to
determine what is true and fake; instead the goal is enjoyment and pleasure. The user will avoid
effortful activities that feel like work (e.g., thoughtful information processing) and activities that
do not bring enjoyment (e.g., reading stories that your favorite sports team lost). Users engage
with articles that make them feel good, which tend to be articles supporting their beliefs.
Second, the source of the information is not clear. With Internet news and traditional
news media, we visit the web site of our favorite news network or open our local newspaper; we
pick the source before we read articles and do so with some understanding of the source’s
limitations (Rice et al. 2018). Facebook is different because users do not choose the source of the
articles; instead, Facebook’s algorithms choose the articles. Although some users subscribe to
certain sources by following them on social media, many other sources arrive on our newsfeeds
from advertisements, sharing by friends, and algorithmic decisions. Articles from many different
sources—some reputable, some disreputable—are intermixed. A fake news article may be
presented between a CNN article and Aunt Martha’s cookies. The source of the story is obscured
(Kim and Dennis 2018), and users in a hedonic mindset are not motivated to invest effort to find
and understand the source (Kim and Dennis 2018).
Finally, the sheer volume of fake news makes it challenging to separate truth from
fiction. More fake news articles are shared on social media than real news (Silverman 2016).
Many fake news sites have appeared on Facebook with the express purpose of spreading
carefully-crafted propaganda or discrediting a specific person (BBC 2017). The low cost and
ubiquity of such sites are one reason that fake news is common on social media (Barthel et al. 2016).
These three contextual factors (a hedonic mindset, a lack of cognizance of the source,
and the volume of fake news) combine to create a context in which social media users do not
think as critically as they should when presented with news on social media. More than half the
articles shared on Twitter are shared without the user reading them, let alone thinking critically
about them (Gabielkov et al. 2016). Nonetheless, research shows that there is a bias toward
taking an opinion on contentious social media topics, rather than remaining neutral (Jonas 2001).
This is true even when users lack information on the topic, or have not even read the article,
which helps the spread of fake news (Jonas 2001).
Fact-Checking Fake News
In response to the rise of fake news, fact-checking services have become more common.
Many solutions have been developed to automate fact-checking. Truthy (Ratkiewicz et al. 2011)
and Hoaxy (Shao et al. 2016) are two such solutions, which can provide relatively quick results
for news articles. Truthy checks sites that are well known for verifying the truth of news
articles, such as Snopes.com, politifact.com, and factcheck.org, as well as known
disreputable sites for fake news articles. Fact-checking can influence credibility, especially if
done by independent fact-checkers (Wintersieck 2017).
Facebook incorporated fact-checking into its platform and began flagging fake news
articles in late 2016 by appending a statement that an article was “disputed by 3rd party fact-
checkers” when fact-checkers determined an article was fake (Schaedel 2017). Thus, fact-
checking was integrated into the presentation of the article; users did not need to invest effort to
seek out a third-party fact-checking site. Facebook discontinued the flag in late 2017 (Meixler
2017). One might conclude that Facebook’s actions indicate that fact-checking fake news is not
effective. However, the undisclosed reasons why a for-profit corporation makes decisions—
especially when its goals are unclear (cf. Zuckerberg 2016)—are not theoretically compelling.
After Facebook discontinued its fake news flag, third parties began offering their own
fake news flagging services that can be integrated into Facebook. For example, NewsGuard
provides a browser plugin using source reliability ratings from teams of expert journalists and
consultants for more than 4,500 news sites that account for 98% of the online news in the U.S.
(NewsGuard 2018). The plugin automatically displays a fake news flag whenever content from a
disreputable source is displayed, whether in Facebook or any Web site.
Fact-checking is most important when the user wants to believe a headline; flagging a
headline when the user was unlikely to believe it without the flag adds little value. Thus, we
focus on the situation where a user is inclined to believe a fake headline, but it is flagged as false.
The Effects of Confirmation Bias
One factor influencing belief is confirmation bias: people prefer information that matches
their prior beliefs (Koriat et al. 1980; Minas et al. 2014; Nickerson 1998). Confirmation bias is a
bias against information that challenges one’s beliefs (Nickerson 1998); it is driven by the
fundamental nature of our cognition (Kahneman 2011).
Researchers have long argued that there are two distinctly different cognitive processes,
and there are many dual process models of cognition (Evans 2008). Two complementary models
emerged in the 1980s. The Heuristic-Systematic Model (HSM) (Chaiken 1980; Chaiken and
Eagly 1983) argues that attitudes are formed by the systematic application of considerable
cognitive effort to comprehend and evaluate the validity of available information (called the
systematic route), or by exerting little cognitive effort using simple heuristics on readily
accessible information (called the heuristic route). The Elaboration Likelihood Model (ELM)
(Cacioppo et al. 1986; Petty and Cacioppo 1986) argues that attitudes are formed based on
deliberate and active consideration of available information to evaluate the true merits of a
particular position (called the central route) or as a result of a less cognitively involved
assessment of simple positive or negative cues in the context (called the peripheral route).
There are distinctions between HSM and ELM, but they share a common fundamental
basis. Both argue that there are two distinct conscious cognitive processes by which attitudes are
formed, and that these two processes differ in the amount of cognitive processing expended (e.g.,
a quantitative difference) and in the cognitive approach used to evaluate information (e.g., a
qualitative difference). Both argue that individuals choose which route to invoke based on their
ability and motivation to engage in extensive cognition. Both have evolved to argue that the
routes are not distinct, so cognition is more of a continuum of processing (Kitchen et al. 2014).
ELM is the more popular and is still used today (Cacioppo et al. 2018), although some
researchers dispute the notion of dual process models (Melnikoff and Bargh 2018).
Many newer dual process models have been developed (Evans 2008; Evans and
Stanovich 2013), because research suggests that many of the fundamental arguments of HSM
and ELM (as revised over time in response to criticisms (Kitchen et al. 2014)) are not accurate.
For example, the routes are not mutually exclusive (both can be used); the routes are not on a
continuum (they are separate); individuals do not choose the route to use (the heuristic route is
automatic); individuals cannot avoid the heuristic route (its use is involuntary); and the
systematic route cannot operate by itself (the heuristic route always precedes it) (De Neys 2018;
Evans 2008; Evans and Stanovich 2013; Kahneman 2011; Pennycook et al. 2018).
In this paper, we adopt the widely accepted dual process model of Stanovich (1999) and
Kahneman (2011) who call these separate processes System 1 and System 2. We note that
Stanovich has more recently suggested using the terms Type 1 and Type 2 because the use of the
word “system” implies there are separate areas in the brain that are dedicated to each type of
cognition, which is not the case (Evans and Stanovich 2013).
System 1 runs continuously, and delivers conclusions automatically and involuntarily
(Kahneman 2011). Intuition is System 1 at work (Achtziger and Alós-Ferrer 2013; Dennis and
Minas 2018). When we receive new information, our System 1 cognition automatically searches
long-term memory for confirming evidence and generates a response in less than one second
(Bargh and Ferguson 2000; Carlston and Skowronski 1994; Fazio et al. 1986). This process is
nonconscious and unavoidable; we cannot prevent it (Evans and Stanovich 2013; Kahneman
2011). It supplies these assessments, even though they are not asked for (Bellini-Leite 2013;
Dennis and Minas 2018; Kahneman 2011; Thompson 2013). System 1 is a set of subsystems that
run in parallel triggered by different bits of incoming information (Bellini-Leite 2013; Evans
2008; Evans 2014; Thompson 2013). When the different subsystems produce matching results,
System 1 produces a “Feeling of Rightness” (FOR) that says it is confident about its conclusions
(Bago and De Neys 2017; De Neys 2014; Thompson et al. 2011). When there is conflict among
subsystems’ results, FOR creates a sense that something is not right (Bago and De Neys 2017).
In contrast, System 2 cognition is single-threaded (Dennis and Minas 2018) and has much
less processing capacity (Evans 2014). System 2 is under our deliberate control, so we can
choose to invoke it, but it is easily overwhelmed. System 2 cognition is effortful (Kahneman
2011), and most humans are “cognitive misers” who attempt to minimize cognitive effort (Taylor
and Fiske 1978). Thus, we tend to adopt the conclusions of System 1, often without thought
(Kahneman 2011). Common triggers causing us to invoke System 2 are a negative stimulus or a
surprise (Kahneman 2011), or a FOR that indicates conflicting results (Bago and De Neys 2017).
The net result is confirmation bias (Nickerson 1998). When we see new information, our
System 1 automatically, and in less than one second, confirms that it matches our prior
knowledge and we are inclined to believe it. Or, our System 1 tells us that it does not match and
we should not believe it (Kahneman 2011). Unless we are motivated to expend cognitive effort
and invoke System 2, we simply accept the conclusion of System 1 with little thought
(Kahneman 2011). And if we were to invoke System 2, how would it help us determine if a news
story was true? Unless we have witnessed the events in a story there is no unambiguous way to
determine if the story is true or false. Thus, people are likely to accept their System 1 conclusion
and believe information that matches pre-existing views (Allcott and Gentzkow 2017).
Confirmation bias also affects the time taken. System 1’s conclusion that new
information matches our beliefs is produced in less than a second and simply accepting this takes
little time. If our System 1 indicates that we should reject the new information, we are inclined to
spend only a little longer before discarding it (Haidt 2012; Kahneman 2011). Minas et al (2014)
found that in a utilitarian mindset, all pieces of information are initially considered (by System
1), but only the factual information that matched a priori beliefs was selected for System 2
processing. Similarly, Turel and Qahri-Saremi (2016) found that System 1 was linked to
impulsive and problematic use of social media and System 2 to more rational and controlled use.
Two aspects of the social media context suggest that social media may exacerbate
confirmation bias. First, research suggests that individuals in a hedonic mindset may be less
likely to critically consider information than those in a utilitarian mindset, as their consumption
is tied to what they desire reality to be, rather than what they know to be real (Hirschman and
Holbrook 1982). When in a hedonic mindset we are less likely to expend the cognitive effort to
invoke System 2, and more likely to accept System 1’s biased conclusions.
Second, social media enables users to choose the news they like and learns their
preferences so that it deliberately displays more articles matching their choices. This causes a
decreased range of information displayed on a user’s newsfeed, so that the news on social media
is often biased (The Wall Street Journal 2016). Users’ realities on Facebook differ based on what
they read and who their friends are (The Wall Street Journal 2016). There are sharp differences
in liberal and conservative newsfeeds, with fake news aligned with political beliefs more likely
to be seen and shared by users in “echo chambers” of biased information (Bozdag and van den
Hoven 2015; Cerf 2016; Colleoni et al. 2014). This bias inundates users with news—real and
fake—that supports their views (Bennett and Iyengar 2008; Knobloch‐Westerwick and Lavis
2017). Such a stream of biased messages intensifies confirmation bias (Nickerson 1998).
Creating Cognitive Dissonance
One approach to interrupting confirmation bias is to create cognitive dissonance by
adding a fake news flag to false stories. Cognitive dissonance occurs when users are presented
with two pieces of conflicting information that both cannot be true (Festinger 1962; Mills 1999),
which in this case is a fake story users want to believe because it aligns with their a priori beliefs
and a flag that says it is false. System 1 makes an instant judgment, but the conflicting
information makes this judgment difficult (Kahneman 2011); the results are unreliable and the
FOR tells us something is amiss (Bago and De Neys 2017). This contradiction causes cognitive
discomfort (Aronson 1969). The user must decide either to ignore the discomfort or invest effort
to resolve it. If the issue is unimportant to them, users typically ignore the cognitive dissonance
and accept what their prior beliefs say (Nickerson 1998). Otherwise, they invest effort by
invoking System 2 to decide which piece of conflicting information is true (Aronson 1969;
Kahneman 2011), which takes more time and requires greater cognitive activity.
When System 2 goes to work to resolve the dissonance, it is influenced, sometimes very
strongly, by the unreliable results of System 1 (Kahneman 2011). The System 1 results are stored
in working memory and become part of the problem space (Thompson 2013). System 2 has
equal access to the information and System 1’s unreliable result, and uses both (Thompson
2013). Information is often ambiguous and can be interpreted in different ways (Srull and Wyer
1979). System 2 gives more weight to System 1’s result than to the facts that produced it (Srull
and Wyer 1980; Srull and Wyer 1983). Thus, an erroneous System 1 result has greater influence
on our subsequent System 2 conclusions than the factual information (Dennis and Minas 2018).
In summary, we argue that placing a fake news flag on a story aligned with a user’s
beliefs will trigger cognitive dissonance. If the dissonance is strong enough, the user will invoke
System 2 and expend greater cognitive effort to consider the headline and the fake news flag.
The use of System 2 cognition will be indicated by the user taking more time to make a judgment
about whether to believe the story or not, and by cognitive activity in certain brain regions. Our
study uses the neurophysiological responses measured by EEG as an indicator of cognitive
activity. We focus on activity in the frontal cortex because it has been linked with cognitive
activity associated with what we commonly consider to be “thinking”: arousal, memory
encoding, memory retrieval, insight and consciousness (Başar et al. 1999; Klimesch 2012;
Krause et al. 2000; Minas et al. 2017; Pizzagalli 2007). The result of this System 2 cognition is a
judgment about the credibility of the story, and we theorize that the fake news flag will reduce
credibility. Thus, we have three hypotheses:
H1: Social media users will exhibit increased cognitive activity in the frontal cortex when seeing
a fake news flag on a headline aligned with their beliefs.
H2: Social media users will spend more time when seeing a fake news flag on a headline aligned
with their beliefs.
H3: Social media users will perceive headlines aligned with their beliefs that are flagged as fake
as being less credible.
METHOD
Participants
Eighty-three undergraduates were recruited from a large business core course. All were
experienced with social media. Age ranged from 18 to 34 (mean 19.5) and 39% were female.
Three reported being left-handed; because about a third of left-handed people have differences
in brain structure, we removed these three participants from our EEG analyses.
Task
Participants read 50 fact-based news headlines and assessed their credibility. The
headlines covered 10 topics related to US politics and were actually true or false. Forty headlines
were designed to be possibly true or false, though verifiably one or the other (e.g., Trump
defunds Planned Parenthood, minimum wage should be $21.72 to keep pace with inflation). Ten
headlines were controls intended to be more clearly true (e.g., Trump launches Twitter tirade;
Hollywood celebrities oppose Trump). See Appendix A for headlines. Participants spent an
average of 10.5 seconds reading each headline before beginning to answer questions about it.
Treatment
The experiment mimicked the Facebook display, although participants were not able to
like, comment, or share the story. A flag matching Facebook’s fake news flag was randomly
assigned to 20 of the 40 non-control headlines (including those actually true) (see Figure 1).
Measures
The primary behavioral dependent variable was the credibility of the headline, measured
using three 7-point items from Beltramini (1988): believability, credibility, and
truthfulness. The Cronbach alpha was 0.94, indicating high reliability.
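For reference, the reliability computation is the standard Cronbach's alpha; a minimal sketch in Python (the ratings shown are illustrative, not our data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (here: 3)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 7-point ratings of believability, credibility, truthfulness.
ratings = np.array([[6, 7, 6],
                    [2, 3, 2],
                    [5, 5, 4],
                    [7, 6, 7]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```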
The second behavioral dependent variable was the time participants took to form their
credibility assessment. The time was measured from the initial display of the headline until the
participant clicked a button to display the credibility questions.
The alignment of a headline with the participant’s political beliefs was coded as a binary
variable; headlines positively supporting the participant’s beliefs were coded as a 1; headlines
that did not were coded as 0. We used ten sources of self-reported data to assess the extent to
which a headline aligned with participants’ political beliefs. The participants reported their
political affiliation on a 4-point scale (Democrat, Independent leaning Democrat, Independent
leaning Republican, and Republican), which was collapsed into either Democrat (first two
responses) or Republican (second two responses). They reported who they would vote for
(Clinton or Trump) if the 2016 presidential election were held today. The election had been held
four to six months prior. They also answered eight items (7-point scale, with 4 as neutral)
measuring their political conservatism (Everett 2013) across topics related to the headlines. Our
sample was fairly balanced politically, with 47% being self-reported Republicans and 53%
Democrats; 31% reported that they would vote for Trump at the time of the study. See Appendix
B for more information.
Two raters independently matched the headlines to the single most closely matching item
out of the ten political belief items and agreed on 46 of the 50 headlines (92%); differences were
resolved. For example, an anti-abortion headline was matched to the anti-abortion item (with
those scoring 5-7 being susceptible to confirmation bias). A gun-rights headline was matched to
the gun-rights item. A Trump-supporting headline was matched to a Trump voter.
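This coding logic can be sketched as follows; the function name and the rule for participants holding the opposite position are our illustrative assumptions (only the 5-7 threshold above comes from our procedure):

```python
def aligned_with_beliefs(item_score: int, headline_supports_item: bool) -> int:
    """Binary alignment coding sketch.

    item_score: the participant's 1-7 response on the belief item matched
        to the headline (4 = neutral).
    headline_supports_item: True if the headline takes the position the
        item measures (e.g., an anti-abortion headline matched to the
        anti-abortion item).
    Returns 1 if the headline positively supports the participant's
    beliefs, else 0.
    """
    if headline_supports_item:
        return int(item_score >= 5)  # 5-7: shares the headline's position
    return int(item_score <= 3)      # 1-3: opposite position (assumed rule)

# A participant scoring 6 on the gun-rights item sees a gun-rights headline.
print(aligned_with_beliefs(6, True))  # -> 1 (aligned)
print(aligned_with_beliefs(2, True))  # -> 0 (not aligned)
```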
Changes in cognition were measured using time-frequency analysis of EEG data. EEG is
a neurophysiological tool that enables the examination of changes in neural activity that occur
during information processing on the order of milliseconds (Berger 1929). EEG measures small
electrical signals produced in the superficial areas of the underlying cortical regions. These
electrical signals form complex wave patterns at specific frequencies that are related to cognitive
activity. Berger’s early research showed the importance of the alpha wave (originally called
“Berger waves”), and its potential to indicate specific mental processes, including arousal,
memory and consciousness (Pizzagalli 2007).
A 2012 review concluded that alpha-band waves (8-13Hz) indicate brain activity across
many brain regions (Klimesch 2012). Alpha waves have been shown to change reliably in
response to stimuli (Klimesch 2012). When a region of the brain becomes active, alpha waves
desynchronize, leading to lower alpha levels (Cohen 1995); thus alpha wave desynchronization
indicates higher levels of cognitive activity (Kelly et al. 2006; Klimesch 2012; Makeig et al.
2002). The upper alpha frequency band (~10-13Hz) shows encoding memory processes in the
parietal and frontal cortex regions (Kilner et al. 2005; Klimesch et al. 1997; Klimesch et al.
2001; Klimesch et al. 1996; Moretti et al. 2013).
We use time-frequency analysis, event-related spectral perturbation (ERSP), to analyze
event-related desynchronization (ERD) (Makeig 1993). It is important to note that, despite the
similar acronym, time-frequency analysis (e.g., ERSP) differs from traditional event-related
potential (i.e., ERP) studies in that it examines a frequency band (e.g., alpha wave) over a
specified time-period. Event-related potentials, common in cognitive neuroscience studies,
examine specific waveforms that occur at a specified time period (e.g., P300 is a positive spike
in neural activity that occurs at 300 milliseconds in response to rare events). In time-frequency
analysis, we look for a pattern of changes (i.e., spectral changes) over a period of several
seconds. We analyzed the last 4 seconds the participant viewed the headline to account for the
time participants read the headlines. We examined the upper alpha band (10 to 13 Hz) for
significant alpha ERD, as suggested and done in prior research
(Minas et al. 2014; Müller-Putz et al. 2015). See Appendix C for more details.
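To make the core computation concrete, the sketch below derives baseline-normalized upper-alpha power over time for a single channel. It assumes the Emotiv's 128 Hz sampling rate and omits the artifact handling, component clustering, and permutation statistics of the full pipeline (Appendix C):

```python
import numpy as np
from scipy.signal import spectrogram

FS = 128  # Emotiv sampling rate in Hz (assumption for this sketch)

def alpha_ersp(trial: np.ndarray, baseline: np.ndarray,
               band: tuple = (10.0, 13.0)) -> np.ndarray:
    """Event-related spectral perturbation in the upper alpha band for one
    channel: band power over time, in dB relative to baseline power.
    Negative values indicate desynchronization (ERD), our indicator of
    increased cognitive activity."""
    f, t, sxx = spectrogram(trial, fs=FS, nperseg=FS, noverlap=FS // 2)
    in_band = (f >= band[0]) & (f <= band[1])
    power = sxx[in_band].mean(axis=0)  # alpha power in each time bin

    fb, _, sxx_b = spectrogram(baseline, fs=FS, nperseg=FS, noverlap=FS // 2)
    base = sxx_b[(fb >= band[0]) & (fb <= band[1])].mean()

    return 10 * np.log10(power / base)  # dB change versus baseline

# Synthetic example: the last 4 s of a trial against a 1 s baseline.
rng = np.random.default_rng(0)
print(alpha_ersp(rng.standard_normal(4 * FS), rng.standard_normal(FS)))
```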
We used a 14-channel Emotiv wireless EEG device (see Figure 2). There is a concern
that wireless data collection may suffer from dropped packets due to interference. The data
includes a variable indicating if any signal loss occurred during data collection and marking
packets that were interpolated. We removed all artifacts and interpolated data. There has been
debate within the cognitive neuroscience community about the validity of low-cost EEG systems
like Emotiv. Many studies have scrutinized the Emotiv device in a variety of settings, such as
examining working memory (Wang et al. 2015), auditory analysis (Badcock et al. 2013), mobile
brain-computer interfaces (Debener et al. 2012), detection of the P300 wave (Ramírez-Cortes et
al. 2010; Wang et al. 2015), human-computer interaction (Taylor and Schmidt 2012), and
hemispheric asymmetry (Friedman et al. 2015). These studies have found that Emotiv obtains a
reliable and valid signal of underlying cortical activity comparable to that of larger high-density
systems, albeit with lower spatial resolution, so that the edges of regions are not as sharp and clear.
RESULTS
Behavioral Results
We used Hierarchical Linear Modeling (HLM) to analyze the credibility and time data;
see Table 1. Confirmation bias is present: participants rated headlines as more credible when the
headlines aligned with their political beliefs (t(4150)=2.46, p=0.014). The fake news flag
(Flagged as False) had no effect on credibility (t(4150)=0.52, p=0.601). Surprisingly, participants
rated headlines that were actually true as less credible (t(4150)=2.45, p=.014). We
note that participants had difficulty assessing whether headlines were true or false; they correctly
assessed only 44%. Participants spent 1.4 seconds longer considering a headline when the
headline was flagged as false (t(4150)=3.32, p=0.001), and an additional 1.9 seconds when the
headline was flagged as false and the headline aligned with their beliefs (t(4150)=2.45, p=0.014).
These results support H2 (that users take more time when seeing a fake flag on a headline
aligned with their beliefs). However, H3 was not supported: the fake news flag did not reduce the
credibility of headlines aligned with beliefs.
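A sketch of an equivalent mixed-effects specification, with a random intercept per participant to capture the repeated measures (the file and column names are hypothetical, and the estimation details of our HLM may differ):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant x headline (hypothetical file and columns):
# credibility:   mean of the three 7-point Beltramini items
# aligned:       1 if the headline aligned with the participant's beliefs
# flagged:       1 if the headline carried the fake news flag
# actually_true: 1 if the headline was verifiably true
df = pd.read_csv("headline_ratings.csv")

# Random intercept per participant models the repeated-measures structure
# (each participant rated many headlines).
model = smf.mixedlm(
    "credibility ~ democrat + gender + age + aligned + actually_true"
    " + flagged + flagged:aligned",
    data=df,
    groups=df["participant_id"],
)
print(model.fit().summary())
```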
Neurophysiological Results
We examined the cognition triggered by a headline that supported the participant’s
beliefs but was flagged as being false. When a brain region is active, desynchronization of neural
activity in the alpha band occurs (called “alpha blocking”) (Potter and Bolls 2012); thus, event-
related desynchronization (ERD) is an indicator of cognitive activity. Event-related spectral
perturbation analysis produces a set of areas within the brain (called clusters) showing the
location of ERD and whether there are significant differences between the treatments. With this
analysis, the researcher does not specify which superficial layers of cortex to test (all regions are
tested), which is a major strength of this approach; it is not limited to a priori decisions which
may or may not fit the reality of participant cognition. The clusters identified by the analysis may
or may not align with the regions that the researcher has hypothesized about, and may span
several distinct brain regions, making interpretation challenging. However, when a cluster
includes a theorized region, it is a powerful signal supporting the theory, because nothing in the
analysis directed the software to consider the theorized region; the region emerged from the data.
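These cluster comparisons involve many simultaneous tests across time and frequency; as the figure captions note, we control them at p = .01 with a false discovery rate correction. A minimal sketch of the standard Benjamini-Hochberg procedure (our illustration; the analysis software's exact implementation may differ):

```python
import numpy as np

def benjamini_hochberg(p_values: np.ndarray, q: float = 0.01) -> np.ndarray:
    """Return a boolean mask of p-values significant at FDR level q,
    correcting for the many simultaneous time-frequency comparisons."""
    p = np.asarray(p_values).ravel()
    order = np.argsort(p)
    m = p.size
    thresholds = q * np.arange(1, m + 1) / m  # BH step-up thresholds
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                    # reject the k smallest p-values
    return mask.reshape(np.shape(p_values))

# Illustrative grid of time-frequency p-values.
pvals = np.array([[0.001, 0.20], [0.004, 0.03]])
print(benjamini_hochberg(pvals, q=0.05))
```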
The challenge occurs when a cluster shows significant differences between treatments in
regions the researcher did not theorize prior to the analysis. In this case, the researcher must
interpret what cognitive activity in those region(s) means. Unfortunately, there are often several
ways to interpret activity as each brain region is responsible for a complex set of disparate
activities (Poldrack 2011). This has been called the reverse inference problem (Poldrack 2011).
For researchers grounded in quantitative positivist traditions, the reverse inference
problem is intractable (Fischer 1970) because it is usually impossible to apply deductive
reasoning to unequivocally produce a result (Fischer 1970). For researchers grounded in
qualitative interpretivist methods, reverse inference is normal science using abductive reasoning
(Dubois and Gadde 2002; Peirce 1931-1958). Abduction is a form of scientific reasoning that
starts with data and sorts through the possible explanations to find the most appropriate
explanation (Dubois and Gadde 2002; Peirce 1931-1958). Quantitative researchers routinely use
abduction to build theory (Van de Ven 2007), so the question is not whether to use abduction,
but rather when to use it – before or after data collection, or both.
We strongly advocate for reverse inference using abduction after data collection when the
analysis shows significant differences in regions that were not theorized prior to data collection.
After all, what is the alternative? Ignore significant effects that were not theorized? Ignoring the
unexpected is not good science. For us, reverse inference is not a problem; it is an opportunity
for discovering the unanticipated. Care must be taken in experimental design to eliminate as
many competing explanations as possible, and when interpreting unexpected results, we must use
abductive reasoning to determine the most likely explanation (Poldrack 2011). Subsequent
research can then theorize and test the newly discovered unexpected results for generalizability.
Our analysis produced two neurological clusters with significant differences that suggest
participants experienced cognitive dissonance. The first cluster was, as hypothesized, in the frontal
cortices (see Figure 3). Participants showed significantly more ERD in the frontal cortices for
headlines that supported their beliefs but were flagged as false. The differences are across the
upper alpha band and are spread throughout the time period. Increased ERD in the frontal
cortices is associated with increased cognitive activity, including arousal, memory access, and
consciousness (Başar et al. 1999; Krause et al. 2000; Pizzagalli 2007). The frontal cortices are
active during deliberate
cognitive tasks and high-order cognitive processes (Başar et al. 1999; Kilner et al. 2005;
Klimesch et al. 1997; Klimesch et al. 2001; Klimesch et al. 1996; Krause et al. 2000; Moretti et
al. 2013). This indicates that participants expended more cognitive effort considering headlines that
supported their beliefs but were flagged as false than considering other headlines.
The second cluster likewise shows ERD in the frontal cortices, but also includes some
unexpected activity in the right parietal region (Figure 4). Activity in the right parietal region can
indicate encoding and retrieving a stimulus in working memory (Foxe and Snyder 2011; Gevins
et al. 1997; Mevorach et al. 2006). Increased ERD in the right parietal cortex is indicative of
directing attention toward salient stimuli (Foxe and Snyder 2011; Mevorach et al. 2006), and
turning toward a stimulus (rather than away) (Schutter et al. 2001). It has also been linked to
sustained attention to a stimulus being retained in working memory and encoding or retrieval of
semantic memory (Gevins et al. 1997; Klimesch et al. 1997; Klimesch et al. 2001). Both
interpretations provide a similar conclusion: individuals paid more attention to headlines
supporting their beliefs that were flagged as false.
Taken together, these two clusters indicate that subjects spent more cognitive effort
considering a headline that supported their beliefs but was flagged as false compared to other
headlines (e.g., those supporting their beliefs but not flagged, or challenging their beliefs
(flagged or not)). This increased cognitive activity also corresponds to the increased time that
subjects spent on these headlines (Table 1). Thus, H1 is supported.
Post hoc Analysis on Confirmation Bias
We also conducted a post hoc analysis investigating cognition in the presence of
confirmation bias, because past research in a different context (virtual team decision making – a
utilitarian mindset) has found that individuals are more likely to engage in cognitive activity
when they encounter information that supports their opinions and simply ignore information that
opposes their opinions (Minas et al. 2014). The context in this study is different because social
media users are in a hedonic rather than a utilitarian mindset (Chauhan and Pillai 2013). Hedonic
and utilitarian motivation have been shown to have differential effects on confirmation bias
(Borrero and Henao 2017; Stone and Wood 2018).
This post hoc analysis found increased cognitive activity in two clusters when
participants saw headlines aligned with their opinions (i.e., when confirmation bias was present):
the frontal cortices (Figure 5a); and the right parietal and somatosensory region (Figure 5b). As
noted above, increased activity in the frontal cortices is linked to increased higher order
cognitive processes, while increased activity in the right parietal is linked to focusing attention.
Increased ERD in the somatosensory region has been linked to motion, planning for motion, or
tactile sensations (Hari et al. 1998; Porro et al. 1996), which is hard to interpret in this situation.
We conclude that once participants realized that a headline supported their political opinions,
they directed their cognitive attention to the headline, but when they realized that a headline
challenged their opinions, they did not direct attention to it.
DISCUSSION
Our study examined whether a fake news flag helped social media users discern true
news from fake news. Our results show that the fake news flag did not influence user beliefs,
although it triggered more cognition and increased the time spent considering the headline.
Instead, users were more likely to believe news headlines they wanted to be true. Table 2
summarizes the results.
The EEG results show two interesting findings. First, in the absence of a fake news flag,
significantly more event-related alpha desynchronization was observed in the frontal cortices and
the right parietal for headlines that supported the participant’s beliefs. In other words, once
participants recognized that a headline supported their beliefs, they devoted attention to it; once
they recognized that a headline opposed their beliefs, they did not process it further.
Second, when participants viewed headlines that supported their beliefs but were flagged
as fake, cognitive dissonance occurred. The EEG results showed activation in the frontal cortices
and right parietal, which suggests that participants engaged in additional cognition to resolve the
cognitive dissonance. However, this additional cognition only resulted in users deciding to
disregard the fake news flag and believe the fake news article.
We see three key conclusions in our results. First, the presence of a fake news flag did
not affect how participants perceived the credibility of the headlines. The time and EEG results
indicate that the flag caused cognitive dissonance and induced participants to think more deeply
about the truth of the headline. However, the cognitive dissonance triggered by the flag was not
enough to overcome participants’ inherent confirmation bias; although they thought more, this
additional thought did not cause them to believe the headline less. The flag was simply not
strong enough to make users overcome their a priori beliefs. Perhaps in the era of fake news,
users are more likely to dismiss information that challenges their opinions as being fake. This
fake news flag is an ineffective remedy for fake news; again, we note that Facebook discontinued
its use in 2017 (Meixler 2017).
Second, confirmation bias drives beliefs. Participants were more likely to believe and
process headlines that aligned with their beliefs. This likely did not result from additional
knowledge; participants were not correct in their perceptions of truth. Rather, participants likely
found these topics more credible because any attempt to believe that the headlines were false
would result in cognitive dissonance. Rather than expend cognitive effort to consider the actual
truth of the article, participants rejected reality in favor of their a priori beliefs (Allcott and
Gentzkow 2017; Koriat et al. 1980; McKenzie 2006; Nickerson 1998). We confirm that social
media is highly subject to confirmation bias.
Third, the real underlying truth of the headlines had little effect on whether participants
believed the headlines or not. Participants were not more likely to believe headlines that were
verifiably true. It may be that increased awareness of fake news caused participants
to be naturally more skeptical of all headlines presented; the mean credibility score of 3.7 across
all headlines (below the scale midpoint of 4) suggests a slight bias towards skepticism.
Limitations
We began by theorizing that the context of social media is important; social media use is
often hedonic as users browse on cell phones while waiting in line, on laptops while watching
TV, and so on. Yet we studied use in the cold, clinical context of a lab experiment, where we
could carefully control exogenous factors. Ecological validity was challenged, as participants
were wearing an Emotiv headset, the headlines were displayed on separate pages instead of a
scrolling homepage, and participants were not on their own social media pages. This setting may
have triggered a utilitarian mindset of thinking more deliberately about the headlines than the
normal everyday setting of social media use, perhaps weakening the effects we found. Thus, the
effects observed in the lab may be understated; the real problem may be worse. We note that
other studies using neurophysiological data have the same limitations (Vance et al. 2018).
Our study also suffers from the other usual limitations of laboratory studies. Participants
were undergraduate students, which may not be representative of the population as a whole
(Koriat et al. 1980). They had experience with social media, but care should be taken when
generalizing to populations that may not have as much experience. The neuroanatomy of some
young adults changes through their teens into their mid-20s, with neuroplasticity (i.e., changes to
neural structures and networks) also prevalent in adulthood (Draganski et al. 2004; Sowell et al.
1999). However, we are unaware of any studies that show systematic differences in alpha
attenuation between young adults and adults decades older.
Additional limitations come from the context of fake news itself. This context is
continually changing, as concerns about fake news wax and wane, types of misinformation
change and online social media platforms are revised in response.
Implications for Research
First and foremost, our research shows that the fake news flag triggered more cognitive
activity and caused users to spend more time when the flag was placed on headlines they wanted
to believe. However, it did not change the result of that cognition. At one level, this indicates that
fact-checking may have promise (because it did trigger deeper cognition to resolve cognitive
dissonance). But it may be optimistic to believe that a simple “disputed” flag might trigger the
deep introspection needed to overcome confirmation bias and resolve cognitive dissonance.
Future research needs to develop and test a stronger signaling mechanism for the results of fact-
checking, one that might be strong enough to overcome confirmation bias. Perhaps this could be
a different flag with stronger words, or a different type of intervention altogether.
Second, the post-hoc analysis indicates that future research needs to understand how we
can overcome confirmation bias in the use of social media. Our results show that once users
recognized that a headline challenged their a priori beliefs, they stopped thinking about it. In other
words, confirmation bias is so strong in social media use that users simply stop thinking about
information they don’t like. We need more research on confirmation bias in social media use.
Third, we used EEG to complement other sources of data such as self-reported data (i.e.,
beliefs in the headlines) and observed data (i.e., time taken), as has been advocated in the use of
neurophysiological tools in IS research (Dimoka et al. 2012). The primary advantages of
neurophysiological data are that they are generally not susceptible to subjectivity bias, social
desirability bias, and demand effects (Dimoka et al. 2012). The use of three distinct types of data
enabled us to triangulate across the different sources to better understand the phenomenon
(Dimoka et al. 2012). We encourage future research to consider using neurophysiological data.
Finally, our results add to the growing list of evidence that fake news is becoming a
major societal problem (House of Commons 2018; Weiss 2017). Many solutions have been
proposed, and many pundits have offered opinions. However, we have little empirical research
on the effectiveness (or lack thereof) of the many proffered options. We need more research on
ways to improve social media users’ ability to discern truth from fiction, and more research on
ways to induce social media users to invest more time and attention in the news they see and to
restrain from spreading fake news without reading it (Gabielkov et al. 2016).
Implications for Practice
Facebook’s fake news flag had no effect on beliefs. It did not induce participants to
conclude that a news article was less credible. They spent more time when a headline they
supported was flagged as fake, but the flag did not change their beliefs. Perhaps more
importantly, the actual truth of a headline did not influence users’ beliefs; users were generally
unable to accurately separate true news from fake news. We conclude that we need to develop a
better method for warning social media users of fake news.
People will continue to consume news on social media and will continue to struggle to
determine its truthfulness. The sheer volume of fake news on social media (Silverman 2016)
means that this problem is unlike any we have seen before; “quantity has a quality all its own”
(attributed to Josef Stalin). There are real and demonstrable consequences from enabling the
spread of posts that are verifiably false in order to spread disinformation or profit from users’
gullibility (House of Commons 2018). We believe that Facebook and other social media firms
have a responsibility to better enable their users to discern truth from fiction (see House of
Commons 2018).
Figure 1: Fake News Flag on a Facebook Headline
Figure 2. Position of the electrodes on the EEG headset with labels along the 10-20
system.
Figure 3: Differences in Frontal Cortices Cluster due to Cognitive Dissonance
The left and middle panels show alpha activation for headlines we theorize create dissonance (those that supported
the participant’s beliefs and were flagged as false; middle panel) versus all other headlines (left panel); cooler colors
(i.e., blue) indicate greater cognition (i.e., greater event-related desynchronization). The right panel shows
significant differences (in red) between the two panels, at p = .01 with a false discovery rate correction for multiple
comparisons. In the scalp map, red indicates the regions identified as being active (i.e., contributing the most
variance) in the cluster.
Figure 4. Differences in Frontal and Right Parietal Cluster due to Cognitive Dissonance
This analysis shows cognition for headlines that supported a participant’s political beliefs and were flagged as false
(middle panel) versus all other headlines (left panel); blue indicates greater desynchronization. The right panel
shows significant differences (in red) between the two panels, at p = .01 with a false discovery rate correction.
a) Frontal cortices
b) Right parietal and somatosensory
Figure 5: Differences due to Confirmation Bias
Headlines supporting the participants’ beliefs trigger greater cognitive activity in the frontal cortices (Panel a) and in
the right parietal and somatosensory regions (Panel b).
Table 1: Behavioral Outcomes

                                       Credibility              Time Spent in Seconds
Factor                           Coefficient   P-value      Coefficient   P-value
Intercept                          3.328***      0.000         8.166        0.062
Political Party (Democrat=1)       0.312*        0.016        -0.913        0.248
Gender                             0.053         0.689         0.169        0.836
Age                                0.012         0.738         0.116        0.598
Aligned with Beliefs               0.177*        0.014        -0.637        0.179
Actually True                     -0.130*        0.014        -0.064        0.855
Flagged as False                   0.033         0.601         1.364***     0.001
Flagged X Aligned with Beliefs    -0.035         0.762         1.884*       0.014

Note: * p<.05, ** p<.01, *** p<.001

Table 2: Summary of Results

Hypothesis   Description                                                      Supported
H1           Social media users will exhibit increased cognitive activity    Yes
             in the frontal cortex when seeing a fake news flag on a
             headline aligned with their beliefs.
H2           Social media users will spend more time when seeing a fake      Yes
             news flag on a headline aligned with their beliefs.
H3           Social media users will perceive headlines aligned with their   No
             beliefs that are flagged as fake as being less credible.
REFERENCES
Abramson, J. B., Orren, G. R., and Arterton, F. C. 1990. Electronic Commonwealth: The Impact
of New Media Technologies on Democratic Politics. Basic Books, Inc.
Achtziger, A., and Alós-Ferrer, C. 2013. "Fast or Rational? A Response-Times Study of
Bayesian Updating," Management Science (60:4), pp. 923-938.
Allcott, H., and Gentzkow, M. 2017. "Social Media and Fake News in the 2016 Election,"
National Bureau of Economic Research.
Aronson, E. 1969. "The Theory of Cognitive Dissonance: A Current Perspective," Advances in
experimental social psychology (4), pp. 1-34.
Badcock, N. A., Mousikou, P., Mahajan, Y., de Lissa, P., Thie, J., and McArthur, G. 2013.
"Validation of the Emotiv Epoc® Eeg Gaming System for Measuring Research Quality
Auditory Erps," PeerJ (1), p. e38.
Bago, B., and De Neys, W. 2017. "Fast Logic?: Examining the Time Course Assumption of Dual
Process Theory," Cognition (158), pp. 90-109.
Bargh, J. A., and Ferguson, M. J. 2000. "Beyond Behaviorism: On the Automaticity of Higher
Mental Processes," Psychological bulletin (126:6), p. 925.
Barthel, M., Mitchell, A., and Holcomb, J. 2016. "Many Americans Believe Fake News Is
Sowing Confusion," Pew Research Center (15).
Başar, E., Başar-Eroğlu, C., Karakaş, S., and Schürmann, M. 1999. "Are Cognitive Processes
Manifested in Event-Related Gamma, Alpha, Theta and Delta Oscillations in the EEG?,"
Neuroscience Letters (259:3), pp. 165-168.
BBC. 2017. "Prices for Fake News Campaigns Revealed," in: BBC News.
Bellini-Leite, S. d. C. 2013. "The Embodied Embedded Character of System 1 Processing,"
Mens Sana Monographs (11:1), pp. 239-252.
Beltramini, R. F. 1988. "Perceived Believability of Warning Label Information Presented in
Cigarette Advertising," Journal of Advertising (17:2), pp. 26-32.
Bennett, W. L., and Iyengar, S. 2008. "A New Era of Minimal Effects? The Changing
Foundations of Political Communication," Journal of Communication (58:4), pp. 707-
731.
Berger, H. 1929. "On the EEG in Humans," Arch. Psychiatr. Nervenkr (87), pp. 527-570.
Borrero, S., and Henao, F. 2017. "Can Managers Be Really Objective? Bias in Multicriteria
Decision Analysis," Academy of Strategic Management Journal (16:1).
Bozdag, E., and van den Hoven, J. 2015. "Breaking the Filter Bubble: Democracy and Design,"
Ethics and Information Technology (17:4), pp. 249-265.
Cacioppo, J. T., Cacioppo, S., and Petty, R. E. 2018. "The Neuroscience of Persuasion: A
Review with an Emphasis on Issues and Opportunities," Social neuroscience (13:2), pp.
129-172.
Cacioppo, J. T., Petty, R. E., Kao, C. F., and Rodriguez, R. 1986. "Central and Peripheral Routes
to Persuasion: An Individual Difference Approach," Journal of Personality and Social
Psychology (51:5), pp. 1032-1043.
Carlston, D. E., and Skowronski, J. J. 1994. "Savings in the Relearning of Trait Information as
Evidence for Spontaneous Inference Generation," Journal of Personality and Social Psychology (66:5), pp. 840-856.
Cerf, V. G. 2016. "Information and Misinformation on the Internet," Commun. ACM (60:1), pp.
9-9.
Chaiken, S. 1980. "Heuristic Versus Systematic Information Processing and the Use of Source
Versus Message Cues in Persuasion," Journal of Personality and Social Psychology
(39:2), pp. 752-766.
Chaiken, S., and Eagly, A. H. 1983. "Communication Modality as a Determinant of Persuasion:
The Role of Communicator Salience," Journal of Personality and Social Psychology
(45:2), pp. 241-256.
Chauhan, K., and Pillai, A. 2013. "Role of Content Strategy in Social Media Brand
Communities: A Case of Higher Education Institutes in India," Journal of Product &
Brand Management (22:1), pp. 40-51.
Cohen, L. 1995. Time-Frequency Analysis. Prentice hall.
Colleoni, E., Rozza, A., and Arvidsson, A. 2014. "Echo Chamber or Public Sphere? Predicting
Political Orientation and Measuring Political Homophily in Twitter Using Big Data,"
Journal of Communication (64:2), pp. 317-332.
De Neys, W. 2014. "Conflict Detection, Dual Processes, and Logical Intuitions: Some
Clarifications," Thinking & Reasoning (20:2), pp. 169-187.
De Neys, W. 2018. Dual Process Theory 2.0. New York: Routledge.
Debener, S., Minow, F., Emkes, R., Gandras, K., and Vos, M. 2012. "How About Taking a Low‐
Cost, Small, and Wireless EEG for a Walk?," Psychophysiology (49:11), pp. 1617-1621.
Dennis, A. R., and Minas, R. K. 2018. "Security on Autopilot: Why Current Security Theories
Hijack Our Thinking and Lead Us Astray," The DATA BASE for Advances in Information
Systems (49:SI), pp. 15-38.
Cambridge Dictionary. 2018. "Democracy," in: Cambridge Dictionary.
Dimoka, A., Banker, R. D., Benbasat, I., Davis, F. D., Dennis, A. R., Gefen, D., Gupta, A.,
Ischebeck, A., Kenning, P. H., and Pavlou, P. A. 2012. "On the Use of
Neurophysiological Tools in IS Research: Developing a Research Agenda for NeuroIS,"
MIS Quarterly (36:3).
Draganski, B., Gaser, C., Busch, V., Schuierer, G., Bogdahn, U., and May, A. 2004.
"Neuroplasticity: Changes in Grey Matter Induced by Training," Nature (427:6972), p.
311.
Dubois, A., and Gadde, L.-E. 2002. "Systematic Combining: An Abductive Approach to Case
Research," Journal of Business Research (55:7), pp. 553-560.
Dwoskin, E., and Shaban, H. 2018. "Facebook Will Now Ask Users to Rank News
Organizations They Trust," in: The Washington Post.
Dwoskin, E., and Timberg, C. 2018. "How Merchants Use Facebook to Flood Amazon with
Fake Reviews," in: The Washington Post.
Evans, J. S. B. T. 2008. "Dual-Processing Accounts of Reasoning, Judgment, and Social
Cognition," Annual Review of Psychology (59:1), pp. 255-278.
Evans, J. S. B. T. 2014. "Two Minds Rationality," Thinking & Reasoning (20:2), pp. 129-146.
Evans, J. S. B. T., and Stanovich, K. E. 2013. "Dual-Process Theories of Higher Cognition:
Advancing the Debate," Perspectives on Psychological Science (8:3), pp. 223-241.
Everett, J. A. C. 2013. "The 12 Item Social and Economic Conservatism Scale (SECS)," PLOS
ONE (8:12), p. e82131.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., and Kardes, F. R. 1986. "On the Automatic
Activation of Attitudes," Journal of Personality and Social Psychology (50:2), pp. 229-
238.
Festinger, L. 1962. A Theory of Cognitive Dissonance. Stanford university press.
Fischer, D. H. 1970. Historians' Fallacies. New York: Harper and Row.
Fisher, M., Cox, J. W., and Hermann, P. 2016. "Pizzagate: From Rumor, to Hashtag, to Gunfire
in D.C.," The Washington Post.
Foxe, J. J., and Snyder, A. C. 2011. "The Role of Alpha-Band Brain Oscillations as a Sensory
Suppression Mechanism During Selective Attention," Frontiers in Psychology (2), p. 154.
Friedman, D., Shapira, S., Jacobson, L., and Gruberger, M. 2015. "A Data-Driven Validation of
Frontal EEG Asymmetry Using a Consumer Device," 2015 International Conference on
Affective Computing and Intelligent Interaction (ACII), pp. 930-937.
Gabielkov, M., Ramachandran, A., Chaintreau, A., and Legout, A. 2016. "Social Clicks: What
and Who Gets Read on Twitter?," ACM SIGMETRICS / IFIP Performance 2016, Antibes
Juan-les-Pins, France.
Gevins, A., Smith, M. E., McEvoy, L., and Yu, D. 1997. "High-Resolution EEG Mapping of
Cortical Activation Related to Working Memory: Effects of Task Difficulty, Type of
Processing, and Practice," Cerebral Cortex (7:4), pp. 374-385.
Gottfried, J., and Shearer, E. 2016. News Use across Social Media Platforms 2016. Pew
Research Center.
Granville, K. 2018. "Facebook and Cambridge Analytica: What You Need to Know as Fallout
Widens," The New York Times.
Haidt, J. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion.
Vintage.
Hari, R., Forss, N., Avikainen, S., Kirveskari, E., Salenius, S., and Rizzolatti, G. 1998.
"Activation of Human Primary Motor Cortex During Action Observation: A
Neuromagnetic Study," Proceedings of the National Academy of Sciences (95:25), pp.
15061-15065.
Hirschman, E. C., and Holbrook, M. B. 1982. "Hedonic Consumption: Emerging Concepts,
Methods and Propositions," Journal of Marketing (46:3), pp. 92-101.
Hong, W., Chan, F. K. Y., Thong, J. Y. L., Chasalow, L. C., and Dhillon, G. 2014. "A
Framework and Guidelines for Context-Specific Theorizing in Information Systems
Research," Information Systems Research (25:1), pp. 111-136.
House of Commons. 2018. "Disinformation and ‘Fake News’: Select Committee on Digital,
Culture, Media and Sport Interim Report," from
https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-
culture-media-and-sport-committee/news/fake-news-report-published/
Institute for Propaganda Analysis. 1938. "Propaganda Analysis: Volume I of the Publications of
the Institute for Propaganda Analysis, Inc. With New Materials to Aid Student and Adult
Groups in the Analysis of Today's Propaganda," Institute for Propaganda Analysis (I).
Johns, G. 2006. "The Essential Impact of Context on Organizational Behavior," Academy of
Management Review (31:2), pp. 386-408.
Johns, G. 2017. "Reflections on the 2016 Decade Award: Incorporating Context in
Organizational Research," Academy of Management Review (42:4), pp. 577-595.
Johnson, T. J., and Kaye, B. K. 2015. "Reasons to Believe: Influence of Credibility on
Motivations for Using Social Networks," Computers in Human Behavior (50), pp. 544-
555.
Jonas, E., Schulz-Hardt, S., Frey, D., and Thelen, N. 2001. "Confirmation Bias in Sequential
Information Search after Preliminary Decisions: An Expansion of Dissonance
Theoretical Research on Selective Exposure to Information," Journal of Personality and
Social Psychology (80:4), pp. 557-571.
Kahneman, D. 2011. Thinking, Fast and Slow. Macmillan.
Kelly, S. P., Lalor, E. C., Reilly, R. B., and Foxe, J. J. 2006. "Increases in Alpha Oscillatory
Power Reflect an Active Retinotopic Mechanism for Distracter Suppression During
Sustained Visuospatial Attention," Journal of Neurophysiology (95:6), pp. 3844-3851.
Kilner, J. M., Mattout, J., Henson, R., and Friston, K. J. 2005. "Hemodynamic Correlates of EEG:
A Heuristic," NeuroImage (28:1), pp. 280-286.
Kim, A., and Dennis, A. R. 2018. "Says Who?: How News Presentation Format Influences
Perceived Believability and the Engagement Level of Social Media Users," Proceedings
of the Hawaii International Conference on System Sciences, Waikoloa, HI.
Kirby, E. J. 2016. "The City Getting Rich from Fake News," BBC News.
Kitchen, P. J., Kerr, G., Schultz, D. E., McColl, R., and Pals, H. 2014. "The Elaboration
Likelihood Model: Review, Critique and Research Agenda," European Journal of
Marketing (48:11/12), pp. 2033-2050.
Klimesch, W. 2012. "Alpha-Band Oscillations, Attention, and Controlled Access to Stored
Information," Trends in Cognitive Sciences (16:12), pp. 606-617.
Klimesch, W., Doppelmayr, M., Pachinger, T., and Ripper, B. 1997. "Brain Oscillations and
Human Memory: EEG Correlates in the Upper Alpha and Theta Band," Neuroscience
Letters (238:1), pp. 9-12.
Klimesch, W., Doppelmayr, M., Stadler, W., Pöllhuber, D., Sauseng, P., and Röhm, D. 2001.
"Episodic Retrieval Is Reflected by a Process Specific Increase in Human
Electroencephalographic Theta Activity," Neuroscience Letters (302:1), pp. 49-52.
Klimesch, W., Schimke, H., Doppelmayr, M., Ripper, B., Schwaiger, J., and Pfurtscheller, G.
1996. "Event-Related Desynchronization (ERD) and the Dm Effect: Does Alpha
Desynchronization During Encoding Predict Later Recall Performance?," International
Journal of Psychophysiology (24:1-2), pp. 47-60.
Knobloch‐Westerwick, S., and Lavis, S. M. 2017. "Selecting Serious or Satirical, Supporting or
Stirring News? Selective Exposure to Partisan Versus Mockery News Online Videos,"
Journal of Communication (67:1), pp. 54-81.
Koriat, A., Lichtenstein, S., and Fischhoff, B. 1980. "Reasons for Confidence," Journal of
Experimental Psychology: Human Learning and Memory (6:2), pp. 107-118.
Krause, C. M., Sillanmäki, L., Koivisto, M., Saarela, C., Häggqvist, A., Laine, M., and
Hämäläinen, H. 2000. "The Effects of Memory Load on Event-Related EEG
Desynchronization and Synchronization," Clinical Neurophysiology (111:11), pp. 2071-
2078.
Maheshwari, S. 2017. "20th Century Fox Gives Real Apology for a Fake News Campaign," in:
The New York Times. Business Day.
Makeig, S. 1993. "Auditory Event-Related Dynamics of the EEG Spectrum and Effects of
Exposure to Tones," Electroencephalography and Clinical Neurophysiology (86:4), pp.
283-293.
Makeig, S., Westerfield, M., Jung, T.-P., Enghoff, S., Townsend, J., Courchesne, E., and
Sejnowski, T. 2002. "Dynamic Brain Sources of Visual Evoked Responses," Science
(295:5555), pp. 690-694.
McGrath, G. 1986. "Measuring the Concept of Credibility," Journalism Quarterly (63:3), p. 12.
McKenzie, C. R. 2006. "Increased Sensitivity to Differentially Diagnostic Answers Using
Familiar Materials: Implications for Confirmation Bias," Memory & Cognition (34:3), pp.
577-588.
Meixler, E. 2017. "Facebook Is Dropping Its Fake News Red Flag Warning after Finding It Had
the Opposite Effect," Time.
Mellencamp, J. 2004. "Walk Tall," in: Words & Music: John Mellencamp's Greatest Hits. Island.
Melnikoff, D. E., and Bargh, J. A. 2018. "The Mythical Number Two," Trends in Cognitive
Sciences (22:4), pp. 280-293.
Mevorach, C., Humphreys, G. W., and Shalev, L. 2006. "Effects of Saliency, Not Global
Dominance, in Patients with Left Parietal Damage," Neuropsychologia (44:2), pp. 307-
319.
Mills, J. 1999. "Improving the 1957 Version of Dissonance Theory," in Cognitive Dissonance:
Progress on a Pivotal Theory in Social Psychology. Washington, DC, US: American
Psychological Association, pp. 25-42.
Minas, R. K., Dennis, A. R., Potter, R. F., and Kamhawi, R. 2017. "Triggering Insight: Using
Neuroscience to Understand How Priming Changes Individual Cognition During
Electronic Brainstorming," Decision Sciences.
Minas, R. K., Potter, R. F., Dennis, A. R., Bartelt, V. L., and Bae, S. 2014. "Putting on the
Thinking Cap: Using Neurois to Understand Information Processing Biases in Virtual
Teams," Journal of Management Information Systems (30:4), pp. 49-82.
Moretti, D. V., Paternicò, D., Binetti, G., Zanetti, O., and Frisoni, G. B. 2013. "EEG Upper/Low
Alpha Frequency Power Ratio Relates to Temporo-Parietal Brain Atrophy and Memory
Performances in Mild Cognitive Impairment," Frontiers in Aging Neuroscience (5), p. 63.
Müller-Putz, G. R., Riedl, R., and Wriessnegger, S. C. 2015. "Electroencephalography (EEG) as a
Research Tool in the Information Systems Discipline: Foundations, Measurement, and
Applications," Communications of the Association for Information Systems (37), p. 46.
NewsGuard. 2018. "NewsGuard: Criteria for and Explanation of Ratings." Retrieved July 28,
2018, from https://newsguardtechnologies.com/our-ratings/
Nickerson, R. S. 1998. "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises," Review
of General Psychology (2:2), pp. 175-220.
Peirce, C. S. 1931-1958. "Collected Papers of Charles Sanders Peirce," in: Volumes 1-8.
Cambridge, MA: Harvard University Press.
Pennycook, G., Neys, W. D., Evans, J. S. B. T., Stanovich, K. E., and Thompson, V. A. 2018.
"The Mythical Dual-Process Typology," Trends in Cognitive Sciences (22:8), pp. 667-
668.
Petty, R. E., and Cacioppo, J. T. 1986. Communication and Persuasion: Central and Peripheral
Routes to Attitude Change. New York: Springer-Verlag.
Pizzagalli, D. A. 2007. "Electroencephalography and High-Density Electrophysiological Source
Localization," in Handbook of Psychophysiology, J. Cacioppo and G.B. Tassinary (eds.).
New York: Cambridge University Press, pp. 56-84.
Poldrack, R. A. 2011. "Inferring Mental States from Neuroimaging Data: From Reverse
Inference to Large-Scale Decoding," Neuron (72:5), pp. 692-697.
Porro, C. A., Francescato, M. P., Cettolo, V., Diamond, M. E., Baraldi, P., Zuiani, C., Bazzocchi,
M., and Di Prampero, P. E. 1996. "Primary Motor and Sensory Cortex Activation During
Motor Performance and Motor Imagery: A Functional Magnetic Resonance Imaging
Study," Journal of Neuroscience (16:23), pp. 7688-7698.
Potter, R. F., and Bolls, P. 2012. Psychophysiological Measurement and Meaning: Cognitive and
Emotional Processing of Media. Routledge.
Ramírez-Cortes, J. M., Alarcon-Aquino, V., Rosas-Cholula, G., Gomez-Gil, P., and Escamilla-
Ambrosio, J. 2010. "P-300 Rhythm Detection Using ANFIS Algorithm and Wavelet
Feature Extraction in EEG Signals," Proceedings of the World Congress on Engineering
and Computer Science: International Association of Engineers, San Francisco, pp. 963-
968.
Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., and Menczer, F.
2011. "Truthy: Mapping the Spread of Astroturf in Microblog Streams," Proceedings of
the 20th International Conference Companion on World Wide Web: ACM, pp. 249-252.
Rice, R. E., Gustafson, A., and Hoffman, Z. 2018. "Frequent but Accurate: A Closer Look at
Uncertainty and Opinion Divergence in Climate Change Print News," Environmental
Communication (12:3), pp. 301-321.
Roberts, D. 2013. "Yelp’s Fake Review Problem," Fortune.
Schaedel, S. 2017. "How to Flag Fake News on Facebook." from
http://www.factcheck.org/2017/07/flag-fake-news-facebook/
Schutter, D. J., Putman, P., Hermans, E., and van Honk, J. 2001. "Parietal Electroencephalogram
Beta Asymmetry and Selective Attention to Angry Facial Expressions in Healthy Human
Subjects," Neuroscience Letters (314:1-2), pp. 13-16.
Shane, S. 2017. "The Fake Americans Russia Created to Influence the Election." from
https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html
Shao, C., Ciampaglia, G. L., Flammini, A., and Menczer, F. 2016. "Hoaxy: A Platform for
Tracking Online Misinformation," Proceedings of the 25th International Conference
Companion on World Wide Web, pp. 745-750.
Silverman, C. 2016. "This Analysis Shows How Viral Fake Election News Stories Outperformed
Real News on Facebook," Buzzfeed News (16).
Sowell, E. R., Thompson, P. M., Holmes, C. J., Jernigan, T. L., and Toga, A. W. 1999. "In Vivo
Evidence for Post-Adolescent Brain Maturation in Frontal and Striatal Regions," Nature
Neuroscience (2:10), p. 859.
Srull, T. K., and Wyer, R. S. 1979. "The Role of Category Accessibility in the Interpretation of
Information About Persons: Some Determinants and Implications," Journal of
Personality and Social Psychology (37:10), pp. 1660-1672.
Srull, T. K., and Wyer, R. S. 1980. "Category Accessibility and Social Perception: Some
Implications for the Study of Person Memory and Interpersonal Judgments," Journal of
Personality and Social Psychology (38:6), pp. 841-856.
Srull, T. K., and Wyer, R. S. 1983. "The Role of Control Processes and Structural Constraints in
Models of Memory and Social Judgment," Journal of Experimental Social Psychology
(19:6), pp. 497-521.
Stanovich, K. E. 1999. Who Is Rational?: Studies of Individual Differences in Reasoning.
Psychology Press.
Statista. 2018. "Number of Monthly Active Facebook Users Worldwide as of 1st Quarter 2018
(in Millions)."
Stone, D. F., and Wood, D. H. 2018. "Cognitive Dissonance, Motivated Reasoning, and
Confirmation Bias: Applications in Industrial Organization," Handbook of Behavioral
Industrial Organization.
Sydell, L. 2016. "We Tracked Down a Fake-News Creator in the Suburbs. Here’s What We
Learned." National Public Radio.
Taylor, G. S., and Schmidt, C. 2012. "Empirical Evaluation of the Emotiv EPOC BCI Headset for
the Detection of Mental Actions," Proceedings of the Human Factors and Ergonomics
Society Annual Meeting (56:1), pp. 193-197.
Taylor, S. E., and Fiske, S. T. 1978. "Salience, Attention, and Attribution: Top of the Head
Phenomena," in Advances in Experimental Social Psychology, L. Berkowitz (ed.).
Academic Press, pp. 249-288.
Tewksbury, D. 2003. "What Do Americans Really Want to Know? Tracking the Behavior of
News Readers on the Internet," Journal of Communication (53:4), pp. 694-710.
The Wall Street Journal. 2016. "Blue Feed, Red Feed." from http://graphics.wsj.com/blue-feed-
red-feed/
Thompson, V. A. 2013. "Why It Matters: The Implications of Autonomous Processes for Dual
Process Theories—Commentary on Evans & Stanovich (2013)," Perspectives on
Psychological Science (8:3), pp. 253-256.
Thompson, V. A., Prowse Turner, J. A., and Pennycook, G. 2011. "Intuition, Reason, and
Metacognition," Cognitive Psychology (63:3), pp. 107-140.
Turel, O., and Qahri-Saremi, H. 2016. "Problematic Use of Social Networking Sites:
Antecedents and Consequences from a Dual-System Theory Perspective," Journal of
Management Information Systems (33:4), pp. 1087-1116.
Van de Ven, A. H. 2007. Engaged Scholarship: A Guide for Organizational and Social
Research. Oxford University Press.
Vance, A., Jenkins, J. L., and Anderson, B. B. 2018. "Tuning out Security Warnings: A
Longitudinal Examination of Habituation through fMRI, Eye Tracking, and Field
Experiments," MIS Quarterly (42:2), pp. 355-380.
Wang, S., Gwizdka, J., and Chaovalitwongse, W. A. 2015. "Using Wireless EEG Signals to
Assess Memory Workload in the n-Back Task," IEEE Transactions on Human-Machine
Systems (PP:99), pp. 1-12.
Weiss, R. 2017. "Nip Misinformation in the Bud," Science (358:6362), p. 427.
Wintersieck, A. L. 2017. "Debating the Truth: The Impact of Fact-Checking During Electoral
Debates," American Politics Research (45:2), pp. 304-331.
Zuckerberg, M. 2016. "Status Update." Facebook.com.