Fighting Misinformation: Where Are We
and Where to Go?
Huyen Nguyen, Lydia Ogbadu-Oladapo, Irhamni Ali, Haihua Chen, and Jiangping Chen
University of North Texas, Denton, TX 76203, USA
Jiangping.chen@unt.edu
Abstract. This study reviews existing research on misinformation. Our purposes are to understand the major research topics that have been investigated by researchers from a variety of disciplines, and to identify important areas for further exploration by library and information science scholars. We conducted automatic descriptive analysis and manual content analysis after selecting journal articles from four major databases. The automatic analysis of 5,586 journal articles demonstrated that misinformation has been an increasingly popular research area over the past 12 years, and that scholars in more than 1,200 fields of study have published related articles in more than 2,400 journals. Topics explored include misinformation environments, the impact of misinformation, users/victims, types of misinformation, misinformation detection and correction, and others. The content analysis of 151 articles published in library and information studies journals found that more than 40 different theories/models/frameworks have been applied to understand or fight misinformation. Furthermore, information scholars have suggested that misinformation research could be explored further in five categories: further understanding misinformation, its spread, and its impacts; misinformation detection and correction; policy and education to fight misinformation; more case studies; and more theory and model development. This study provides a broad picture of misinformation research, which allows researchers and practitioners to better plan and develop their projects and strategies for fighting misinformation. It also provides evidence to information schools to enhance curriculum development for educating the next generation of information professionals.
Keywords: Misinformation · Systematic literature review · Topic analysis
1 Introduction
Misinformation has been a challenge for the public, the social media industry, and academia, especially with the advancement of the Internet, social media, and other information technologies. False messages posted on social media can lead to incorrect understanding and even harmful behavior by individuals. The negative impact of misinformation can be tremendous and cause great damage to society; social media platforms must spend time and effort detecting misinformation and balancing free speech against misleading messages; and scholars in multiple
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
I. Sserwanga et al. (Eds.): iConference 2023, LNCS 13971, pp. 371–394, 2023.
https://doi.org/10.1007/978-3-031-28035-1_27
disciplines, such as medicine, computer science, communication, and political science, are
actively exploring theories, methods, and technologies to explain, detect, and correct
misinformation.
As information professionals, we are at the forefront of helping information users
fight misinformation [1, 2]. Although extensive research has been conducted, no systematic understanding of this effort exists. There is no big picture of which disciplines are active, what has been explored, and, especially, what information scientists have accomplished in this area. We believe information scientists could do much more to help the general public, to educate modern information professionals, and to provide insights on future solutions to misinformation. However, further exploration must be based on a good understanding of what has been done so far.
The purposes of this study are to understand the major research topics investigated by researchers from a variety of disciplines, and to examine the important areas for further exploration by information scientists. Specifically, we would like to answer the following questions:
• RQ1: What are the characteristics of existing literature (2010–2022) on misinformation?
• RQ2: What are the major topics explored by researchers from different disciplines?
• RQ3: What are the theories that information scientists have applied to misinformation research?
• RQ4: What are considered important future research topics or directions?
The rest of the paper is organized as follows. Section 2 reviews the definition of misinformation and the few related literature reviews we could find; Sect. 3 describes our research design, including literature selection and data analysis approaches; Section 4 presents our analytical results on the selected papers' metadata and our manual analysis of library and information science articles. Finally, we discuss our results and conclude the paper.
2 Related Review Studies
2.1 Misinformation Defined
Earlier authors portray misinformation and disinformation as false information [3],
inaccurate information [4], as a virus [5,6], and as “articles that are intentionally and
verifiably false and could mislead readers” [7].
Contemporary authors, however, present misinformation as an umbrella term that includes all inaccurate or false information or reports spreading on social media [8, 9]. Misinformation, disinformation, fake news, rumor, spam, troll, and urban legend all share the wrong message(s) as a common characteristic, which can lead to distress or adverse effects via social media if not curbed [9, 10]. This effect explains why false information and fake news are confused with various other terms associated with inaccurate information. Lazer and colleagues [11] argue that fake news is the most challenging to define, as it consists of fabricated stories that mimic news media content and sometimes intend to accomplish a political goal.
Our study applies a broad definition of misinformation that includes most of the aspects specified by other researchers: misinformation refers to articles, posts, messages, and other representations that are intentionally fabricated and verifiably false, intended to mislead people.
2.2 Literature Analysis of Misinformation Research
Very few review studies have been conducted in this area. Revez and Corujo [2] conducted a systematic literature review on librarians' efforts against fake news. They reviewed 27 articles from 2018–2020 and concluded that librarians could develop formal instruction, apply a framework, and use checklists or learner-centred approaches. Librarians may also produce campaigns through social media or library guides through audio-visual activities.
Another study, by E, Sakura, and Li [12], reviewed 135 papers published before 2020 on misinformation correction and its effects. Their review tried to answer questions about the characteristics of the literature, psychological and behavioral outcomes, and theories used to explain or predict the performance of misinformation correction. They found consistent interest in information correction topics over the past four decades, with a sharp increase in topic relevance in the last ten years. However, most research on misinformation correction has been built on psychological perspectives with quantitative methodologies. They concluded that misinformation treatment is a complex process that sometimes generates unwanted outcomes.
Both reviews presented a systematic process of selecting relevant articles from literature databases, which is very similar to the process we used for this study. Specifically, E, Sakura, and Li [12] used the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) approach to analyze articles collected from Google Scholar, EBSCO Academic, and Web of Science.
3 Research Design
We used a systematic review approach to answer the research questions. A systematic review is defined as "a review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research and to collect and analyze data from the studies that are included in the review" [13]. Automatic analysis and manual content analysis were combined in this study. The whole research process involves the following steps: (1) relevant literature identification, or data collection; (2) article selection and verification; and (3) automatic and manual analysis. Our study starts with retrieving metadata from well-known large scholarly databases. The collected data were then filtered using the inclusion and exclusion criteria we predefined for automatic and manual analysis. After that, the selected data sets were analyzed automatically or manually to answer the research questions (Fig. 1).
Fig. 1. Research design: Semi-automatic systematic review workflow
3.1 Data Collection
Misinformation covers many related terms, such as "disinformation", "fake news", "rumors", "spams", and "troll" [9, 14, 15], which are often used interchangeably. Therefore, together with "misinformation", we used these terms as queries to retrieve misinformation-related articles from the following four well-known databases: Web of Science, Scopus, ScienceDirect, and Semantic Scholar. Furthermore, we set the publication year filter from 2010 to 2022; that is, the scope of this study covers misinformation-related articles from the past 12 years.
No single database provides complete metadata attributes, which are needed for our data filtering strategy. Therefore, we merged metadata from the different databases using DOI (if available) and titles. Web scraping and the APIs provided by Scopus, ScienceDirect, and Semantic Scholar were used to collect and integrate data from the four sources automatically. As a result, we obtained metadata records of 24,056 articles. Each metadata record may include the following fields: title, authors, venue, publication year, citation count, fields of study, abstract, DOI, query, and database.
3.2 Data Selection and Verification
Data selection and verification, or data cleaning, is necessary to ensure the quality of the data for analysis. Our data selection process includes duplicate removal, journal article identification, and Library and Information Science (LIS) journal article identification. Because the data collected from different sources contained many duplicate records, we normalized the titles by lowercasing them and removing punctuation. Then we removed 8,449 duplicate records by comparing them with each other using a Python program. After this process, 15,607 unique articles were kept. Then we removed 1,780 articles that were published before 2010.
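As a rough illustration of the normalization described above (lowercasing plus punctuation removal before comparing titles), a stdlib sketch — not the authors' actual program:

```python
import re

def normalize(title: str) -> str:
    """Lowercase and strip punctuation, as described for duplicate removal."""
    return re.sub(r"[^\w\s]", "", title.lower()).strip()

def deduplicate(records):
    """Keep the first record for each normalized title, drop the rest."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```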
The remaining 13,823 records contain journal articles, conference proceedings, books, and other types of publications. The quality of these articles varies and is challenging to control. We made the difficult decision to focus our analysis on journal articles. To verify that a paper is a journal article, we gathered the publication-type attributes from Semantic Scholar and compared them with our data set; none of the other databases offer more complete information on publication type than Semantic Scholar. Still, we manually evaluated about 1,200 records that could not be verified automatically. After dropping papers with other publication types, 9,502 journal articles remained. Then we kept the articles that included the metadata fields necessary for our analysis; specifically, we filtered out articles missing values for DOI, abstract, and field of study. We finally selected metadata records of 5,586 articles for our automatic analysis.
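The final missing-value filter is simple enough to sketch in a few lines; the field names below are assumptions, not the authors' actual schema:

```python
# Fields an article must have to survive the final filtering step
# (field names assumed for illustration).
REQUIRED = ("doi", "abstract", "fields_of_study")

def keep_complete(records):
    """Drop records missing any required metadata value, as in the step
    that reduced the 9,502 journal articles to 5,586 analyzable ones."""
    return [r for r in records if all(r.get(f) for f in REQUIRED)]
```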
To answer RQ3 and RQ4, we selected articles published in LIS journals for manual analysis. Based on the LIS journal list provided by SCImago JR, we found 202 LIS journal articles among the 5,586 records; 151 of them were kept for manual content analysis.
3.3 Automatic Metadata Analysis
Our automatic analysis of the metadata of the 5,586 articles was conducted using the pipeline suggested by Chen, Chen, and Nguyen [16]. First, the analysis explores the metadata features using simple descriptive statistics. Then we conducted keyword and topic analysis using statistical models to gain more insight into the topics of the articles. Statistical models were chosen to extract the keywords and topics because they are corpus-independent and, unlike supervised learning models, do not require training data.
Descriptive Analysis. The descriptive analysis aims to identify high-level bibliometric characteristics of the misinformation literature: the distributions of the 5,586 records across publication years, disciplines, and venues.
Keyword Analysis. Keyword analysis extracts the most important and representative keywords from a large corpus. Statistical and unsupervised-learning models are preferred because they are efficient to compute and do not depend on any corpus or domain. YAKE! outperformed comparable models (i.e., TF.IDF, KP-Miner, RAKE, TextRank, SingleRank, ExpandRank, TopicRank, TopicalPageRank, PositionRank, and MultipartiteRank) on over twenty datasets [17]. YAKE! takes into account casing, position, frequency, context relevancy, and term dispersion, returning a ranking score for each extracted keyword [17]. We concatenated titles and abstracts as input to YAKE!, as they contain the most concise information about the articles. The most common word-phrase lengths are one to three tokens, so we set a maximum n-gram size of three.
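In practice YAKE! is called through its Python package; since the paper gives no code, the stdlib sketch below illustrates only the candidate-generation step that the n-gram setting configures — enumerating one- to three-token phrases — with raw frequency standing in for YAKE!'s richer statistical score:

```python
import re
from collections import Counter

def candidate_ngrams(text: str, max_n: int = 3) -> Counter:
    """Count lowercase 1..max_n token phrases: the candidate set a keyword
    extractor would score (frequency here is a crude stand-in score)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return counts

text = "fake news detection and fake news sharing on social media"
top = candidate_ngrams(text).most_common(5)
```

YAKE! then weighs each candidate by casing, position, and dispersion rather than bare counts, which is why it can surface phrases that are informative but not merely frequent.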
Topic Analysis. Topic analysis, also called topic modeling, is preferred for gaining insight into a large text dataset because it clusters the most representative information into word categories. Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) are the two most widely used algorithms for topic modeling [18]. LSA treats the whole corpus as a single matrix: it attempts to find latent relationships within texts by computing similarity among document vector representations and then using Singular Value Decomposition (SVD) to reduce the matrix dimension. LDA, on the other hand, assumes that each document contains a mixture of topics described by a multinomial distribution over a word vocabulary [18]. Using UMass topic coherence as an evaluation metric, Bellaouar, Bellaouar, and Ghada [19] empirically showed that LDA produced better-quality topics, with higher coherence scores than LSA, on a scientific publication dataset. Therefore, LDA was chosen to extract the latent topics from the misinformation publications we selected. To choose an optimal number of topics, we relied on c_v topic coherence scores, investigating coherence for topic numbers between 20 and 40; this range balances topic interpretability and specificity. With the highest coherence score (0.485) at 30 topics, we clustered the dataset into 30 topics.
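The coherence computation is typically delegated to a library; to make the idea concrete, here is a stdlib sketch of the simpler UMass variant mentioned in the comparison above (the selection in this study used c_v coherence, which is more involved). It scores one topic's top words by document co-occurrence; the model-selection loop would repeat this over candidate topic counts and keep the best-scoring number:

```python
import math

def umass_coherence(top_words, documents):
    """UMass coherence for one topic: sum over ordered word pairs of
    log((D(wi, wj) + 1) / D(wj)), where D counts containing documents.
    Assumes each top word appears in at least one document."""
    docs = [set(doc) for doc in documents]
    def d(*words):
        return sum(all(w in doc for w in words) for doc in docs)
    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            wi, wj = top_words[i], top_words[j]
            score += math.log((d(wi, wj) + 1) / d(wj))
    return score
```

Scores are log-ratios, so values closer to zero indicate a topic whose words co-occur more often; selecting the number of topics amounts to fitting a model for each candidate count and keeping the count whose topics score best on average.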
3.4 Manual Content Analysis
As mentioned above, we selected 202 LIS journal articles for our systematic review. To ensure that those articles are research publications about misinformation, we manually verified them and kept 151 articles. For the purposes of this study, we manually analyzed the full texts of these articles to identify the theories or frameworks used in misinformation research (RQ3) and to extract and summarize the future research directions suggested by the authors of these articles (RQ4).
Coding Process and Codes. We applied an inductive approach to the coding process to answer RQ3 and RQ4. In other words, the coding process aims to identify theories (RQ3) and future directions (RQ4) in the 151 library and information science articles. Because no coding scheme was available, we identified the theories and future directions in the articles and then summarized them to form appropriate categories. The two-level codes (theories or future directions, and their categories) are presented in the Results section. In Sect. 4.3, Table 4 lists the categories of theories in the first column, the frequency count and percentage in the second, and examples of existing theories/models/frameworks in the last column. In Sect. 4.4, Table 5 lists the categories of future directions in the first column and the actual future directions in the second column. One author did the coding for each table, and another author verified the coding results.
4 Results
This section reports the results of our automatic analysis and manual content analysis.
4.1 General Characteristics of Misinformation Literature
We automatically analyzed the following features: distribution of publications, publication fields, and publication venues.
Distribution of Publications. Figure 2 depicts the distribution of the 5,586 misinformation-related journal articles published in 2010–2022. It indicates an upward trend in misinformation research between 2010 and 2021; notably, the number of publications has doubled each year since 2019. Although we do not have complete data for 2022, given this tendency, the count can be forecast to grow further by the end of the year.
Fig. 2. Distribution of publications 2010–2022
Publication Fields. Scraping the field-of-study metadata from Semantic Scholar and merging it with our collected data allows us to understand the interdisciplinary nature of misinformation research. We found 1,278 disciplines that have contributed to this research area. Table 1 lists the top 30 fields of study with the greatest number of published articles. Medicine, followed by Computer science, has the most publications about misinformation, accounting for 27% and 24%, respectively, of the whole data set (n = 5,586). Psychology takes the third largest portion (10%). Among the misinformation publications we selected, the Information science & library science field contributes 182 articles, making it the eighth largest field.
Table 1. Top 30 disciplines studying misinformation.
Field of study Count Field of study Count
Medicine 1508 Science & technology - other topics 99
Computer science 1321 History 93
Psychology 546 Environmental sciences & ecology 83
Communication 419 Public 71
Engineering 250 Environmental & occupational health 71
Political science 226 Chemistry 51
Sociology 186 International relations 50
Information science & library science 182 Operations research & management science 50
Physics 170 Linguistics 46
Mathematics 138 Materials science 45
Government & law 135 Geography 45
Telecommunications 122 Geology 45
Business 109 Arts & humanities - other topics 44
Social sciences - other topics 104 Economics 41
Business & economics 99 Education & educational research 36
Publication Venues. The misinformation-related articles we selected were published in 2,401 journals; the top 20 leading journals are presented in Fig. 3. Physica A: Statistical Mechanics and its Applications, a journal in the field of statistical mechanics, published the most misinformation-related papers (96); IEEE Access, an interdisciplinary journal covering engineering and information technology, published the second largest number. Both journals are well known in the computer and information science community. Further, a majority of the leading journals are in computer and information science (about eight journals) or medicine and health science (about seven journals), affirming the strong growth of this topic in those two areas.
Based on the journal list given by SCImago JR1, we found that 62 of the 2,401 journals in our selected misinformation literature are Library and Information Science journals. Table 2 shows the top 20 LIS journals leading misinformation research. The Publications journal ranks first with 16 articles, followed by Government Information Quarterly, Journal of Information Science, International Journal of Information Management, and JASIST, each with about half that number.
1 https://www.scimagojr.com/.
Fig. 3. Top 20 journals publishing misinformation-related research.
Table 2. Top 20 leading library and information science journals
LIS journal Frequency
Publications 16
Government information quarterly 9
Journal of information science 8
International journal of information management 8
Journal of the association for information science and technology (JASIST) 8
Online information review 7
Profesional de la informacion 6
Library & information science research 5
IEEE transactions on information forensics and security 4
Social science computer review 4
Electronic library 4
Journal of health communication 4
Information communication and society 3
Journal of documentation 3
Reference services review 3
Communications in information literacy 3
4.2 Major Topics and Themes
Key Terms. Using the YAKE! keyword extraction algorithm, we extracted the top 50 key terms from the dataset. Among them, the most frequent words or phrases are "social network spam", "social media network", "social media research", "fake news", "health misinformation", "fake news detection", "rumor spreading", "rumor detection", "fake news sharing" (or "fake news spreading"), "vaccine", "social media analysis", "misinformation effect", "fake news research", and "rumors". These terms indicate aspects of current misinformation research, which include misinformation detection, social media analysis, misinformation effects or impacts, the contexts or environments in which misinformation is spread (i.e., social media), and the domains involved (i.e., medicine and healthcare).
Topics and Themes. Key-term extraction is limited for understanding topics because key terms are at most three-word phrases. We therefore conducted topic analysis using the LDA topic modeling algorithm, which clusters articles with related topics and allows us to identify themes across the topics. Table 3 presents the results of the topic analysis. It contains four columns: the first provides an ID for each topic identified by the system; the second lists the 10 terms that represent each topic, extracted from the titles and abstracts of the 5,586 records; the third lists the topics annotated by the authors based on the terms, which was challenging because annotation is subjective and could be wrong even with 10 terms per topic; the last lists the themes we identified based on the terms and topics. We identified six themes: environment of rising misinformation, impact of misinformation, user research, domain-specific misinformation research, types of misinformation, and fighting misinformation. Several topics detected by the system are labeled "Unknown topic" because we failed to assign an appropriate label to them; we assigned these to a category called "Other."
Even though it is sometimes challenging to interpret the results of topic analysis accurately, the results show that most misinformation is created and spread in Internet environments, including social media such as Facebook, Twitter, Instagram, and email. Noticeably, misinformation also exists in the form of visual advertisements on Instagram. Additionally, we found that cybercrime contributed to the creation and growth of misinformation.
Spam, fake news, and rumors are found to be the major types of misinformation, which is consistent with the aforementioned misinformation definition by Wu and others [9]. Misinformation involves various domains, from controversial social issues, the environment, and medicine to political dilemmas; popular topics of misinformation include scandals, medical treatment, sexual misconduct, the COVID-19 pandemic, abortion, political issues in elections, and vaccination concerns in the community. Approaches to fighting misinformation, including education, healthcare risk management, making scientific evidence visible, and using algorithms to detect misinformation automatically, are also explored in the literature. Furthermore, the misinformation literature studies the impact of misinformation on human memory and the stock market, while behavior studies and online-review studies are significant research areas. However, as a tradeoff of automatic content analysis methods, some word clusters (i.e., Topics 25–30) are not easily interpretable, so we labeled them unknown topics.
Table 3. Topics and themes of LIS misinformation research.
Topic ID Word cluster Annotated topic Themes
1 Social, media, inform,
misinformation, twitter,
content, user, platform,
tweet, analysis
Misinformation on social
media (Twitter)
A. Environment of rising
misinformation
2 Model, network, spread,
rumor, inform,
propagation, social,
dynamic, control, node
Rumor spreading through
social network
3 Cyber, youth,
cyberbullying, aggressive,
self, trait, person,
relationship, answer, dark
Cybercrime
4 Image, visual, Instagram,
panic, advertise, buy,
project, forest, density, tag
Visual advertisements on
Instagram
5 Rumor, market, food,
consume, product, stock,
cost, price, trade, impact
Rumor impacts on the
stock trading market
B. Impact of
Misinformation
6 Misinformation, effect,
memory, participate,
correct, false, inform, test,
study, belief
Effect of misinformation
on memory
7 Troll, study, online, use,
bully, behavior, measure,
analysis, data, indicate
Behavior study about
online bullying and troll
C. User research
8 Review, user, research,
use, base, data, inform,
online, develop, paper
Research on online users’
reviews
9 Couple, lie, pathway,
poison, tolerate, pseudo,
scandal, gate, hotspot, beij
Scandal D. Domain Specific
misinformation research
10 Patient, medical,
treatment, publish,
clinical, case, report,
disorder, article, journal
Medical treatment
publications
11 Video, women, adolescent,
victim, YouTube, gender,
school, score, sexual, year
Victims of sexual abuse
12 Covid, health, pandemic,
study, public, inform,
misinformation, prevent,
people, risk
Misinformation in
COVID-19 pandemic
13 Politics, media, election,
ideology, right, Russian,
campaign, social, speech,
online
Russian-related political
issues in election
14 Fish, surface,
environment, estimate,
field, water, area, land,
catch, region
Environment
15 Vaccine, hesitate, accept,
barrier, community,
uptake, health, immune,
concern, including
Vaccination concerns in
community
16 Disinformation, article,
community, public,
digital, govern, research,
case, state, politics
Political disinformation
17 Rumor, social, abort,
event, attention, inform,
Weibo, microblog, model,
refute
Rumor of abortion
18 Spam, mail, attack,
message, email, system,
page, secure, mobile, rate
Email spam E. Types of
misinformation
19 News, fake, media,
inform, credible, social,
share, fact, study, trust
Fake news on social
media
20 Inform, student, study,
research, educate, literacy,
university, participate,
social, find
Educating F. Fighting
misinformation
21 Inform, community,
manage, risk, care, health,
provide, need, use,
response
Healthcare risk management
22 Detect, spam, propose,
method, base, feature,
model, learn, use,
algorithm
Spam detection algorithm
23 Children, parent, cell,
interview, child, family,
mother, report, study, trial
Interviewing family
24 Science, scientific,
knowledge, policy,
climate, public, evidence,
expert, access, scientist
Scientific evidence
25 Contracept, random,
formula, graph, call,
station, time, nuclear,
protocol, accident
Unknown topic G. Other
26 Game, level, interact,
echo, book, chamber,
liquid, dietary, micro,
smoke
Unknown topic
27 Particle, source, mass,
species, size, single,
aerosol, concentration,
period, observe
Unknown topic
28 Energy, sensor, island,
rout, flood, Christian,
renew, entropy,
cryptocurrency, protocol
Unknown topic
29 Perpetrate, urban, fatal,
Taiwan, domestic, middle,
diplomacy, opioid, Asian,
entry
Unknown topic
30 Comment, signal, blog,
emission, linguistic, film,
coal, humor, annual,
biomass
Unknown topic
4.3 Theories Applied by Library and Information Science Scholars
Our manual content analysis identified more than 40 different theories, models, or perspectives that library and information science scholars have applied to misinformation research. Table 4 lists the categories of the major theories found in the 151 articles.
Table 4. Theories, models, or perspectives applied to misinformation research2
Category Frequency (percentage) Description or sample theories
No theory 59 (39.1%) No specific theories identified, even though the papers provide literature for the concepts being explored
Computational algorithms/models 38 (25.2%) BERT, deep learning models, different types of classifiers, supervised and unsupervised machine learning models
Information credibility 9 (6.0%) Information credibility theory
Information literacy frameworks/models 6 (4.0%) Media literacy key elements, ACRL information literacy framework
Other theories 42 (27.8%) Theories in information seeking, behavior, cognition, social diffusion, deterrence theory, uses and gratifications
No Theory. Fifty-nine (39.1%) of the papers do not mention a specific theory, even though they review concepts or perspectives in the literature for their research purposes. Among them, 3 papers are on bibliometrics and 8 are literature reviews. Some papers applied a grounded theory approach [20, 21], which aims to discover or construct theory from data that is systematically obtained and analyzed using comparative analysis [22]. Some papers focused on understanding the impact of misinformation on different user groups, or the spread of fake news on different media platforms [23–26]. A legal perspective was also used to guide the discussion of misinformation [27–30]. Other approaches to understanding misinformation include systematic review and bibliometrics [2, 31–35].
Computational Algorithms and Models. Computational methods, such as deep learning and machine learning models, have been used in 38 studies. These studies explored the use of machine learning methods [36–51], deep learning methods [52–63], and other computational models or theories [64–66] to analyze social media, email, or other data. The studies aimed to experiment with or identify the most effective models or algorithms for misinformation detection, information classification, or impact understanding. Many of the studies experimented with multiple algorithms for evaluation purposes. It is sometimes challenging to differentiate the different types of computational models.
Information Credibility. There are 9 papers related to information credibility [67–75]. Information credibility theories define related concepts, such as credible information, trust, and belief, and provide approaches to assessing the credibility of information in various formats [73].
2 A complete table of theories/models is available on our GitHub site.
Information Literacy Frameworks/Models. The authors of 6 articles considered information literacy the key to deterring and correcting misinformation, and applied existing information literacy frameworks to guide their research [76–81]. The information literacy framework from the Association of College and Research Libraries (ACRL: https://www.ala.org/acrl/standards/ilframework) has been applied by multiple studies.
Other Theories. Forty-two studies applied theories or models from business, human
behavioral research, psychology, political science, policy, and information seeking. Here
are a few examples: uses and gratifications theory [82,83], fact checking [84], the elaboration
likelihood model [85], Kuhlthau's information seeking behavior and process [86], the technology
adoption model [87], themes of misinformation [88], the Longo health information
model [89], the heuristic-systematic model [90], and the integrated behavioral model [91].
4.4 Important Future Research Directions
We are interested in what scholars consider important future research on misinformation.
Table 5 lists the important future research directions identified by information science
scholars. These directions can be categorized into the following areas:
Further Understanding Misinformation, its Spread, and its Impacts. Authors of about 29
articles proposed studies in this category. Examples include longitudinal
studies [24] and expanding library intervention [20,50]. Others include exploring
the extent of the spread of misinformation on social networks [15,41,43,66,92,93],
the sentiments or reasons behind the spread [52,94], relationships between variables [44,95,
96], and predicting virality [97]. Other authors propose to explore misinformation spread
in other languages [47,98] and other countries [90,99], investigating public relations and
crisis intervention [35,100], the entrepreneurial space [101], perception [71,75,102], and
addressing structural inequalities [103].
Misinformation Detection and Correction. Authors of about 20 articles proposed to
work toward improving misinformation detection, including spam detection [54,55,61];
using natural language processing [48], bots [104], machine learning
[46], probability sampling [77], text extraction [59,105], and modified label training
[106]. Other future work relates to refining previous analyses [57], exploring
systematic modeling and theorization efforts [107], employing classifiers [108], and further
exploration of effective rumor extraction [70,85,109–111].
Policy and Education to Fight Misinformation. Authors of 50 articles proposed
developing policies and procedures for emergency responses [79,87,100,112,113],
professional neutrality [114], privacy [42], and promotion of awareness [115], as well as exploring
ways of exposing players of misinformation [29,60,116–118] and tracking blogs [119].
More studies are needed on understanding the information behavior and limitations of participants
[120,121], exploring instructional methods [14,26,28,31,80,122,123], and
staff development [124,125].
386 H. Nguyen et al.
Conducting More Case Analyses and Studies. Authors of about 23 articles suggested
that more case analyses and studies be conducted for various purposes [2,24,36,44,53,
126–130]. Here we mention just a few.
Theory and Model Development. Authors of about 21 articles proposed that new
models be applied or developed in this area. A few examples include the architecture
model [56], exploring patterns [67], interactivity [72], cross-linkability between biometric
templates in different databases [62], and models for controlling misinformation
[131].
Table 5. Future research directions identified3

| Category | Major topics | Number of articles |
|---|---|---|
| Further understanding misinformation, spread, and its impacts | Health related | 7 |
| | Media, youths, information literacy, & others | 8 |
| | Spread: extent, form, sentiments, & perception | 15 |
| Misinformation detection and correction | Text & rumor extraction | 8 |
| | Misinformation correction with AI/machine learning | 12 |
| Policy and education to fight misinformation | Developing policies and educational guidelines | 31 |
| | Intervention: library & crises | 12 |
| | Information behavior to overcome limitations of participants | 7 |
| Conducting more case analyses and studies | Empirical, mixed methods, longitudinal & heuristic studies | 23 |
| Theory and model development | Proposing new models related to architecture, patterns, interactivity | 8 |
| | Enhancing generalizability: update, scope, reliability | 13 |
| Not stated | | 13 |
5 Discussions
5.1 Answers to Research Questions
Characteristics of Existing Literature (2010–2022). We explored the literature from
three perspectives: distribution of publications over time, disciplines involved, and publication
venues. Our automatic analysis found that misinformation research has gained
popularity since 2017, with more than 200 journal articles published in this area. Moreover,
the number of publications has doubled each year since 2019. In 2021, we found
more than 1,600 journal articles on misinformation. The fields that have studied misinformation
are numerous, spanning from medicine, computer science, and psychology
to political science, information science, and economics. More than 2,400 journals have
published articles on misinformation.
3 A complete table of future research directions is available on our GitHub.
Major Topics Explored by Researchers from Different Disciplines. Our key term
analysis and topic analysis of the 5,586 journal articles indicated that extensive research
had been conducted to understand misinformation on social media, misinformation
spreading, user behavior related to misinformation, and misinformation in different
areas and its impact on different types of people. As a result, six categories of
misinformation research, with about 24 topics, were identified and listed in Table 2.
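The term-level part of such an analysis can be sketched in a few lines. The toy example below ranks each abstract's terms by TF-IDF, so terms shared by every document in a misinformation corpus (here, "social" and "media") score zero and distinctive terms surface. The three abstracts are invented, and this standard-library sketch stands in for the full key-term pipeline rather than reproducing the actual analysis code.

```python
import math
from collections import Counter

def tf_idf_terms(documents, top_n=3):
    """Rank each document's terms by TF-IDF against the whole collection."""
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    ranked = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        }
        ranked.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
    return ranked

abstracts = [
    "misinformation spreads rapidly on social media platforms",
    "detecting fake news on social media with machine learning",
    "health misinformation and vaccine hesitancy on social media",
]
top_terms = tf_idf_terms(abstracts)
print(top_terms[0])  # distinctive terms of the first abstract
```

Terms appearing in every document get an inverse document frequency of log(1) = 0, which is exactly how corpus-wide boilerplate terms drop out of the key-term lists.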
Theories that Information Scientists Have Applied to Misinformation Research.
Our content analysis of the 151 journal articles in Library and Information Studies (LIS)
identified more than 40 theories or models that have been applied by information scholars.
These theories were used to guide the development of new theories/algorithms for
misinformation understanding/detection, to explore misinformation spread, to assess
misinformation's impact on people, or to improve people's capabilities for evaluating
information.
Important Future Research Topics or Directions. We also analyzed the future
research directions proposed or suggested by the authors of the 151 LIS journal articles.
Five themes were identified for future research on misinformation: further
understanding of misinformation, its spread, and its impacts; misinformation detection and
correction; policy and education to fight misinformation; conducting more case analyses
and studies; and theory and model development.
5.2 Significance and Limitations of the Study
This study provides a broad picture of misinformation research, focusing on the
current status of the field, including the characteristics of the existing literature, the
topics explored by scholars, the theories being applied, and authors' suggestions for
future directions. As fighting misinformation is an ongoing topic in information studies,
the results of this study help researchers develop and plan their research ideas and projects.
This study also provides evidence to information schools that related
curricula on understanding and fighting misinformation need to be developed to educate future information
professionals.
This study has several limitations. First, we had to limit our data set to journal articles from the
last 12 years to complete the study in a manageable timeframe. Many papers related
to misinformation may be published in other venues, such as books and conference
proceedings. However, as shown in the initial exploratory data analysis of Stage II,
journal articles account for nearly 70% of the publications, compared to conference papers,
books, and others, providing a reasonable data sample. A second limitation is due to
data incompleteness in the existing databases: we had to drop a significant number of
records (almost 41% of the 9,502 journal articles) whose metadata needed for our meta-analysis
were missing. Therefore, our implications and conclusions might not generalize
to the whole scientific community. The third limitation is that, due to time constraints, our
content analysis focused only on theories and future directions in LIS, the two areas
that most interest the authors. The investigation could be made more extensive and deeper, for
example, by summarizing different research designs or identifying new theories or models
originally targeting and curbing misinformation.
6 Summary and Future Research
This study conducted a systematic literature review to understand misinformation research
topics, the theories applied, and future directions as proposed by information scholars.
Through a systematic approach, we selected 5,586 articles for automatic analysis and
151 articles in library and information science for manual content analysis. Our analysis
discovered that misinformation research has attracted broad interest from many
disciplines and that extensive research has been conducted in LIS.
To address the limitations of this study, we plan to continue our analysis using
the same dataset to build an ontology of misinformation, then extend the dataset to
expand the ontology. Finally, guided by the ontology, we will explore specific topics, such
as developing models, strategies, curricula, and guidelines to help organizations and the
public fight misinformation. Python code and our analysis results are publicly available
on our GitHub.4
References
1. Lim, S.: Academic library guides for tackling fake news: a content analysis. J. Acad.
Librariansh. 46(5), 102195 (2020)
2. Revez, J., Corujo, L.: Librarians against fake news: a systematic literature review of library
practices (Jan 2018–Sept 2020). J. Acad. Librariansh. 47(2), 102304 (2021). https://doi.org/
10.1016/j.acalib.2020.102304
3. Fetzer, J.H.: Information, misinformation, and disinformation. Minds Mach. 14(2) (2004)
4. Karlova, N.A., Lee, J.H.: Notes from the underground city of disinformation: a conceptual
investigation. Proc. Am. Soc. Inf. Sci. Technol. 48(1), 1–9 (2011)
5. Spinney, L.: In Congo, fighting a virus and a groundswell of fake news. Science 363(6424),
213–214 (2019). https://doi.org/10.1126/science.363.6424.213
6. Smith, J.H., Bastian, N.D.: A ranked solution for social media fact checking using epidemic
spread modeling. Inf. Sci. 589, 550–563 (2022)
7. Allcott, H., Gentzkow, M.: Social media and fake news in the 2016 election. J. Econ. Perspect.
31(2), 211–236 (2017)
8. Wang, Y., McKee, M., Torbica, A., Stuckler, D.: Systematic literature review on the spread
of health-related misinformation on social media. Soc. Sci. Med. 240, 112552 (2019)
9. Wu, L., Morstatter, F., Carley, K.M., Liu, H.: Misinformation in social media: definition,
manipulation, and detection. ACM SIGKDD Explor. Newsl. 21(2), 80–90 (2019)
10. Zrnec, A., Poženel, M., Lavbič, D.: Users’ ability to perceive misinformation: an
information quality assessment approach. Inf. Process. Manage. 59(1), 102739 (2022)
4 https://github.com/HuyenNguyenHelen/Misinformation.
11. Lazer, D.M., et al.: The science of fake news. Science 359(6380), 1094–1096 (2018)
12. Qinyu, E., Sakura, O., Li, G.: Mapping the field of misinformation correction and its effects:
a review of four decades of research. Soc. Sci. Inf. 60(4), 522–547 (2021). https://doi.org/
10.1177/05390184211053759
13. Gough, D., Thomas, J.: Systematic reviews of research in education: aims, myths and
multiple methods. Rev. Educ. 4(1), 84–102 (2016)
14. Rubin, V.L.: Disinformation and misinformation triangle: a conceptual model for “fake
news” epidemic, causal factors and interventions. J. Doc. 75(5), 1013–1034 (2019)
15. Hopp, T.: Fake news self-efficacy, fake news identification, and content sharing on Facebook.
J. Inform. Tech. Polit. 19(2), 229–252 (2022)
16. Chen, H., Chen, J., Nguyen, H.: Demystifying covid-19 publications: institutions, journals,
concepts, and topics. J. Med. Libr. Assoc.: JMLA 109(3), 395 (2021)
17. Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C., Jatowt, A.: Yake! keyword
extraction from single documents using multiple local features. Inf. Sci. 509, 257–289 (2020)
18. Alghamdi, R., Alfalqi, K.: A survey of topic modeling in text mining. Int. J. Adv. Comput.
Sci. Appl. (IJACSA) 6(1) (2015)
19. Bellaouar, S., Bellaouar, M.M., Ghada, I.E.: Topic modeling: comparison of LSA and LDA
on scientific publications. In: 2021 4th International Conference on Data Storage and Data
Engineering, pp. 59–64 (2021)
20. Young, J.C., Boyd, B., Yefimova, K., Wedlake, S., Coward, C., Hapel, R.: The role of libraries
in misinformation programming: a research agenda. J. Librariansh. Inf. Sci. 53(4), 539–550
(2021)
21. Paris, B., Carmien, K., Marshall, M.: “We want to do more, but…”: new jersey public library
approaches to misinformation. Libr. Inf. Sci. Res. 44(2), 101157 (2022)
22. Chun Tie, Y., Birks, M., Francis, K.: Grounded theory research: a design framework for
novice researchers. SAGE Open Med. 7, 2050312118822927 (2019)
23. Bianchini, C., Truccolo, I., Bidoli, E., Group, C.I.Q.A., Mazzocut, M.: Avoiding misleading
information: a study of complementary medicine online information for cancer patients.
Libr. Inf. Sci. Res. 41(1), 67–77 (2019)
24. Blanco-Herrero, D., Amores, J.J., Sánchez-Holgado, P.: Citizen perceptions of fake news
in Spain: socioeconomic, demographic, and ideological differences. Publications 9(3), 35
(2021)
25. El Rayess, M., Chebl, C., Mhanna, J., Hage, R.-M.: Fake news judgement: the case of
undergraduate students at Notre Dame University-Louaize, Lebanon. Ref. Serv. Rev. 46(1),
146–149 (2018)
26. Johnston, N.: Living in the world of fake news: High school students’ evaluation of
information from social media sites. J. Aust. Libr. Inf. Assoc. 69(4), 430–450 (2020)
27. Shankar, R., Ahmad, T.: Information technology laws: mapping the evolution and impact of
social media regulation in India. DESIDOC J. Libr. Inf. Technol. 41(4) (2021)
28. Christensen, B.: Cyber state capacity: A model of authoritarian durability, ICTs, and
emerging media. Gov. Inf. Q. 36(3), 460–468 (2019)
29. Kigerl, A.C.: Evaluation of the can spam act: testing deterrence and other influences of
e-mail spammer legal compliance over time. Soc. Sci. Comput. Rev. 33(4), 440–458 (2015)
30. Gaozhao, D.: Flagging fake news on social media: an experimental study of media
consumers’ identification of fake news. Gov. Inf. Q. 38(3), 101591 (2021)
31. Baber, H., Fanea-Ivanovici, M., Lee, Y.-T., Tinmaz, H.: A bibliometric analysis of digi-
tal literacy research and emerging themes pre-during covid-19 pandemic. Inf. Learn. Sci.
123(3/4), 214–232 (2022). https://doi.org/10.1108/ILS-10-2021-0090
32. Patra, R.K., Pandey, N., Sudarsan, D.: Bibliometric analysis of fake news indexed in web of
science and scopus (2001–2020). Global Knowledge, Memory and Communication (ahead-
of-print) (2022)
33. Adams, J.: Information and misinformation in bibliometric time-trend analysis. J. Informet.
12(4), 1063–1071 (2018)
34. Janmohamed, K., et al.: Interventions to mitigate covid-19 misinformation: a systematic
review and meta-analysis. J. Health Commun. 26(12), 846–857 (2021)
35. Awan, T.M., Aziz, M., Sharif, A., Ch, T.R., Jasam, T., Alvi, Y.: Fake news during the
pandemic times: a systematic literature review using prisma. Open Inf. Sci. 6(1), 49–60
(2022)
36. Yılmazel, I.B., Arslan, A.: An intrinsic evaluation of the waterloo spam rankings of the
clueweb09 and clueweb12 datasets. J. Inf. Sci. 47(1), 41–57 (2021)
37. Sedhai, S., Sun, A.: An analysis of 14 million tweets on hashtag-oriented spamming. J. Am.
Soc. Inf. Sci. 68(7), 1638–1651 (2017)
38. Zamir, A., Khan, H.U., Mehmood, W., Iqbal, T., Akram, A.U.: A feature-centric spam email
detection model using diverse supervised machine learning algorithms. Electron. Libr. 38(3),
633–657 (2020). https://doi.org/10.1108/EL-07-2019-0181
39. Trivedi, S.K., Dey, S.: A novel committee selection mechanism for combining classifiers to
detect unsolicited emails. VINE J. Inf. Knowl. Manag. Syst. 46(4), 524–548 (2016)
40. Choi, E.B., Kim, J., Jeong, D., Park, E., del Pobil, A.P.: Detecting agro: korean trolling and
clickbaiting behaviour in online environments. J. Inf. Sci. 01655515221074325 (2022)
41. Shrivas, A.K., Dewangan, A.K., Ghosh, S., Singh, D.: Development of proposed ensemble
model for spam e-mail classification. Inf. Technol. Control 50(3) (2021)
42. Resende, A., Railsback, D., Dowsley, R., Nascimento, A.C., Aranha, D.F.: Fast privacy-
preserving text classification based on secure multiparty computation. IEEE Trans. Inf.
Forensics Secur. 17, 428–442 (2022)
43. Del-Fresno-García, Mi., Manfredi-Sánchez, J.-L.: Politics, hackers and partisan networking.
Misinformation, national utility and free election in the Catalan independence movement.
Prof. Inf. 27(6), 1225 (2018). https://doi.org/10.3145/epi.2018.nov.06
44. Al-Zaman, M.S.: A thematic analysis of misinformation in India during the covid-19
pandemic. Int. Inf. Libr. Rev. 54(2), 128–138 (2022)
45. Velichety, S., Shrivastava, U.: Quantifying the impacts of online fake news on the equity
value of social media platforms–evidence from twitter. Int. J. Inf. Manage. 64, 102474 (2022)
46. Lim, L.P., Singh, M.M.: Resolving the imbalance issue in short messaging service spam
dataset using cost-sensitive techniques. J. Inf. Secur. Appl. 54, 102558 (2020)
47. Al-Zoubi, A., Alqatawna, J., Faris, H., Hassonah, M.A.: Spam profiles detection on social
networks using computational intelligence methods: the effect of the lingual context. J. Inf.
Sci. 47(1), 58–81 (2021)
48. Ma, J., Luo, Y.: The classification of rumour standpoints in online social network based on
combinatorial classifiers. J. Inf. Sci. 46(2), 191–204 (2020)
49. Kerr, E., Lee, C.A.L.: Trolls maintained: baiting technological infrastructures of informa-
tional justice. Inf. Commun. Soc. 24(1), 1–18 (2021)
50. Bringula, R.P., Catacutan-Bangit, A.E., Garcia, M.B., Gonzales, J.P.S., Valderama, A.M.C.:
“Who is gullible to political disinformation?”: predicting susceptibility of university students
to fake news. J. Inform. Tech. Polit. 19(2), 165–179 (2022)
51. Shan, G., Zhao, B., Clavin, J.R., Zhang, H., Duan, S.: Poligraph: intrusion-tolerant and
distributed fake news detection system. IEEE Trans. Inf. Forensics Secur. 17, 28–41 (2021)
52. Antenore, M., Camacho Rodriguez, J.M., Panizzi, E.: A comparative study of bot detec-
tion techniques with an application in twitter covid-19 discourse. Soc. Sci. Comput. Rev.
08944393211073733 (2022)
53. Zeng, J., Chan, C.-h.: A cross-national diagnosis of infodemics: comparing the topical and
temporal features of misinformation around covid-19 in China, India, the US, Germany and
France. Online Inf. Rev. (2021)
54. Lu, H.-Y., Yang, J., Fang, W., Song, X., Wang, C.: A deep neural networks-based fusion
model for covid-19 rumor detection from online social media. Data Technol. Appl. 56(5),
806–824 (2022). https://doi.org/10.1108/DTA-06-2021-0160
55. Rastogi, A., Mehrotra, M., Ali, S.S.: Effective opinion spam detection: a study on review
metadata versus content. J. Data Inf. Sci. 5(2), 76–110 (2020)
56. Luo, Y., Ma, J., Yeo, C.K.: Exploiting user network topology and comment semantic for
accurate rumour stance recognition on social media. J. Inf. Sci. 48(5), 660–675 (2022)
57. Chen, X.K., Na, J.-C., Tan, L.K.-W., Chong, M., Choy, M.: Exploring how online responses
change in response to debunking messages about covid-19 on WhatsApp. Online Inf. Rev.
46(6), 1184–1204 (2022). https://doi.org/10.1108/OIR-08-2021-0422
58. Aiwan, F., Zhaofeng, Y.: Image spam filtering using convolutional neural networks. Pers.
Ubiquit. Comput. 22(5–6), 1029–1037 (2018). https://doi.org/10.1007/s00779-018-1168-8
59. Imam, N.H., Vassilakis, V.G., Kolovos, D.: OCR post-correction for detecting adversarial text
images. J. Inf. Secur. Appl. 66, 103170 (2022)
60. Wu, H., Zhou, J., Tian, J., Liu, J., Qiao, Y.: Robust image forgery detection against
transmission over online social networks. IEEE Trans. Inf. Forensics Secur. 17, 443–456
(2022)
61. Wang, H.C., Chiang, Y.H., Lin, S.T.: Spam detection and high-quality features to analyse
question–answer pairs. Electron. Libr. 38(5/6), 1013–1033 (2020). https://doi.org/10.1108/
EL-05-2020-0120
62. Skoric, B., de Vreede, N.: The spammed code offset method. IEEE Trans. Inf. Forensics
Secur. 9(5), 875–884 (2014)
63. Bunker, D.: Who do you trust? the digital destruction of shared situational awareness and
the covid-19 infodemic. Int. J. Inf. Manage. 55, 102201 (2020)
64. Zhuang, X., Zhu, Y., Chang, C.-C., Peng, Q., Khurshid, F.: A unified score propagation
model for web spam demotion algorithm. Inf. Retr. J. 20(6), 547–574 (2017). https://doi.
org/10.1007/s10791-017-9307-9
65. Kiwi, M., Caro, C.T.: Fifo queues are bad for rumor spreading. IEEE Trans. Inf. Theory
63(2), 1159–1166 (2016)
66. Colladon, A.F., Gloor, P.A.: Measuring the impact of spammers on e-mail and twitter
networks. Int. J. Inf. Manage. 48, 254–262 (2019)
67. Giachanou, A., Rosso, P., Crestani, F.: The impact of emotional signals on credibility
assessment. J. Am. Soc. Inf. Sci. 72(9), 1117–1132 (2021)
68. Masip, P., Suau, J., Ruiz-Caballero, C.: Perceptions on media and disinformation: Ideology
and polarization in the Spanish media system. Prof. Inf. 29(5) (2020)
69. Zhang, Z., Zhang, Z., Li, H.: Predictors of the authenticity of internet health rumours. Health
Info. Libr. J. 32(3), 195–205 (2015)
70. Herrero-Gutiérrez, F.-J., Urchaga-Litago, J.-D.: The importance of rumors in the Spanish
sports press: an analysis of news about signings appearing in the newspapers Marca, As.
Mundo Deportivo And Sport. Publications 9(1), 9 (2021)
71. Baptista, J.P., Correia, E., Gradim, A., Piñeiro-Naval, V.: The influence of political ideology
on fake news belief: the Portuguese case. Publications 9(2), 23 (2021). https://doi.org/10.
3390/publications9020023
72. Montesi, M.: Understanding fake news during the covid-19 health crisis from the perspective
of information behaviour: The case of Spain. J. Librariansh. Inf. Sci. 53(3), 454–465 (2021)
73. Savolainen, R.: Assessing the credibility of covid-19 vaccine mis/disinformation in online
discussion. J. Inf. Sci. 01655515211040653 (2021)
74. Charbonneau, D.H., Vardell, E.: The impact of covid-19 on reference services: a national
survey of academic health sciences librarians. J. Med. Libr. Assoc.: JMLA 110(1), 56 (2022)
75. Moreno, A., Tench, R., Verhoeven, P.: Trust in public relations in the age of mistrusted media:
a European perspective. Publications 9(1) (2021)
76. Faix, A., Fyn, A.: Framing fake news: Misinformation and the ACRL framework. portal:
Libraries and the Academy 20(3), 495–508 (2020). https://doi.org/10.1353/pla.2020.0027
77. Igbinovia, M.O., Okuonghae, O., Adebayo, J.O.: Information literacy competence in cur-
tailing fake news about the covid-19 pandemic among undergraduates in Nigeria. Ref. Serv.
Rev. 49(1), 3–18 (2020). https://doi.org/10.1108/RSR-06-2020-0037
78. Perry, H.B.: Understanding financial conflict of interest: implications for information literacy
instruction. Commun. Inf. Lit. 12(2), 215–225 (2018)
79. Haggar, E.: Fighting fake news: exploring George Orwell’s relationship to information
literacy. J. Doc. 76(5), 961–979 (2020)
80. Krutkowski, S., Taylor-Harman, S., Gupta, K.: De-biasing on university campuses in the age
of misinformation. Ref. Serv. Rev. 48(1), 113–128 (2019). https://doi.org/10.1108/RSR-10-
2019-0075
81. Pérez-Escoda, A., Pedrero-Esteban, L.M., Rubio-Romero, J., Jiménez-Narros, C.: Fake news
reaching young people on social networks: distrust challenging media literacy. Publications
9(2), 24 (2021)
82. Lin, T.-C., Huang, S.-L., Liao, W.-X.: Examining the antecedents of everyday rumor
retransmission. Inf. Technol. People (2021)
83. Sampat, B., Raj, S.: Fake or real news? understanding the gratifications and personality traits
of individuals sharing fake news on social media platforms. Aslib J. Inf. Manag. (2022)
84. Juneström, A.: An emerging genre of contemporary fact-checking. J. Doc. (2020)
85. Pal, A., Banerjee, S.: Internet users beware, you follow online health rumors (more than
counter-rumors) irrespective of risk propensity and prior endorsement. Inf. Technol. People
(2020)
86. Evanson, C., Sponsel, J.: From syndication to misinformation: how undergraduate students
engage with and evaluate digital news. Commun. Inf. Lit. 13(2), 228–250 (2019)
87. Simon, T., Goldberg, A., Adini, B.: Socializing in emergencies—a review of the use of social
media in emergency situations. Int. J. Inf. Manage. 35(5), 609–619 (2015)
88. Al-Zaman, M.S.: Prevalence and source analysis of covid-19 misinformation in 138
countries. IFLA J. 48(1), 189–204 (2022)
89. Ahmadinia, H., Eriksson-Backa, K., Nikou, S.: Health information seeking behaviour during
exceptional times: a case study of Persian-speaking minorities in Finland. Libr. Inf. Sci. Res.
44(2), 101156 (2022)
90. Sharma, A., Kapoor, P.S.: Message sharing and verification behaviour on social media during
the covid-19 pandemic: a study in the context of India and the USA. Online Inf. Rev. 46(1),
22–39 (2021)
91. Dhawan, D., Bekalu, M., Pinnamaneni, R., McCloud, R., Viswanath, K.: Covid-19 news and
misinformation: do they matter for public health prevention? J. Health Commun. 26(11),
799–808 (2021)
92. Fichman, P., Sanfilippo, M.R.: The bad boys and girls of cyberspace: how gender and context
impact perception of and reaction to trolling. Soc. Sci. Comput. Rev. 33(2), 163–180 (2015)
93. Wang, X., Zhang, M., Fan, W., Zhao, K.: Understanding the spread of covid-19 misinfor-
mation on social media: the effects of topics and a political leader’s nudge. J. Am. Soc. Inf.
Sci. 73(5), 726–737 (2022)
94. López-Marcos, C., Vicente-Fernández, P.: Fact checkers facing fake news and disinformation
in the digital age: a comparative analysis between Spain and United Kingdom. Publications
9(3), 36 (2021). https://doi.org/10.3390/publications9030036
95. Oliphant, T.: Emerging (information) realities and epistemic injustice. J. Am. Soc. Inf. Sci.
72(8), 951–962 (2021)
96. Sanfilippo, M., Yang, S., Fichman, P.: Trolling here, there, and everywhere: perceptions of
trolling behaviors in context. J. Am. Soc. Inf. Sci. 68(10), 2313–2327 (2017)
97. King, K.K., Wang, B.: Diffusion of real versus misinformation during a crisis event: a big
data-driven approach. Int. J. Inf. Manage. 102390 (2021)
98. Barakat, K.A., Dabbous, A., Tarhini, A.: An empirical approach to understanding users’
fake news identification on social media. Online Inf. Rev. (2021)
99. Stone, M., Aravopoulou, E., Evans, G., Aldhaen, E., Parnell, B.D.: From information mis-
management to misinformation–the dark side of information management. Bottom Line
(2018)
100. Yu, W., Chen, N., Chen, J.: Characterizing Chinese online public opinions towards the
covid-19 recovery policy. Electron. Libr. (2022)
101. Radu, R., Kettemann, M.C., Meyer, T., Shahin, J.: Normfare: norm entrepreneurship in
internet governance. Telecommun. Policy 45(6), 102148 (2021)
102. Tan, W.-K., Hsu, C.Y.: The application of emotions, sharing motivations, and psychological
distance in examining the intention to share covid-19-related fake news. Online Inf. Rev.
(ahead-of-print) (2022)
103. Soler, J., Cooper, A.: Unexpected emails to submit your work: spam or legitimate offers?
the implications for novice English l2 writers. Publications 7(1), 7 (2019)
104. Nonnecke, B., et al.: Harass, mislead, & polarize: an analysis of twitter political bots’ tactics
in targeting the immigration debate before the 2018 us midterm election. J. Inf. Technol.
Politics 1–12 (2021)
105. Esteban-Navarro, M.-Á., Nogales-Bocio, A.-I., García-Madurga, M.-Á., Morte-Nadal, T.:
Spanish fact-checking services: an approach to their business models. Publications 9(3), 38
(2021). https://doi.org/10.3390/publications9030038
106. Mertoğlu, U., Genç, B.: Automated fake news detection in the age of digital libraries. Inf.
Technol. Libr. 39(4) (2020)
107. Kwon, K.H., Rao, H.R.: Cyber-rumor sharing under a homeland security threat in the context
of government internet surveillance: the case of south-north Korea conflict. Gov. Inf. Q.
34(2), 307–316 (2017)
108. Lyu, H.-S.: Internet policy in Korea: A preliminary framework for assigning moral and legal
responsibility to agents in internet activities. Gov. Inf. Q. 29(3), 394–402 (2012)
109. Deng, S., Fu, S., Liu, Y., Li, H.: Modelling users’ trust in online health rumours: an
experiment-based study in China (2021)
110. Luo, Y., Ma, J., Yeo, C.K.: Identification of rumour stances by considering network topology
and social media comments. J. Inf. Sci. 48(1), 118–130 (2022)
111. Wang, P., Yixia, H., Li, Q., Yang, H.: Trust mechanisms underlying the self-efficacy-
rumour use relationship. Electron. Libr. 39(2), 373–387 (2021). https://doi.org/10.1108/
EL-12-2020-0332
112. Elbanna, A., Bunker, D., Levine, L., Sleigh, A.: Emergency management in the changing
world of social media: framing the research agenda with the stakeholders through engaged
scholarship. Int. J. Inf. Manage. 47, 112–120 (2019)
113. Lor, P., Wiles, B., Britz, J.: Re-thinking information ethics: truth, conspiracy theories, and
librarians in the covid-19 era. Libri 71(1), 1–14 (2021)
114. Froehlich, T.: Some thoughts evoked by Peter Lor, Bradley Wiles, and Johannes Britz, “re-
thinking information ethics: Truth, conspiracy theories, and librarians in the covid-19 era”,
in Libri, March 2021. Libri 71(3), 219–225 (2021)
115. Farfán, J., Mazo, M.E.: Disinformation and responsibility in young people in Spain during
the covid-19 era. Publications 9(3), 40 (2021). https://doi.org/10.3390/publications9030040
116. Cheng, J.W., Mitomo, H., Kamplean, A., Seo, Y.: Lesser evil? public opinion on regulating
fake news in Japan, South Korea, and Thailand – a three-country comparison. Telecommun.
Policy 45(9), 102185 (2021)
117. Herasimenka, A., Bright, J., Knuutila, A., Howard, P.N.: Misinformation and professional
news on largely unmoderated platforms: the case of telegram. J. Inf. Technol. Politics 1–15
(2022)
118. Xiao, X., Su, Y.: Integrating reasoned action approach and message sidedness in the era of
misinformation: the case of HPV vaccination promotion. J. Health Commun. 26(6), 371–380
(2021)
119. Oguz, F., Holt, M.: Library blogs and user participation: a survey about comment spam in
library blogs. Library Hi Tech 29(1), 173–188 (2011)
120. Flores-Saviaga, C., Savage, S.: Fighting disaster misinformation in Latin America: the #19s
Mexican earthquake case study. Pers. Ubiquit. Comput. 25(2), 353–373 (2021)
121. Muriel-Torrado, E., Pereira, D.B.: Correlations between the concepts of disinformation and
Fogg’s behavior model. Transinformação 32 (2020)
122. Elmwood, V.: The journalistic approach: evaluating web sources in an age of mass
disinformation. Commun. Inf. Lit. 14(2), 269–286 (2020)
123. Vamanu, I., Zak, E.: Information source and content: articulating two key concepts for
information evaluation. Inf. Learn. Sci. (2022)
124. LaPierre, S.S., Kitzie, V.: “lots of questions about ‘fake news’”: How public libraries have
addressed media literacy, 2016–2018. Public Libr. Q. 38(4), 428–452 (2019)
125. de Vicente Domínguez, A.M., Beriain Bañares, A., Sierra Sánchez, J.: Young Spanish
adults and disinformation: do they identify and spread fake news and are they literate in
it? Publications 9(1), 2 (2021)
126. Agarwal, N.K., Alsaeedi, F.: Creation, dissemination and mitigation: toward a disinformation
behavior framework and model. Aslib J. Inf. Manag. (2021)
127. Cano-Orón, L., Calvo, D., Llorca-Abad, G., Mestre-Pérez, R.: Media crisis and disinforma-
tion: the participation of digital newspapers in the dissemination of a denialist hoax. Prof.
Inf. 30(4) (2021)
128. Jane, E.A.: Flaming? what flaming? the pitfalls and potentials of researching online hostility.
Ethics Inf. Technol. 17(1), 65–87 (2015)
129. Patra, R.K., Pandey, N.: Disinformation on novel coronavirus (covid19): a content analysis
of news published on fact-checking sites in India. DESIDOC J. Libr. Inf. Technol. 41(4)
(2021)
130. Sun, L.H., Fichman, P.: The collective trolling lifecycle. J. Am. Soc. Inf. Sci. 71(7), 770–783
(2020)
131. Chipidza, W., Krewson, C., Gatto, N., Akbaripourdibazar, E., Gwanzura, T.: Ideological
variation in preferred content and source credibility on reddit during the covid-19 pandemic.
Big Data Soc. 9(1), 20539517221076490 (2022)