Figure - available from: Social Network Analysis and Mining
The percentage of articles (pie chart) with respect to four dimensions: rumor, fake news, misinformation, and hoax

Source publication
Article
Full-text available
This work presents a review of the detection of false information spread across online content and its role in decision making. The authenticity of information is an emerging issue that affects society and individuals and has a negative impact on people’s decision-making capabilities. The purpose is to understand how different techniques can be used to ad...

Similar publications

Article
Full-text available
Many businesses use social media networks to deliver different services, connect with clients, and collect information about individuals' thoughts and views. Sentiment analysis is a machine learning technique that detects polarities, such as positive or negative sentiment, within text, full documents, paragraphs, lines, or subsecti...
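As a rough, hypothetical sketch of the kind of polarity classifier described above (not code from the cited work; the texts, labels, and model choice are invented for illustration), a TF-IDF representation combined with logistic regression is enough to show the idea:

```python
# Minimal sentiment-polarity sketch (illustrative only; toy data invented here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = positive sentiment, 0 = negative sentiment.
texts = [
    "Great service, I love this product",
    "Fast delivery and friendly support",
    "Terrible experience, would not recommend",
    "The app keeps crashing and support ignores me",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each text into a sparse feature vector; logistic regression
# then learns a linear decision boundary between the two polarities.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["fast and friendly delivery, great product"]))  # likely [1] (positive)
print(model.predict(["terrible app, it keeps crashing"]))            # likely [0] (negative)
```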

Citations

... Management strategies are aimed at developing countermeasures, such as optimizing how corrections are presented or educating audiences on identifying false information [Fernandez and Alani, 2018]; data and methodology limitations [Zhang and Ghorbani, 2020; Lozano et al., 2020; Al-Sarem et al., 2019]; emphasis on technical solutions [Fernandez and Alani, 2018]; limited engagement with diverse content themes [Habib et al., 2019]; ...
... Our investigation into the taxonomy of digital false information uncovered 24 distinct terms employed to categorize digital false information, each with nuanced variations, as illustrated in Figure 5. For instance, the term "fake news" is variably interpreted across studies, sometimes referring to satirical content [Khan et al., 2019], and other times to hoaxes [Ishida and Kuraya, 2018]. This variability and lack of clear definitions challenge the field's progress by obstructing the straightforward comparison and integration of research findings. ...
... A notable root cause of the consensus challenge is identified in the foundational literature: unverified information [Habib et al., 2019; Lee and Choi, 2018; Patel et al., 2017] that can be true or false [Rana et al., 2019; Buntain and Golbeck, 2017]; false information [Qin et al., 2018; Zhang et al., 2015]; false propaganda [Tan et al., 2019]. ...
Preprint
Full-text available
This paper presents a systematic literature review in Computer Science that provides an overview of the initiatives related to digital misinformation. This is an exploratory study that covers research from 1993 to 2020, focusing on the investigation of the phenomenon of misinformation. The review consists of 788 studies from the SCOPUS, IEEE, and ACM digital libraries, synthesizing the primary research directions and sociotechnical challenges. These challenges are classified into Physical, Empirical, Syntactic, Semantic, Pragmatic, and Social dimensions, drawing from Organizational Semiotics. The mapping identifies issues related to the concept of misinformation, highlights deficiencies in mitigation strategies, discusses challenges in approaching stakeholders, and unveils various sociotechnical aspects relevant to understanding and mitigating the harmful effects of digital misinformation. As contributions, this study presents a novel categorization of mitigation strategies and a sociotechnical taxonomy for classifying types of false information, and elaborates on the interrelation of sociotechnical aspects and their impacts.
... Fake news has compromised media trust, leaving readers perplexed. Based on the analysis of the literature, it appears that fake news has been responsible for numerous real-time disasters and is detrimental to the economy [1], health [2], political stability [3], and journalism in general [4]. Manual interventions are ineffective at curbing fake news dissemination due to high-speed data distribution [5]. ...
... Although studies discussing this issue and providing solutions to curb fake news are still in their early stages, they are steadily increasing. However, this requires exploring various directions in research along with further development of fake news detection models [4,6]. Previous research has successfully attempted to identify fake news in social networks through diverse methods; nonetheless, they still face certain limitations. ...
... It further uses neural network models to analyze large amounts of data and extract latent features [127]. Moreover, DL algorithms have been found to outperform ML algorithms [130,131] and have provided remarkable advances in image analysis and text classification, thanks to the ability of deep neural networks to extract features efficiently and learn successfully [4,23], in addition to a high ability to capture complex patterns [4,132]. ...
Article
Full-text available
Currently, social networks have become the main source for acquiring news about current global affairs. However, fake news appears and spreads on social media daily. This disinformation has a negative influence on several domains, such as politics, the economy, and health, and it further undermines societal stability. Several studies have provided effective models for detecting fake news in social networks through a variety of methods; however, they have limitations. Furthermore, since this is a critical field, the accuracy of the detection models has been found to be notably insufficient. Although many review articles have addressed the repercussions of fake news, most have focused on specific and recurring aspects of fake news detection models. For example, the majority of reviews have primarily focused on dividing the datasets, features, and classifiers used in this field by type. The limitations of the datasets, their features, how these features are fused, and the impact of all these factors on detection models have not been investigated, especially since most detection models are based on a supervised learning approach. This review article analyzes relevant studies from the last few years and highlights the challenges faced by fake news detection models and the impact on their performance. The investigation of fake news detection studies focused on the following aspects and their impact on detection accuracy: datasets, overfitting/underfitting, image-based features, feature vector representation, machine learning models, and data fusion. Based on the analysis of relevant studies, the review shows that these issues significantly affect the performance and accuracy of detection models. This review aims to give other researchers a basis for improving fake news detection models in the future.
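As a loose illustration of two of the issues this review analyzes (supervised training on a labelled dataset and diagnosing overfitting/underfitting with a held-out split), the following sketch uses invented placeholder articles and labels rather than any real fake news corpus:

```python
# Illustrative sketch of the supervised-learning setting the review analyzes:
# a text classifier whose train vs. held-out scores reveal over/underfitting.
# The articles and labels below are invented placeholders, not a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

articles = [
    "Celebrity X secretly cures disease with miracle fruit",
    "Government confirms new infrastructure budget for 2021",
    "Scientists admit the moon landing was filmed in a studio",
    "Central bank raises interest rates by 0.25 percentage points",
] * 25                      # repeat to obtain a minimally sized toy set
labels = [1, 0, 1, 0] * 25  # 1 = fake, 0 = real (invented labels)

X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=0.3, random_state=42, stratify=labels
)

vectorizer = TfidfVectorizer(max_features=5000)
Xtr = vectorizer.fit_transform(X_train)   # fit the vocabulary on training data only
Xte = vectorizer.transform(X_test)        # reuse it for the held-out split

clf = LogisticRegression(max_iter=1000).fit(Xtr, y_train)

# A large gap between these two scores indicates overfitting;
# two equally low scores indicate underfitting or uninformative features.
print("train accuracy:", accuracy_score(y_train, clf.predict(Xtr)))
print("test accuracy:",  accuracy_score(y_test, clf.predict(Xte)))
```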
... This is due to findings that traditional Machine Learning (ML) techniques cannot match them in a wide range of sectors [26], [4], [6]. Therefore, it is necessary to develop a new model to detect disinformation that targets Islamic issues by using a deep learning technique that has proven its accuracy in detecting fake news [27]. The contributions of this research could be summarized as follows: ...
Article
Full-text available
Nowadays, many people receive news and information about what is happening around them from social media networks. These platforms are available free of charge and allow anyone to post news or information or express an opinion without any restrictions or verification, thus contributing to the dissemination of disinformation. Recently, disinformation about Islam has spread through pages and groups on social media dedicated to attacking the Islamic religion. Many studies have provided models for detecting fake news or misleading information in domains such as politics, society, the economy, and medicine, but not in the Islamic domain. Because of the negative impact of spreading disinformation targeting the Islamic religion, Islamophobia is increasing, which threatens societal peace. In this paper, we present a Bidirectional Long Short-Term Memory-based model trained on an Islamic dataset (CIDII) that was collected and labeled by two separate specialized groups. In addition, using a pre-trained word-embedding model generates out-of-vocabulary terms, because the task deals with a specific domain. To address this issue, we retrained the pre-trained GloVe model on Islamic documents using the Mittens method. The experimental results show that our proposed Bidirectional Long Short-Term Memory model with the GloVe embeddings retrained on Islamic articles handles text sequences better than unidirectional models and achieves a detection performance of 95.42% on the Area Under the ROC Curve measure compared to the other models.
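A minimal sketch of a bidirectional LSTM text classifier of the kind this paper describes is given below. The layer sizes, vocabulary size, and randomly initialized embedding matrix are assumptions for illustration, not the authors' actual configuration; in the paper, the embedding matrix would come from GloVe vectors retrained on in-domain text (e.g. via the Mittens method).

```python
# Hedged sketch of a bidirectional LSTM classifier with pre-set embeddings.
# Hyperparameters and the embedding matrix are placeholders, not the paper's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 20000      # assumed vocabulary size
embedding_dim = 100     # GloVe-style embedding dimensionality
max_len = 200           # fixed sequence length after padding/truncation

# Placeholder for a (re)trained embedding matrix of shape (vocab_size, embedding_dim).
embedding_matrix = np.random.normal(size=(vocab_size, embedding_dim)).astype("float32")

model = models.Sequential([
    tf.keras.Input(shape=(max_len,)),
    layers.Embedding(
        input_dim=vocab_size,
        output_dim=embedding_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,   # keep embeddings fixed; set True to fine-tune them
    ),
    # The Bidirectional wrapper reads the sequence left-to-right and right-to-left,
    # which is what distinguishes this model from a unidirectional LSTM.
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output: disinformation or not
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```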
... This generation of misinformation, its ontology, detection methods and the motivations behind it have aroused much interest in the scientific community, which has carried out several studies to improve our understanding of the phenomenon. There have been studies such as the one by Habib et al. (2019), which endeavoured to classify misinformation into rumours, fake news, disinformation and hoaxes, and also described their characteristics to facilitate their detection and prevent cheating. Meanwhile, Tandoc et al. (2018) sought to categorise the purposes of false information that is disseminated online into satire, parody, political propaganda, advertising and manipulation. ...
... Others, such as Viviani and Pasi (2017), identified and quantified a user's credibility when entering information on social media, while Conroy et al. (2015) demonstrated that some techniques are more effective than others in detecting online deception and identifying fraudsters. Although previous studies have addressed different aspects of the problem (Conroy et al., 2015; Habib et al., 2019; Parikh & Atrey, 2018; Shu et al., 2017; Viviani & Pasi, 2017; Zubiaga et al., 2018), they all highlight the need to create control mechanisms to ensure the quality of databases, and to use the knowledge extracted from them to compare approaches and profile cheaters better. ...
Article
Full-text available
The digital environment, which includes the Internet and social networks, is propitious for digital marketing. However, the collection, filtering and analysis of the enormous, constant flow of information on social networks is a major challenge for both academics and practitioners. The aim of this research is to assist the process of filtering the personal information provided by users when registering online, and to determine which user profiles lie the most, and why. This entailed conducting three different studies. Study 1 estimates the percentage of Spanish users by stated sex and generation who lie the most when registering their personal data by analysing a database of 5,534,702 participants in online sweepstakes and quizzes using a combination of error detection algorithms, and a test of differences in proportions to measure the profiles of the most fraudulent users. Estimates show that some user profiles are more inclined to make mistakes and others to forge data intentionally, the latter being the majority. The groups that are most likely to supply incorrect data are older men and younger women. Study 2 explores the main motivations for intentionally providing false information, and finds that the most common reasons are related to amusement, such as playing pranks, and lack of faith in the company's data privacy and security measures. These results will enable academics and companies to improve mechanisms to filter out cheaters and avoid including them in their databases.
... This issue is not related to a specific age or gender and does not depend on education [3]. Researchers observed that fake news spread 70% more than real news [4]. Recent studies have stated that the dissemination of fake news on social media has become a pressing issue that has attracted global attention and requires intervention and an immediate halt to its spread [5], because it is creating social panic and economic unrest [6]. ...
... Social media is used by many people to post or share news or information. This is because, unlike traditional media, news distribution through networks is real-time and quick, there are no costs involved, and no validation requirements are imposed [4]. For example, in 2012 in the U.S., about 49% of users shared news on social media platforms. ...
... For example, in 2012 in the U.S., about 49% of users shared news on social media platforms. The Pew Research Center issued a report in 2016 stating that more than 62% of users receive their news from social networking pages daily [4], while in 2018, a report indicated that two-thirds of adults in the U.S. received their news from these pages [38]. The fact that social media is used on a variety of devices has significantly expanded the amount of data available [15]. ...
Article
Full-text available
In recent times, social media has become the primary way people get news about what is happening in the world. Fake news surfaces on social media every day and has harmed several domains, including politics, the economy, and health. Additionally, it has negatively affected society's stability. Although numerous studies have offered useful models for identifying fake news in social networks using many techniques, certain limitations and challenges remain. Moreover, the accuracy of detection models is still notably poor given that this is a critical topic. Although there are many review articles, most have concentrated on specific, recurring components of fake news detection models. For instance, the majority of reviews in this discipline only mentioned datasets or categorized them according to labels, content, and domain. Even though the majority of detection models are built using a supervised learning method, how the limitations of these datasets affect the detection models has not been investigated. This review article highlights the most significant components of fake news detection models and the main challenges they face. Data augmentation, feature extraction, and data fusion are some of the approaches explored in this review to improve detection accuracy. Moreover, it discusses the most prominent techniques used in detection models and their main advantages and disadvantages. This review aims to help other researchers improve fake news detection models.
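As an invented, minimal example of one of the approaches mentioned above (feature-level data fusion), the sketch below concatenates TF-IDF text features with simple numeric metadata before a single classifier; the posts, metadata fields, and labels are placeholders rather than data from the review:

```python
# Illustrative feature-level fusion: TF-IDF text features + numeric metadata
# are concatenated into one feature matrix for a single classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "Shocking miracle cure doctors don't want you to know",
    "Parliament passes the annual budget after a long debate",
]
# Invented metadata per post: [number of shares, account age in days].
metadata = np.array([[12000, 3],
                     [150, 2400]], dtype=float)
labels = [1, 0]  # 1 = fake, 0 = real (toy labels)

text_features = TfidfVectorizer().fit_transform(texts)  # sparse text features
fused = hstack([text_features, csr_matrix(metadata)])    # early (feature-level) fusion

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused))
```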
Article
Full-text available
As social media and web-based forums have grown in popularity, the fast-spreading trend of fake news has become a major threat to governments and other agencies. With the rise of social media and internet platforms, misinformation can quickly spread across borders and language boundaries. Detecting and neutralizing fake news in several languages can help protect the integrity of global elections, political discourse, and public opinion. The lack of a robust multilingual database for training classification models makes detecting fake news a difficult task. This paper addresses the problem by describing several forms of fake news (serious fabrications, large-scale hoaxes, stance news, deceptive news, satire news, clickbait, misinformation, and rumour). The review covers the different steps, features, and tools for mitigating the scourge of information pollution, as well as the available datasets. It presents a taxonomy for fake news detection, giving a comprehensive overview and analysis of existing DL-based algorithms that focus on diverse techniques. The paper also covers monolingual and multilingual fake news detection models. Finally, it ends with the technical challenges.