Figure 3. Graph of misinformation modeling (Color figure online)

Source publication
Conference Paper
Full-text available
The spread of misinformation online is specifically amplified by the use of social media, yet the tools for allowing online users to authenticate text and images are available though not easily accessible. The authors challenge this view, suggesting that corporations responsible for the development of browsers and social media websites need to incorpor...

Contexts in source publication

Context 1
... Vertices which belong to set R, the set of reading vertices, represent users who only receive information. A vertex r is a neighbor of a vertex s if and only if there is an edge from r to s in G. Furthermore, all vertices from V can be divided into subsets (layers) depending on their length (number of edges) from the source, as shown in Fig. 3. Assuming i and j are any vertices of the given graph, the vertices i and j are connected by certain chains of edges going through different layers. The main goal of the team's approach is to see the effect of cascade labeling in the models that they created. Note that cascade labeling symbolizes pressing the Right-click ...
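To make the structure above concrete, here is a minimal sketch of the layered graph described in Context 1, using a plain adjacency-list representation. The graph shape, the vertex names, and the function build_layers are illustrative assumptions, not taken from the source publication.

```python
# Directed graph G: each vertex maps to the neighbours it sends information to.
# The shape and names here are illustrative placeholders.
G = {
    "s": ["a", "b"],   # s: the source vertex that posts the information
    "a": ["c"],
    "b": ["c", "r1"],
    "c": ["r2"],
    "r1": [],          # vertices in R only read; they pass nothing on
    "r2": [],
}
R = {"r1", "r2"}       # the set of reading (receive-only) vertices

def build_layers(graph, source):
    """Group vertices into layers by the length of the shortest chain
    of edges from the source (a breadth-first traversal)."""
    layers, seen, frontier = [], {source}, [source]
    while frontier:
        layers.append(frontier)
        nxt = []
        for v in frontier:
            for w in graph.get(v, []):
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return layers

print(build_layers(G, "s"))  # [['s'], ['a', 'b'], ['c', 'r1'], ['r2']]
```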
Context 2
... As illustrated in Fig. 3, by selecting edge e, the entire subgraph downstream of e, where l stands for length, is colored in red. This step is known as cascade labeling. Subsequently, this cascade labeling results in coloring some of the vertices from G red. Coloring in red symbolizes a node authenticating the information as untrue and the ...
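A minimal sketch of the cascade labeling step described in Context 2: once an edge e = (u, v) is selected (the user flags the post as untrue), every vertex reachable through v is colored red. The graph and the function name cascade_label are illustrative assumptions.

```python
# Same illustrative graph as in the sketch for Context 1.
G = {
    "s": ["a", "b"],
    "a": ["c"],
    "b": ["c", "r1"],
    "c": ["r2"],
    "r1": [],
    "r2": [],
}

def cascade_label(graph, edge):
    """Color red every vertex reachable through the selected edge,
    i.e. the entire downstream subgraph (depth-first traversal)."""
    u, v = edge
    red, stack = set(), [v]
    while stack:
        x = stack.pop()
        if x not in red:
            red.add(x)
            stack.extend(graph.get(x, []))
    return red

print(cascade_label(G, ("s", "b")))  # e.g. {'b', 'c', 'r1', 'r2'}
```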
Context 3
... after n repetitions of this process in graph G, all vertices from subset V_l can be colored red. Therefore, by applying the cascade labeling procedure, some of the destination vertices will be preserved from receiving misinformation. The model in Fig. 3 assumes that only one vertex authenticates the information and passes that information on. The first vertex to authenticate and turn from black to red is modeled as red with a black outline and labeled as vertex ...
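Continuing the sketch, Context 3 describes repeating the labeling step n times; the fragment below counts how many reading vertices end up preserved from the misinformation. The random choice of authenticating vertex and n = 2 are hypothetical placeholders, not the paper's experiment.

```python
import random

# Same illustrative graph as in the previous sketches.
G = {
    "s": ["a", "b"],
    "a": ["c"],
    "b": ["c", "r1"],
    "c": ["r2"],
    "r1": [],
    "r2": [],
}
R = {"r1", "r2"}  # reading (destination) vertices

def cascade_label(graph, start, red):
    """Color red every vertex reachable from the authenticating vertex."""
    stack = [start]
    while stack:
        x = stack.pop()
        if x not in red:
            red.add(x)
            stack.extend(graph.get(x, []))

random.seed(0)
red = set()
for _ in range(2):  # n = 2 repetitions (hypothetical)
    candidates = sorted(set(G) - {"s"} - red)
    if not candidates:
        break
    cascade_label(G, random.choice(candidates), red)

# Red readers never accept the misinformation as authentic.
print("preserved readers:", red & R)
```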

Similar publications

Article
Full-text available
In the age of mass information and misinformation, the corporate duty of developers of browsers, social media, and search engines is falling short of the minimum standards of responsibility. The tools and technologies are already available to combat misinformation online but the desire to integrate these tools has not taken enough priority to warr...
Article
Full-text available
In this paper, a new mathematical formulation for the problem of de-anonymizing social network users by actively querying their membership in social network groups is introduced. In this formulation, the attacker has access to a noisy observation of the group membership of each user in the social network. When an unidentified victim visits a malici...

Citations

... In a nuclear disaster, information overflows, not only from the mass media but also from other sources, which makes it difficult to understand what is correct [28]. On the Internet, anonymous individuals and organizations can disseminate information in an almost unlimited manner, without scrutiny or editorial moderation, and spread it rapidly [29,30]. Therefore, it is probable that after the Fukushima nuclear accident, disinformation and rumors were disseminated on the Internet. ...
Article
Full-text available
The nuclear accident that accompanied the Great East Japan Earthquake of 11 March, 2011, was also an information disaster. A serious problem that arose after the accident and persisted for a long time was the damage caused by harmful rumors (DCBHR). In 2016, a cross-sectional questionnaire survey on health and information was conducted in Fukushima. The eligible population of this survey was 2000 Fukushima residents, which included those in the evacuated areas. We received 861 responses. Data were analyzed using the responses to the question about perceived DCBHR as the objective variable and the sources of information residents trusted and the media they used as explanatory variables. A multiple logistic regression analysis revealed that those who trusted government ministries and local commercial TV were significantly associated with no effect. In contrast, those who used Internet sites and blogs were significantly associated with a negative effect. This study underlines the pivotal importance of media and information literacy and education and discusses how these should be improved to avoid DCBHR in the future. Furthermore, accurate information should be made available to all sections of the population to diminish DCBHR.
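For readers unfamiliar with the method, here is a minimal sketch of the kind of multiple logistic regression the study describes: perceived DCBHR as the binary objective variable and trust and media-use indicators as explanatory variables. The data below are randomly generated placeholders, not the survey responses.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 861  # number of responses reported in the study

# Placeholder 0/1 indicators: trust_gov, trust_local_tv, use_internet.
X = rng.integers(0, 2, size=(n, 3))
# Placeholder objective variable: perceived DCBHR (1 = negative effect).
y = rng.integers(0, 2, size=n)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params))  # odds ratios, one per explanatory variable
```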
... In [18], the authors identified a collection of variables that tend to be shared among many social media platforms. The authors showed that fighting fake news online is governed by the following variables: authentication, passing on information, cross-wire, same level communication, and reverse validation. ...
... Authentication [18]: the rate of people who will take the time to check the validity of a post. ...
... Crosswire [18] ...
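Since the contexts above only name the variables, here is a minimal sketch that collects them as simulation parameters. The numeric values are hypothetical placeholders, not the values used in [18].

```python
# Hypothetical placeholder rates for the variables named in [18].
RATES = {
    "authentication": 0.10,            # users who check a post's validity
    "passing_on": 0.60,                # users who forward the post
    "cross_wire": 0.20,                # a post reaching the same user again
    "same_level_communication": 0.15,  # users alerting peers on the same layer
    "reverse_validation": 0.05,        # corrections flowing back toward the source
}

def expected_forwards(receivers: int) -> float:
    """Expected number of receivers who pass the post on, assuming
    authenticated posts are not forwarded."""
    return receivers * RATES["passing_on"] * (1 - RATES["authentication"])

print(expected_forwards(1000))  # 540.0 under these placeholder values
```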
Conference Paper
Full-text available
Several researchers have attempted to investigate the processes that govern and support the spread of fake news. This paper collates and identifies these variables. This paper then categorises these variables based on three key players that are involved in the process: Users, Content, and Social Networks. The authors conducted an extensive review of the literature and a reflection on the key variables that are involved in the process. The paper has identified a total of twenty-seven variables. Then the paper presents a series of tasks to mitigate or eliminate these variables in a holistic process that could be automated to reduce or eliminate fake news propagation. Finally, the paper suggests further research into testing the method in lab conditions.
... There are, however, several sets of common actions, shared across platforms, that propagate the spread of posts, news, and fake news. Dordevic et al. (2016), when modelling the propagation of misinformation, demonstrated a proof-of-concept and identified the variables involved in the travel of information and misinformation. The authors showed that combating fake news online is influenced by the following variables: the rate of authentication, the passing-on-information rate, the average cross-wire rate, the success rate of same-level communication, and the reverse validation rate. ...
... of people who will take the time to check the validity of a post (Dordevic et al. 2016). Crosswire: the rate at which information crosses the same user multiple times. ... someone takes an active role to communicate or alert other users to the authenticity of a post. ...
Chapter
Full-text available
This chapter looks at the variables involved in the spread of fake news to better understand the factors that contribute to the success of fake news. This issue remains open-ended as the solution has not been developed, and several researchers have been unsuccessful in reaching a clear approach because academics and industry are failing to consider the larger environment in which news on social media thrives. The variables involved in the dissemination of misinformation can be categorised into four categories: Human factors, Interaction factors, Platform factors, and Content factors. The human factors include attraction to rumours and the tendency to seek information that resonates with and affirms one's beliefs. Interaction factors include different ways to engage social media posts, verification of the information, and the likelihood of taking any number of actions that either promote or demote news online. Platform factors include platform algorithms and platform tools employed online. Finally, the content factor relates to the research that shows fake news posts tend to have a distinctive linguistic style, multimedia content, and sourcing pattern that could help identify them early on. These factors rarely operate in isolation; rather, the combination of all these factors may explain the complexity of understanding how fake news posts get a life of their own. Following this review, this chapter presents a step-by-step process of verifying news posted online by looking at the features identified above. This chapter will provide examples and a practical element for readers to cross-reference fake news. The human factor: the study of rumours. The human factor can vary, as demonstrated in chapter 4 when we looked at the psychology of fake news. The key variables that one would need to take into account when considering human factors include the attraction to rumours, information that confirms one's beliefs, the likelihood of taking any number of actions, and the tendency to invest time and effort to verify the information. Some of these factors have been extensively studied and modelled while others remain new fields of study. For long, the study of rumours was linked to the study of epidemiological modelling. Epidemiology is the study of how disease spreads in a given community, providing perhaps one of the most extensively studied topics that can be applied to fake news. More than 50 years ago, Daley and Kendall (1964) explained the analogy between the spreading of infectious disease and the dissemination of rumours. They examined the spreading of a rumour through the lens of mathematical epidemiology. Researchers highlighted that a mathematical model for the spreading of rumours could be created in several different ways, depending on the growth and decay of the spreading process. However, the environment in which rumours operate in brick-and-mortar spaces does not necessarily match that of the virtual world of social media. Still, the methods derived from mathematical epidemiology can provide valuable insights.
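As a small illustration of the Daley and Kendall (1964) analogy mentioned above, the following deliberately simplified sketch runs an ignorant/spreader/stifler rumour process; the population size and the pairing rule are illustrative assumptions, cruder than the original model.

```python
import random

random.seed(1)
ignorant, spreader, stifler = 990, 10, 0  # hypothetical starting population

while spreader > 0:
    # A spreader meets one random other individual.
    other = random.choices(
        ["ignorant", "spreader", "stifler"],
        weights=[ignorant, spreader, stifler],
    )[0]
    if other == "ignorant":   # the rumour spreads
        ignorant -= 1
        spreader += 1
    else:                     # meeting an informed person stifles the spreader
        spreader -= 1
        stifler += 1

# A classic feature of such models: a sizeable fraction never hears the rumour.
print("never heard the rumour:", ignorant)
```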
... In paper [26], the authors set out to demonstrate a proof-of-concept using 2D modeling and identified the variables involved in the travel of information (see fig. 4). Fig. 4. The Authenticate, Passing on rate, and Cross-Wire rate simulation [26]. The paper identified eight key variables and applied theoretical values to demonstrate their applicability. These variables are: ... as the first vertex and ... as the last vertex of the given simulation. ...
... There is still more to be understood in order to develop a representative formula and understand the algorithms required to develop effective combating tools. One research limitation points to the fact that two-dimensional simulations do not reflect the way misinformation travels in spatial space [26]. Hence, as part of the future research direction, the authors acknowledged the need for a further three-dimensional simulation to be conducted using Biolayout [27]. ...
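To illustrate the proof-of-concept idea attributed to [26], here is a minimal Monte Carlo sketch estimating how often misinformation travels from the first vertex to the last vertex of a simulation when, at each hop, some users authenticate the post and some simply do not pass it on. The hop count and rates are hypothetical placeholders, not the paper's eight variables or their values.

```python
import random

random.seed(2)
HOPS = 6          # layers between the first and the last vertex (hypothetical)
AUTH_RATE = 0.10  # chance a user checks the post's validity at each hop
PASS_RATE = 0.80  # chance an unchecked post is passed on

def reaches_last_vertex() -> bool:
    for _ in range(HOPS):
        if random.random() < AUTH_RATE:   # authenticated: the cascade stops
            return False
        if random.random() >= PASS_RATE:  # not forwarded: the cascade dies out
            return False
    return True

trials = 10_000
hits = sum(reaches_last_vertex() for _ in range(trials))
print(f"reached the last vertex in {hits / trials:.1%} of trials")
```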
Conference Paper
Full-text available
Academic research shows increased reliance of online users on social media as a main source of news and information. Researchers found that young users are particularly inclined to believe what they read on social media without adequate verification of the information. There has been some research to study the spread of misinformation and the identification of key variables in developing simulations of the process. Current literature on combating misinformation focuses on individuals and neglects social newsgroups, key players in the dissemination of information online. Using benchmark variables and values from the literature, the authors simulated the process using Biolayout, a big data modelling tool. The results show social newsgroups have a significant impact on the explosion of misinformation as well as on combating misinformation. The outcome has helped better understand and visualize how misinformation travels in the spatial space of social media.
... It remains an open and challenging topic. Dordevic et al. (2016) presented a two-dimensional simulation that provided the basis for a proof-of-concept and the identification of key variables. The authors set out to demonstrate the proof-of-concept using 2D modelling and identified the variables involved in the travel of information. ...
Chapter
Researchers have attempted to model the spread of information as well as misinformation in various ways. In the early stages, these attempts revolved around modelling several nodes (users) communicating to a group of users; the model then evolved into a two-dimensional network, and finally, it ended up with a three-dimensional model. The main idea of this chapter is to present the latest findings and to set out the premise that the ways in which individuals react to false news are comparable to epidemiology, or the science of spreading diseases. Some individuals may remain naturally immune, while some may be vulnerable. Some would be afflicted and the rest may not. This chapter demonstrates four different scenarios in which the authors have modelled the spread of misinformation empirically, mathematically, or through lab simulations to showcase their results.
... Modelling and simulation of variables in such an ecosystem that describes the processes of misinformation propagation can provide an understanding of misinformation propagation and test the efficiency of a control strategy. Dordevic et al. (2016) set out to demonstrate a proof-of-concept using 2D modelling and identified the variables involved in the travel of information (figure 8.1). The news source in figure 8.1 sends fake news spreading, with some individuals believing and some not, some individuals sharing and some not. ...
Chapter
This chapter examines academic approaches to combating fake news. Researchers have looked at understanding the underlying incentives behind the creation and consumption of fake news, characteristics for its success, initiatives to educate audiences, filters for fake news, the use of algorithms, and the employment of automated and semi-automated validation tools. Some of the research into these techniques is industry-led and, as such, was covered in Chapter 7. This chapter will primarily focus on academic research, highlighting the differences between academic approaches and those of the industry in the way researchers attempt to apply a level of pragmatism to gain an insight into human processes and how these can be paired with technological solutions to mediate behaviours around fake news consumption. This chapter explores these distinct approaches while acknowledging that the challenges posed by fake news remain open to further introspection.
... Hence, there have been multiple attempts to develop means and tools to minimise the spread of misinformation on social media, since allowing misinformation to flow freely not only wastes users' time but can also be dangerous [1]. Several approaches have been developed to limit the spread of fake news by allowing users to authenticate it from within their web browsers [2], through A.I. [3], by labelling news [4], and many more. ...
Article
Full-text available
In March 2019, Facebook updated its security procedures, requesting ID verification from people who wish to advertise or promote political posts or adverts. The announcement received little media coverage even though it is an interesting development in the battle against fake news. This paper reviews the current literature on different approaches in the battle against the spread of fake news, including the use of computer algorithms, A.I., and the introduction of ID checks. Critical to the evaluation is consideration of ID checks as a means to combat the spread of fake news. To understand the process and how it works, the team undertook a social experiment combined with reflective analysis to better understand the impact of ID check policies when combined with other standard policies of a typical platform. The analysis identifies grave concerns. In a wider context, standardising such a policy will leave political activists in some countries vulnerable to reprisal from authoritarian regimes. Other impacts include people who use fake names to protect the identity of adopted children or to protect their anonymity from abusive partners. The analysis also points to the fact that troll farms could bypass these checks, rendering the use of ID checks less effective in the battle to combat fake news.
... Where applicable, the post is cross-referenced with other outlets for better balance, such as Wikipedia [31]. Fig. 4 simulates the process by which misinformation travels. In [32,33], the authors present a series of lab simulations for combating misinformation online that help identify and demonstrate important variables and factors that contribute to the success in spreading misinformation. ...
Conference Paper
Full-text available
Social media is now accepted as a dominant source of news sharing and political activism on the internet. However, this has given way to an increase in the sharing of fake news and misinformation online. While there have been many attempts to model information cascading, the propagation of misinformation online, and means of combating misinformation, there has been no data that can be used to enhance the understanding and improve these models. Several researchers have suggested variables and assigned proposed values in order to develop a better understanding; these values are considered hypothetical. Using a simulated social media interface with factual and non-factual informational posts, this paper designs a survey that can study users' interaction with the page, trust in online and third-party tools, and a scale survey to collect the data needed regarding users' behaviour. The outcome is a six-part survey. In parts one to three, the survey assesses users' 'Trust Scale' in the sharing of information and news on social media, behaviour online when viewing posts, and the impact of having an online tool that allows users to authenticate posts. In parts four to six, the survey assesses the trust people could have in such a tool, assesses trust in third-party checkers, and finally collects demographic data for cross-analysis purposes.
... Unlike public organizations and media institutions, many blog writers lack the obligation and responsibility to communicate accurate information. Confirmation or verification of the information prior to sharing is rare, and thus they cannot be considered highly credible sources [31,32]. Baseless rumors and conspiracy theories spread very quickly on the internet [33], which may also explain why those who used the internet for their information had higher levels of anxiety. ...
Article
Full-text available
Following the March 2011 accident at the Fukushima Daiichi Nuclear Power Plant, many residents of Fukushima have faced anxieties about the health impacts of radiation exposure. Considering that the source of information may influence resident anxiety, this study aimed to elucidate the correlation between the two. In addition, a health literacy query was included to examine a possible relationship between anxiety and health literacy skills. A mail survey was conducted in August 2016 among 2000 residents of Fukushima Prefecture aged 20 to 79 years. Survey items included questions about current health anxieties caused by radiation, trusted sources of information about radiation, and media used to obtain information on radiation. The survey's valid response rate was 43.4%. Results of multiple linear regression analysis revealed that anxiety was significantly higher for the groups indicating "trust in citizen groups" and "use of internet sites." Anxiety was significantly lower for the groups indicating "trust in government ministries," "trust in local government," and "use of local broadcast television." Anxiety was also significantly lower for groups with higher health literacy. It was found that the significant relationship to anxiety varies depending on the sources of trust and the media used. There is a possibility that this was caused by differences in the content of each information source and in media reports. In preparation for any future nuclear accident, the government may consider action to improve the media literacy of residents. In addition, improving the health literacy of both the recipient and the sender of information can improve access to information and thereby safeguard the health and well-being of the public.