Fig 12 - available via license: CC BY
Mean false-claim inference scores across conditions NE (no-exposure) and FCOl± (fact-check-only, with/without cognitive load) in Experiment 3. Error bars show standard errors of the mean.
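For reference, the standard error of the mean shown by the error bars is simply the sample standard deviation divided by the square root of the sample size. A minimal sketch, using made-up inference scores rather than the study's data:

```python
import math

def sem(scores):
    """Standard error of the mean: sample SD (Bessel-corrected) / sqrt(n)."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance with Bessel's correction (n - 1)
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    return math.sqrt(var) / math.sqrt(n)

# Hypothetical per-participant inference scores for one condition
scores = [2.0, 3.0, 4.0, 3.0, 2.0, 4.0]
print(round(sem(scores), 3))  # prints 0.365
```

This matches `scipy.stats.sem` with its default settings, since both use the n - 1 denominator for the sample variance.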


Source publication
Article
Full-text available
Misinformation often continues to influence inferential reasoning after clear and credible corrections are provided; this effect is known as the continued influence effect. It has been theorized that this effect is partly driven by misinformation familiarity. Some researchers have even argued that a correction should avoid repeating the misinformat...

Similar publications

Article
Full-text available
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide and a global health concern. COPD self-care knowledge is a cornerstone of self-management of chronic illness. The objective of this study was to determine the level of self-care knowledge among COPD patients. A descriptive, cross-sectional design and purpos...

Citations

... While there is a dearth of studies on misinformation perceptions and behaviour, many studies have explored information literacy skills in the Arab region. Information literacy is an important topic because many studies have attributed the problem of misinformation to a lack of information literacy skills (Ecker et al., 2020; Shehata, 2022; Shehata and Eldakar, 2021). Alwreikat et al. (2021) explored Arab women's feelings while seeking health information during the COVID-19 outbreak. ...
Article
Full-text available
Purpose: The purpose of this study is to investigate individuals' perceptions and behaviour when dealing with misinformation on social media platforms. While misinformation is not a new phenomenon, the COVID-19 outbreak accelerated its spread through social media outlets, leading to widespread exposure to false or misleading information. This exposure can have serious consequences for individuals' decision-making and behaviour, especially for critical decisions related to education or healthcare. The use of social media as a source of information makes it essential to understand how people perceive and respond to misinformation in order to develop effective strategies for mitigating its harmful effects.
Design/methodology/approach: This large-scale study explores Omani individuals' perceptions and behaviour regarding misinformation on the social Web in a series of studies that seek to enhance the authorities' response to misinformation. The study adopted a quantitative approach to data collection. Using WhatsApp as a social networking platform, a survey was disseminated to capture perceptions and behaviour among different segments of citizens in Oman.
Findings: The findings showed that Omani participants have high verification skills, implying high information literacy. Results also indicated that misinformation created doubt and anxiety among participants, and that it hindered many participants' ability to take countermeasures and obtain reliable information.
Originality/value: This was a large-scale study conducted in Oman, making it one of only a few studies in the region on perceptions of and behaviour towards misinformation. The findings help explain how different cultures interacted with COVID-19 misinformation and offer useful insights that can help health information professionals design preventive resources that help people obtain accurate information during crises.
... When participants are later asked specific open-ended inference questions about the event, the CIE leads them to refer to the retracted information. It is worth noting that current research on the CIE is moving away from this traditional paradigm, no longer limiting itself to unfolding events, to providing information in small pieces, or to open-ended inference questions about the event [23][24][25][26]. Belief perseverance bias (BPB) is defined as the tendency to persevere in beliefs or opinions even after the information on which those beliefs or opinions were based has been discredited [27]. ...
Article
Full-text available
Belief perseverance bias refers to individuals’ tendency to persevere in biased opinions even after the misinformation that initially shaped those opinions has been retracted. This study contributes to research on reducing the negative impact of misinformation by mitigating the belief perseverance bias. The study explores the previously proposed awareness-training and counter-speech debiasing techniques, further developing them by introducing new variants and combining them. We investigate their effectiveness in mitigating the belief perseverance bias after the retraction of misinformation related to a real-life issue in an experiment involving N = 876 individuals, of whom 364 exhibit belief perseverance bias. The effectiveness of the debiasing techniques is assessed by measuring the difference between the baseline opinions before exposure to misinformation and the opinions after exposure to a debiasing technique. Our study confirmed the effectiveness of the awareness-training and counter-speech debiasing techniques in mitigating the belief perseverance bias, finding no discernible differences in the effectiveness between the previously proposed and the new variants. Moreover, we observed that the combination of awareness training and counter-speech is more effective in mitigating the belief perseverance bias than the single debiasing techniques.
... Despite the concern about the familiarity backfire effect, which suggests that pairing a correction with a false statement could unintentionally increase perceived familiarity with the misinformation and therefore backfire (e.g., Lewandowsky et al., 2012; Nyhan & Reifler, 2010), recent research has concluded that the familiarity backfire effect is weak or negligible. This suggests that corrective information remains robustly persuasive even when the original false claims are restated (Ecker et al., 2020; Swire-Thompson et al., 2020). Nevertheless, questions persist as to whether the backfire effect remains unlikely even when false claims are repeatedly presented within corrective information on social media. ...
... Given that users might feel fatigued by the repeated exposure to the identical myth-fact correction, they might only pay attention to the myths that are located earlier in the correction and disregard the remaining parts where the important facts are stated. Therefore, our results on the perceived misinformation familiarity effect of the repeated myth-fact correction mirror the argument by Ecker et al. (2020) that "recommendations to avoid unnecessary misinformation repetition should arguably remain in place" (p. 22) and expand it to social media spaces. ...
Article
Full-text available
Focusing on vaccine-related misinformation, this online experiment (N = 502) examined how short-term repeated exposure to corrective information may unexpectedly increase misinformation credibility through misinformation familiarity. The study found that repeated exposure to myths within corrective information increased perceived familiarity with the misinformation about COVID-19 vaccines. This effect, which ultimately increased misinformation credibility, was pronounced even among individuals with low and moderate levels of prior belief in the misinformation. Our findings suggest practical implications for minimizing the unexpected backfire effect of corrections against vaccine misinformation on social media. Debunking should avoid framing corrections around the original false claims and should avoid unnecessary repetition of the correction.
... Materials were again Tweets that we created to contain false information. This time, in addition to myths fact-checked on Snopes, we also adapted additional false claims that were fact-checked on other online sources or used as materials by Ecker et al. (2020) into the content of our Tweets. ...
Article
Full-text available
General Audience Summary Many individuals wonder what they can do when they see other users post false information online. Fact-checking can take substantive time and effort, and the information may not always be available to perform these fact-checks. People may also be hesitant to directly correct and confront their peers online. We test a novel, alternative intervention for addressing online misinformation: responding to posts containing false information with “truth queries”: questions that draw attention to truth or criteria used to judge truth, such as the presence of evidence or the credibility of the information’s source. Examples of truth queries include “How do you know this is true?” “Where did you learn this?” and “Do you have an example?” These questions may alert other users to pay more attention to the accuracy of the false information and communicate that the information is not universally accepted. In a series of three studies, we showed participants Tweets containing false information that appeared with no reply, a reply containing a truth query, or a reply unrelated to truth. We found that the presence of a user reply containing a social truth query consistently reduced participants’ belief in and intent to share the posts containing false information compared to when the same posts appeared with no replies or a reply unrelated to truth. We also found that a variety of different types of truth queries were effective, making this approach flexible and adaptable. This research provides initial evidence for the use of social truth queries as an easy-to-implement, nonconfrontational, user-driven strategy for addressing misinformation online.
... Researchers seeking to understand how to correct misinformation (Lewandowsky et al., 2012) have focused on fact-checking. However, one can fact-check news, but not beliefs (Diaz Ruiz and Nilsson, 2023: 29), and studies show that fact-checking can backfire (Ecker et al., 2020;Nyhan and Reifler, 2010). Others studied what makes people susceptible to misinformation (Del Vicario et al., 2016;Lewandowsky et al., 2017), including the sociality that enables its circulation (Altay and Acerbi, 2023), and its effects such as radicalization (Radanielina Hita and Grégoire, 2023). ...
Article
Full-text available
The proliferation of deceptive content online has led to the recognition that some actors in the digital media ecosystem profit from disinformation's rapid spread. The reason is that a market designed to monetize engagement with fringe audiences encourages actors to create content that can go viral, hence creating financial incentives to circulate controversial claims, adversarial narratives, and deceptive content. The theoretical claim of this piece is that the actors and practices of digital media platforms can be analyzed through their market practices. Through this lens, scholars can study whether digital markets such as programmatic advertising, commercial content moderation, and influencer marketing make money from circulating disinformation. To show how disinformation is an expected outcome, not breakage, of the current media market in digital platforms, the article analyzes the business models of pre-digital broadcasting media, partisan media, and digital media platforms, finding qualitatively different forms of disinformation in each media market iteration.
... Some have argued that familiarity backfire may be a particular risk when a correction of novel misinformation is presented without prior misinformation exposure [55]. However, recent studies from our lab have found that even without initial misinformation exposure, corrections are unlikely to backfire [46,56]. Cumulatively, we believe that the evidence shows there is limited risk of familiarity backfire effects and that even the correction of novel misinformation is unlikely to backfire. ...
... Several studies have also suggested there are benefits to using a narrative format or adding narrative elements to a correction [52,54,55]. However, when narrative corrections are contrasted with similarly detailed non-narrative corrections, there does not appear to be any benefit of narrative [51,53,56]. Overall, there is mixed evidence regarding which correction formats are most effective, and many studies confound correction format and provision of details. ...
... While fact-checking has received growing attention as an important tool to combat misinformation, its effectiveness has often been debated (see Walter and Tukachinsky 2020). For example, some scholars raised concerns over backfire effects whereby an evidence-based correction drives people to strengthen their existing misconceptions (Lewandowsky et al. 2012), but more recent studies suggest that such backfire effects are rarely found (Ecker, Lewandowsky, and Chadwick 2020;Wood and Porter 2019). Also, while some studies found that corrections have a moderate-level effect on reducing misinformation-related beliefs (see Walter and Murphy 2018), other studies revealed little impact of fact-checking. ...
... COVID-19 vaccines) it may be met with resistance. Therefore, for false product disinformation that has received repeated exposure (Resnick, 2017), policymakers and governments should ensure that offenders use repeated retractions to correct the falsehood (Ecker et al., 2020; Lewandowsky et al., 2012). However, these retractions must (1) not oversaturate the messaging, and (2) come from the offender's own social media account. ...
... The motivation for RQ3 is to establish a basis for an interface design that accounts for differences in people's prior beliefs [26,69,73]. RQ4 aims to see the impact of available retraction information. ...
... O'Rear and Radvansky [61] found that retracted information continues to be used even after its retraction, effectively creating misinformation after the retraction. Notably, Ecker et al. [26] suggest that repeating novel misinformation in corrections does not strengthen the misinformation, making it safe to do so. Work on retracted political news [26,61,78] involves very different considerations from those in the science publication process, with different implications for credibility assessment. ...
Preprint
Full-text available
For many people, social media is an important way to consume news on important topics like health. Unfortunately, some influential health news is misinformation because it is based on retracted scientific work. Ours is the first work to explore how people can understand this form of misinformation and how an augmented social media interface can enable them to make use of information about retraction. We report a between subjects think-aloud study with 44 participants, where the experimental group used our augmented interface. Our results indicate that this helped them consider retraction when judging the credibility of news. Our key contributions are foundational insights for tackling the problem, revealing the interplay between people's understanding of scientific retraction, their prior beliefs about a topic, and the way they use a social media interface that provides access to retraction information.
... Given the important role that eyewitnesses play in criminal investigations, it is crucial to understand how an eyewitness' memory may be influenced by receiving the same piece of misinformation multiple times and through more than one source (e.g., from multiple people). Although some reports have shown that repeated exposure to misinformation can exacerbate its influence (Mitchell and Zaragoza, 1996;Walther et al., 2002;Ecker et al., 2011;Bright-Paul and Jarrold, 2012;Schwarz et al., 2016;Ecker et al., 2020), very little research has independently examined the effects of repetition and source variability on eyewitness suggestibility (Mitchell and Zaragoza, 1996;Foster et al., 2012). ...
... Several researchers have found that including the details of the misinformation in a retraction can have a backfire effect, such that people often falsely remember the corrected information as true (Skurnik et al., 2005; Nyhan and Reifler, 2010; Peter and Koch, 2016). However, more recent research showed that retractions that included the misinformation were more effective at reducing the continued influence effect than retractions that did not mention it (Ecker et al., 2017, 2020). A possible explanation for these discrepant findings is that the timing of the correction matters. ...
... A possible explanation for these discrepant findings is that the timing of the correction matters. In studies that did not demonstrate a backfire effect, participants read statements or news articles and received a correction (with or without a reminder of the fake news) after a delay (Ecker et al., 2017, 2020). In contrast, in the studies that demonstrated a backfire effect, participants usually read statements or news articles with a simultaneous truth verification. ...
Article
Full-text available
Considerable evidence has shown that repeating the same misinformation increases its influence (i.e., repetition effects). However, very little research has examined whether having multiple witnesses present misinformation relative to one witness (i.e., source variability) increases the influence of misinformation. In two experiments, we orthogonally manipulated repetition and source variability. Experiment 1 used written interview transcripts to deliver misinformation and showed that repetition increased eyewitness suggestibility, but source variability did not. In Experiment 2, we increased source saliency by delivering the misinformation to participants via videos instead of written interviews, such that each witness was visibly and audibly distinct. Despite this stronger manipulation, there was no effect of source variability in Experiment 2. In addition, we reported a meta-analysis (k = 19) for the repeated misinformation effect and a small-scale meta-analysis (k = 8) for the source variability effect. Results from these meta-analyses were consistent with the results of our individual experiments. Altogether, our results suggest that participants respond based on retrieval fluency rather than source-specifying information.