Article
PDF Available

Lie for a Dime: When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors

Authors: Jesse Chandler and Gabriele Paolacci

Abstract

The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that a substantial number of participants misrepresent theoretically relevant characteristics (e.g., demographics, product ownership) to meet eligibility criteria that are explicit in the studies, inferred from a previous exclusion from the study, or inferred from previous experiences with similar studies. When recruiting rare populations, a large proportion of participants can be impostors. We provide recommendations for ensuring that ineligible participants are excluded, applicable to a wide variety of data collection efforts that rely on self-report.
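One pattern consistent with these recommendations (and with the remedies described in related work excerpted later on this page) is to keep the critical screening item indistinguishable from filler items and to decide eligibility server-side, so respondents cannot tell which answer grants access to the paid study. The Python sketch below is illustrative only; the items, answers, and eligibility rule are hypothetical.

```python
import random

# Hypothetical screener: the critical item is mixed among fillers, so
# respondents cannot tell which answer grants access to the paid study.
FILLER_ITEMS = [
    "Do you own a bicycle?",
    "Do you own a dishwasher?",
    "Do you own a landline phone?",
]
CRITICAL_ITEM = "Do you own a dog?"   # illustrative eligibility criterion
ELIGIBLE_ANSWER = "yes"

def build_screener():
    """Return screener items in random order, hiding which one matters."""
    items = FILLER_ITEMS + [CRITICAL_ITEM]
    random.shuffle(items)
    return items

def is_eligible(responses):
    """Decide eligibility server-side; the respondent only ever sees a
    generic 'thank you' message, never the screening rule itself."""
    return responses.get(CRITICAL_ITEM, "").strip().lower() == ELIGIBLE_ANSWER

if __name__ == "__main__":
    print(build_screener())
    print(is_eligible({CRITICAL_ITEM: "Yes", "Do you own a bicycle?": "no"}))
```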
... One of the less articulated issues of online recruitment and data collection is participant fraud or scam behaviour. The term “imposter participants” has been proposed to characterise individuals who, for the purpose of participating in research, provide false identities and experiences [7]. While participant fraud has been commonly observed in online surveys [7][8][9][10], its occurrence in online interviews is a less common, although rapidly emerging, phenomenon [11][12][13][14]. ...
... The term “imposter participants” has been proposed to characterise individuals who, for the purpose of participating in research, provide false identities and experiences [7]. While participant fraud has been commonly observed in online surveys [7][8][9][10], its occurrence in online interviews is a less common, although rapidly emerging, phenomenon [11][12][13][14]. An increase in participant fraud has also been noted in online focus group discussions by qualitative market researchers, but has not yet been reported in the health services research literature [15,16]. ...
... Our experience and the suggested indicators of fraudulent participation align with findings from other recent studies using qualitative methods. These indicators include a high volume of emails sent in a short time and unusual timing of emails [13,14], similarities in content and email address format [12], a preference for not turning on the camera [11], vague and inconsistent participant responses as well as signs of confusion [10,21], and potential financial interests [7]. When researchers rely on participant self-reports to determine eligibility, the quality and validity of the collected data depend upon the honesty of the participants [7]. ...
Article
Full-text available
Background: The growth in online qualitative research and data collection provides several advantages for health service researchers and participants, including convenience and extended geographic reach. However, these online processes can also present unexpected challenges, including instances of participant fraud or scam behaviour. This study describes an incident of participant fraud identified during online focus group discussions and interviews for a PhD health services research project on paediatric neurodevelopmental care. Methods: We aimed to recruit carers of Australian children with neurodevelopmental disorders. Potential participants were recruited via a publicly available social media advert on Facebook offering $50 AUD compensation. Those who expressed interest via email (n = 254) were sent a pre-interview Qualtrics survey to complete. We identified imposters at an early stage via inconsistencies between their self-reported geographical location and the location captured by the survey, as well as by recognising suspicious actions before, during, and after focus group discussions and interviews. Results: Interest in participation was unexpectedly high. We determined that all potential participants were likely imposters, posing as multiple individuals and using different IP addresses across Nigeria, Australia, and the United States. In doing so, we were able to characterise several “red flags” for identifying imposter participants, particularly those posing as multiple individuals. These comprise a combination of factors including large volumes and strange timings of email responses, unlikely demographic characteristics, short or vague interviews, a preference for nonvisual participation, fixation on monetary compensation, and inconsistencies in reported geographical location. Additionally, we propose several strategies to combat this issue, such as providing proof of location or eligibility during recruitment and data collection, examining email and consent form patterns, and comparing demographic data with regional statistics. Conclusions: The emergent risk of imposter participants is an important consideration for those seeking to conduct health services research using qualitative approaches in online environments. Methodological design choices intended to improve equity and access for the target population may have an unintended consequence of improving access for fraudulent actors unless appropriate risk mitigation strategies are also employed. Lessons learned from this experience are likely to be valuable for novice health service researchers involved in online focus group discussions and interviews.
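The location-consistency and shared-IP checks described above can be automated against a survey export. The pandas sketch below is a minimal illustration; the file name and column names (self-reported country, IP-derived country recorded by the survey platform, and IP address) are hypothetical.

```python
import pandas as pd

# Hypothetical export from the pre-interview survey platform.
df = pd.read_csv("prescreen_responses.csv")

# Flag respondents whose self-reported country disagrees with the
# country the survey platform derived from their IP address.
df["location_mismatch"] = (
    df["self_reported_country"].str.strip().str.lower()
    != df["ip_country"].str.strip().str.lower()
)

# Flag IP addresses shared by more than one "different" respondent,
# a common sign of one person posing as multiple individuals.
df["shared_ip"] = df.duplicated(subset="ip_address", keep=False)

suspicious = df[df["location_mismatch"] | df["shared_ip"]]
print(f"{len(suspicious)} of {len(df)} responses flagged for manual review")
```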
... Following the surge in use of online platforms, participant misrepresentation, fraud, and deception have become serious potential concerns for virtual studies, undermining research integrity (Bybee et al., 2022; Glazer et al., 2021; Jones et al., 2021; Teitcher et al., 2015). There are increasing reports of “fraudsters,” “scammers,” “imposters,” and “bots” compromising quantitative research (Chandler & Paolacci, 2017). The term “imposter participants” was first introduced in the context of online survey studies to describe dishonest individuals who fraudulently gain access to research studies. ...
... The term “imposter participants” was first introduced in the context of online survey studies to describe dishonest individuals who fraudulently gain access to research studies. Individuals either completely fabricate or misrepresent their identities, or exaggerate their experiences, to participate in research, particularly when compensation is high (Chandler & Paolacci, 2017). Similarly, also in the context of surveys, Teitcher et al. (2015) describe “fraudsters” as ineligible individuals who participate in research studies solely for compensation. ...
... Three-quarters of all participants reported concealing health information from researchers to avoid exclusion from the study (Devine et al., 2013). Deception is easier, and perhaps more successful, through certain methodologies (e.g., online surveys) compared to others, with the likelihood of deception increasing with larger monetary incentives (Chandler & Paolacci, 2017; Fernandez Lynch et al., 2019). It has been estimated that some form of participant deception may occur in anywhere from a small minority to a large majority of studies across various research methods (Devine et al., 2013; Fernandez Lynch et al., 2019). ...
Article
Background: The COVID-19 pandemic has accelerated and amplified the use of virtual research methods. While online research has several advantages, it also provides greater opportunity for individuals to misrepresent their identities to fraudulently participate in research for financial gain. Participant deception and fraud have become a growing concern for virtual research. Reports of deception and preventative strategies have been discussed within online quantitative research, particularly survey studies. However, there is a dearth of literature surrounding these issues in qualitative studies, particularly within substance use research. Results: In this commentary, we detail an unforeseen case study of several individuals who appeared to deliberately misrepresent their identities and information during participation in a virtual synchronous qualitative substance use study. Through our experiences, we offer strategies to detect and prevent participant deception and fraud, as well as challenges to consider when implementing these approaches. Conclusions: Without general awareness and protective measures, the integrity of virtual research methods remains vulnerable to inaccuracy. As online research continues to expand, it is essential to proactively design innovative solutions to safeguard future studies against increasingly sophisticated deception and fraud.
... Defining Fraudulence. Terms describing participants falsely representing themselves and their associated behaviors have included misrepresentation, fraud, inauthenticity, feigning, deception, imposter, and even liar, speaking to some of the challenges inherent in adequately conceptualizing not only the behavior, but also the underlying motivations or intentions of individuals who engage in research under false pretenses (Chandler & Paolacci, 2017; Davies et al., 2023; Drysdale et al., 2023; Glazer et al., 2021; Owens, 2022; Ridge et al., 2023; Roehl & Harland, 2022). For example, the term “imposter participant” has been used to highlight extreme presentations of fraudulent research participation, wherein an individual may completely fake or distort their identity and/or experiences to enroll in a study (Drysdale et al., 2023; Roehl & Harland, 2022). ...
... For example, the term “imposter participant” has been used to highlight extreme presentations of fraudulent research participation, wherein an individual may completely fake or distort their identity and/or experiences to enroll in a study (Drysdale et al., 2023; Roehl & Harland, 2022). Explorations of the underlying motives and intentions of individuals who engage in this type of behavior have yielded a range of factors, such as financial gain in the context of incentives, curiosity, or entertainment (Bauermeister et al., 2012; Chandler & Paolacci, 2017; Owens, 2022). Indeed, use of incentives in research has increased over time, and has been linked with higher rates of fraudulent participation in research (Bowen et al., 2008; Glazer et al., 2021; Levi et al., 2022). ...
... Purposeful participant sampling and rigorous design. To enhance response validity and counteract fake participation, researchers can utilize purposeful participant sampling, pre-screening questions, and, more generally, a rigorous study design (Chandler & Paolacci, 2017; Hulland & Miller, 2018; Jones et al., 2015). Researchers can employ targeted recruitment materials and strategically share them on specific closed social media pages or groups relevant to the audience. ...
... A second limitation was that the researcher relied on the participants to provide accurate and honest information for a reliable study. Survey questions should be asked in a way that allows sincere and truthful responses from participants (Chandler & Paolacci, 2017). The questions should not be posed in a way that favors the responses the researcher hopes to receive. ...
Research
Full-text available
Turnover in the food service industry, particularly in the college and university dining sector, contributes significantly to the rising costs associated with consistently replacing employees. Leader behavior can significantly affect an employee's intention to stay with an organization. The purpose of this study was to examine the relationship between perceived servant leadership behaviors of supervisors and employee turnover intentions in the university dining sector of the food service industry. The research questions addressed whether the practice of servant leadership behaviors correlates with employee intention to stay. The study design was nonexperimental and correlational, with purposive convenience sampling. Data were obtained from 50 participants via two surveys. A post hoc analysis was used to determine the study's strength. Multiple regression analysis determined the statistical significance of turnover intentions, which demonstrated a strong and negative correlation linking servant leadership behaviors to turnover intentions. Study findings addressed a gap in the literature regarding how supervisors' perceived servant leadership behaviors relate to employee turnover intentions. Results suggested that adopting servant leadership behaviors in the college and university dining sector of the food service industry could lead to less employee turnover. Keywords: leadership styles, servant leadership, leadership, leadership models, organizational commitment, employee turnover intention, perceived organizational support, perceived supervisor support, employee turnover, job satisfaction, food services, leadership in food service, campus dining operations, employee engagement and job satisfaction, effect of leadership styles on job hopping, servant leadership, food service industry, leadership models
... To encourage participants to respond truthfully to the filter question, we adopted the procedure of Chandler and Paolacci (2017). ...
Article
Full-text available
The central questions answered in this research are, “For what, and when, do consumers prefer to spend loyalty points over money?” We use construal level theory (CLT) to theorize that loyalty (or reward) points are perceived abstractly while money is perceived concretely, and this impacts spending preferences. We find that consumers prefer to spend loyalty points (vs. money) on high desirability‐low feasibility (vs. low desirability‐high feasibility) consumption items. The same pattern also persists when the items that vary on desirability and feasibility are equivalently priced. Second, we show that the construal‐level matching phenomenon influences temporal decisions such that consumers prefer to spend points (vs. money) for items that are available later (vs. now). Third, the moderating effect of category type (experiential vs. material) is reported. Finally, we demonstrate two managerial applications that are reported in the Web Appendix. We show that managers may influence how consumers spend loyalty points (a) by altering the concreteness of the decision context, and (b) by manipulating the nature of loyalty points.
... The sample was contacted through Amazon MTurk, with participants selected from the USA (N = 125) and India (N = 120). To increase the quality of the data, we used four different measures for data security: location, database master qualification, at least 100 HITs approved, and IP address control (Buhrmester et al., 2011; Chandler and Paolacci, 2017). Of the 300 responses received, 34 cases presented missing values on multiple variables (12%) and were eliminated (Hair et al., 2010). ...
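The worker-level controls mentioned in this excerpt (location restrictions and a minimum number of approved HITs) map onto MTurk qualification requirements. The boto3 sketch below is one way such requirements can be expressed, not the cited study's actual setup: the HIT metadata and survey URL are placeholders, while the two QualificationTypeIds are MTurk's built-in system qualifications for worker locale and number of approved HITs.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # Built-in locale qualification: restrict to workers in the US or India
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "In",
        "LocaleValues": [{"Country": "US"}, {"Country": "IN"}],
    },
    {   # Built-in "number of HITs approved" qualification: at least 100
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [100],
    },
    # A Masters requirement can be added the same way using its system
    # qualification ID (different in sandbox and production).
]

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short consumer survey (placeholder)",
    Description="Answer a brief questionnaire about online shopping.",
    Reward="0.50",
    MaxAssignments=300,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=external_question,
    QualificationRequirements=qualification_requirements,
)
print(hit["HIT"]["HITId"])
```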
Article
Purpose: This study aims to understand the effect of cultural dimension (individualism/collectivism) on promotional rewards (social or economic) resulting in incentivizing consumers to engage in electronic word-of-mouth (eWOM), further impacting their repurchase intentions. Design/methodology/approach: In Study 1, a 2 (culture: individualism vs collectivism) × 2 (promotional rewards: social vs economic) between-subjects design was used. Structural equation modeling was used to test the hypotheses. In Study 2, culture was measured instead of just being manipulated. The authors used regression analysis in this study. Findings: Owing to the characteristics of collectivistic individuals, consumers in collectivistic cultures were more likely to respond to social rewards as an incentive to engage in eWOM. However, consumers in individualistic cultures were more motivated to engage in eWOM when economic rewards were offered. Originality/value: Despite the global nature of eWOM, little research has explored the effects of cultural traits on consumer response to amplified eWOM strategy. Additionally, though many organizations now offer various promotional incentives to reviewers, little research has explored the effects of promotional offers on a reviewer's subsequent behavior, and no research has explored the relationship between cultural dimensions and current and future response to promotional eWOM rewards.
Article
Full-text available
In recent years, significant progress has been made in developing supervised Machine Learning (ML) systems like Convolutional Neural Networks. However, it’s crucial to recognize that the performance of these systems heavily relies on the quality of labeled training data. To address this, we propose a shift in focus towards developing sustainable methods of acquiring such data instead of solely building new classifiers in the ever-evolving ML field. Specifically, in the geospatial domain, the process of generating training data for ML systems has been largely neglected in research. Traditionally, experts have been burdened with the laborious task of labeling, which is not only time-consuming but also inefficient. In our system for the semantic interpretation of Airborne Laser Scanning point clouds, we break with this convention and completely remove labeling obligations from domain experts who have completed special training in geosciences and instead adopt a hybrid intelligence approach. This involves active and iterative collaboration between the ML model and humans through Active Learning, which identifies the most critical samples justifying manual inspection. Only these samples (typically ≪ 1% of Passive Learning training points) are subject to human annotation. To carry out this annotation, we choose to outsource the task to a large group of non-specialists, referred to as the crowd, which comes with the inherent challenge of guiding those inexperienced annotators (i.e., “short-term employees”) to still produce labels of sufficient quality. However, we acknowledge that attracting enough volunteers for crowdsourcing campaigns can be challenging due to the tedious nature of labeling tasks. To address this, we propose employing paid crowdsourcing and providing monetary incentives to crowdworkers. This approach ensures access to a vast pool of prospective workers through respective platforms, ensuring timely completion of jobs. Effectively, crowdworkers become human processing units in our hybrid intelligence system mirroring the functionality of electronic processing units.
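The uncertainty-driven selection at the core of the Active Learning loop described above can be illustrated compactly. The sketch below is not the authors' pipeline: synthetic features stand in for point-cloud attributes and an oracle array stands in for crowd labels, with the least-confident samples queried for annotation in each iteration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: feature vectors and (normally unknown) true labels.
X = rng.normal(size=(5000, 10))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # "oracle" = crowd answers

labeled = list(rng.choice(len(X), size=50, replace=False))  # small seed set
clf = RandomForestClassifier(n_estimators=100, random_state=0)

for iteration in range(5):
    clf.fit(X[labeled], y_true[labeled])
    proba = clf.predict_proba(X)
    # Least-confident sampling: smallest maximum class probability.
    uncertainty = 1.0 - proba.max(axis=1)
    uncertainty[labeled] = -1.0            # never re-select labeled points
    query = np.argsort(uncertainty)[-25:]  # 25 samples sent to the crowd
    labeled.extend(query.tolist())
    acc = clf.score(X, y_true)             # accuracy over all points, for illustration
    print(f"iteration {iteration}: {len(labeled)} labels, accuracy {acc:.3f}")
```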
Article
Full-text available
Objective: Despite extensive research on the link between pornography exposure and sexual aggression, inconsistent results have hindered consensus on the association. To help resolve inconsistencies and advance the field of knowledge, the present study conducted a latent profile analysis to identify common patterns of pornography exposure and to examine their associations with sexual aggression and its risk factors. Method: A total of 491 men in the United States completed assessments of six pornography exposure profile indicators (i.e., frequency of pornography exposure, duration of typical pornography exposure, and exposure to pictures, sex films, degrading films, and violent films) and six outcome variables (i.e., sexual aggression, rape myth acceptance, hostile masculinity, casual sex, psychopathy, and emotion regulation difficulties). Results: The analysis identified three profiles: “infrequent pornography viewers” (n = 113), “average pornography viewers” (n = 302), and “violent pornography viewers” (n = 76). Compared to the infrequent pornography viewers and average pornography viewers profiles, the violent pornography viewers profile had significantly higher means for each outcome variable (ps < .05). Significant differences between the infrequent pornography viewers and average pornography viewers profiles were found for casual sex and difficulties engaging in goal-directed behavior (ps < .05), but not the other outcome variables. Conclusions: Findings provided further insight into the association between pornography and sexual aggression in a way that cannot be observed using a variable-centered approach.
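Latent profile analysis with continuous indicators is closely related to a Gaussian mixture model, so its core step (choosing the number of profiles by fit indices such as BIC) can be sketched with scikit-learn. This is an illustration on synthetic data, not the authors' analysis, which would typically be run in dedicated LPA software.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic stand-in for six standardized exposure indicators.
X = np.vstack([
    rng.normal(-0.5, 1.0, size=(110, 6)),   # e.g., an "infrequent viewers" cluster
    rng.normal(0.0, 1.0, size=(300, 6)),    # e.g., an "average viewers" cluster
    rng.normal(1.2, 1.0, size=(80, 6)),     # e.g., a "violent-content viewers" cluster
])

# Fit 1- to 6-profile solutions and compare BIC (lower is better).
fits = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    fits[k] = gm.bic(X)
    print(f"{k} profiles: BIC = {fits[k]:.1f}")

best_k = min(fits, key=fits.get)
profiles = GaussianMixture(n_components=best_k, covariance_type="diag",
                           n_init=10, random_state=0).fit_predict(X)
print("selected profiles:", best_k, "sizes:", np.bincount(profiles))
```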
Article
Full-text available
Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners' average diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Photos posted by depressed individuals were more likely to be bluer, grayer, and darker. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These findings suggest new avenues for early screening and detection of mental illness.
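The color features described above (hue, saturation, brightness) are straightforward to extract. A minimal sketch with Pillow and NumPy follows; the file path is hypothetical, and this reproduces only the simplest part of the study's feature set.

```python
import numpy as np
from PIL import Image

def hsv_features(path):
    """Return mean hue, saturation, and brightness (value) of an image,
    each scaled to [0, 1], as simple stand-ins for the color features used."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return {
        "mean_hue": hsv[..., 0].mean(),
        "mean_saturation": hsv[..., 1].mean(),
        "mean_brightness": hsv[..., 2].mean(),
    }

if __name__ == "__main__":
    print(hsv_features("example_post.jpg"))  # hypothetical photo file
```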
Article
Full-text available
The authors find that experimental studies using online samples (e.g., MTurk) often violate the assumption of random assignment, because participant attrition (quitting a study before completing it and getting paid) is not only prevalent, but also varies systematically across experimental conditions. Using standard social psychology paradigms (e.g., ego-depletion, construal level), they observed attrition rates ranging from 30% to 50% (Study 1). The authors show that failing to attend to attrition rates in online panels has grave consequences. By introducing experimental confounds, unattended attrition misled them to draw mind-boggling yet false conclusions: that recalling a few happy events is considerably more effortful than recalling many happy events, and that imagining applying eyeliner leads to weight loss (Study 2). In addition, attrition rate misled them to draw a logical yet false conclusion: that explaining one's view on gun rights decreases progun sentiment (Study 3). The authors offer a partial remedy (Study 4) and call for minimizing and reporting experimental attrition in studies conducted on the Web.
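Checking whether attrition differs across conditions, as this article urges, amounts to a single contingency-table test. The sketch below computes per-condition attrition and a chi-square test of differential attrition; the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical export: one row per participant who started the study.
df = pd.read_csv("experiment_log.csv")   # assumed columns: condition, completed (0/1)

attrition = 1 - df.groupby("condition")["completed"].mean()
print("attrition rate by condition:\n", attrition.round(3))

# Chi-square test: does completion depend on experimental condition?
table = pd.crosstab(df["condition"], df["completed"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```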
Article
Full-text available
In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.
Article
Full-text available
This article assesses the validity of many of the assumptions made about work in the on-demand economy and analyses whether proposals advanced for improving workers' income security are sufficient for remedying current shortcomings. It draws on findings from a survey of crowdworkers conducted in late 2015 on the Amazon Mechanical Turk and Crowdflower platforms on workers' employment patterns, work histories, and financial security. Based on this information, it provides an analysis of crowdworkers' economic dependence on the platform, including the share of workers who depend on crowdwork as their main source of income, as well as their working conditions, the problems they encounter while crowdworking and their overall income security. Based on these findings, the article recommends an alternative way of organizing work that can improve the income security of crowdworkers as well as overall efficiency and productivity of crowdwork.
Article
This tutorial provides evidence that character misrepresentation in survey screeners by Amazon Mechanical Turk Workers (“Turkers”) can substantially and significantly distort research findings. Using five studies, we demonstrate that a large proportion of respondents in paid MTurk studies claim a false identity, ownership, or activity in order to qualify for a study. The extent of misrepresentation can be unacceptably high, and the responses to subsequent questions can have little correspondence to responses from appropriately identified participants. We recommend a number of remedies to deal with the problem, largely involving strategies to take away the economic motive to misrepresent and to make it difficult for Turkers to recognize that a particular response will gain them access to a study. The major short-run solution involves a two-survey process that first asks respondents to identify their characteristics when there is no motive to deceive, and then limits the second survey to those who have passed this screen. The long-run recommendation involves building an ongoing MTurk participant pool (“panel”) that (1) continuously collects information that could be used to classify respondents, and (2) eliminates from the panel those who misrepresent themselves.
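The two-survey remedy described in this tutorial can be scripted: characteristics are collected in a first survey with nothing at stake, and only qualifying workers are invited to the second survey. The pandas sketch below is illustrative; the file names, column names, and eligibility rule are hypothetical, and the resulting worker-ID list would be fed to whatever recruitment tool the researcher uses to restrict access (for example, a custom qualification or a panel platform's include list).

```python
import pandas as pd

# Wave 1: characteristics collected with no eligibility criterion revealed
# and nothing to gain from misreporting.
wave1 = pd.read_csv("wave1_prescreen.csv")   # assumed columns: worker_id, owns_dog, age

# Eligibility is decided offline, after the fact (illustrative rule).
eligible = wave1[(wave1["owns_dog"] == "yes") & (wave1["age"] >= 18)]

# Wave 2: only these worker IDs are invited or granted access.
eligible["worker_id"].to_csv("wave2_invite_list.csv", index=False)
print(f"inviting {len(eligible)} of {len(wave1)} prescreened workers")
```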
Chapter
In this chapter, we discuss the “lab-in-the-field” methodology, which combines elements of both lab and field experiments in using standardized, validated paradigms from the lab in targeting relevant populations in naturalistic settings. We begin by examining how the methodology has been used to test economic models with populations of theoretical interest. Next, we outline how lab-in-the-field studies can be used to complement traditional randomized control trials in collecting covariates to test theoretical predictions and explore behavioral mechanisms. We proceed to discuss how the methodology can be utilized to compare behavior across cultures and contexts, and test for the external validity of results obtained in the lab. The chapter concludes with an overview of lessons on how to use the methodology effectively.
Article
Objective: The successful recruitment and study of cancer survivors within psycho-oncology research can be challenging, time-consuming, and expensive, particularly for key subgroups such as young adult cancer survivors. Online crowdsourcing platforms offer a potential solution that has not yet been investigated with regard to cancer populations. The current study assessed the presence of cancer survivors on Amazon's Mechanical Turk (MTurk) and the feasibility of using MTurk as an efficient, cost-effective, and reliable psycho-oncology recruitment and research platform. Methods: During a <4-month period, cancer survivors living in the United States were recruited on MTurk to complete two assessments, spaced 1 week apart, relating to psychosocial and cancer-related functioning. The reliability and validity of responses were investigated. Results: Within a <4-month period, 464 self-identified cancer survivors on MTurk consented to and completed an online assessment. The vast majority (79.09%) provided reliable and valid study data according to multiple indices. The sample was highly diverse in terms of U.S. geography, socioeconomic status, and cancer type, and reflected a particularly strong presence of distressed and young adult cancer survivors (median age = 36 years). A majority of participants (58.19%) responded to a second survey sent one week later. Conclusions: Online crowdsourcing represents a feasible, efficient, and cost-effective recruitment and research platform for cancer survivors, particularly for young adult cancer survivors and those with significant distress. We discuss remaining challenges and future recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
Article
Online labor markets allow rapid recruitment of large numbers of workers for very low pay. Although online workers are often used as research participants, there is little evidence that they are motivated to make costly choices to forgo wealth or leisure that are often central to addressing accounting research questions. Thus, we investigate the validity of using online workers as a proxy for non-experts when accounting research designs use more demanding tasks than these workers typically complete. Three experiments examine the costly choices of online workers relative to student research participants. We find that online workers are at least as willing as students to make costly choices, even at significantly lower wages. We also find that online workers are sensitive to performance-based wages, which are just as effective in inducing high effort as high fixed wages. We discuss implications of our results for conducting accounting research with online workers. Data Availability: Contact the authors.