Fig 2. Comparison of brown vs white envelopes to increase response rates.

Source publication
Article
Full-text available
Postal questionnaires are widely used in health research to provide measurable outcomes in areas such as quality of life. Participants who fail to return postal questionnaires can introduce non-response bias. Previous studies within populations over the age of 65 years have shown that response rates amongst older people can be 60% or less. The curr...

Citations

... In the future, it would be worthwhile to consider a study within a trial (SWAT) [47] to prospectively assess the impact of such interactions on recruitment rates. Some of the published SWATs to date exploring other recruitment strategies [48], such as use of optimised information sheets [49], advertising patient and public involvement (PPI) [50] to potential trial participants, pre-notification of trial detail [51], use of Post-It® Notes (3M, Saint Paul, MN, USA) in older patients [52], and impact of envelope colour [53], did not find significant impact on recruitment rates. ...
Article
Full-text available
Randomised controlled trials are challenging to deliver. There is a constant need to review and refine recruitment and implementation strategies if they are to be completed on time and within budget. We present the strategies adopted in the United Kingdom Collaborative Trial of Ovarian Cancer Screening, one of the largest individually randomised controlled trials in the world. The trial recruited over 202,000 women (2001–5) and delivered over 670,000 annual screens (2001–11) and over 3 million women-years of follow-up (2001–20). Key to the successful completion were the involvement of senior investigators in the day-to-day running of the trial, proactive trial management, and a willingness to innovate and use technology. Our underlying ethos was that trial participants should always be at the centre of all our processes. We ensured that they were able to contact either the site or the coordinating centre teams for clarifications about their results, for follow-up and for rescheduling of appointments. To facilitate this, we shared personal identifiers (with consent) with both teams and had dedicated reception staff at both site and coordinating centre.

Key aspects were a comprehensive online trial management system which included an electronic data capture system (resulting in an almost paperless trial), biobanking, monitoring and project management modules. The automation of algorithms (to ascertain eligibility and classify results and ensuing actions) and processes (scheduling of appointments, printing of letters, etc.) ensured the protocol was closely followed and timelines were met. Significant engagement with participants ensured retention and low rates of complaints. Our solutions to the design, conduct and analysis issues we faced are highly relevant, given the renewed focus on trials for early detection of cancer.

Future work: There is a pressing need to increase the evidence base to support decision making about all aspects of trial methodology.

Trial registration: ISRCTN-22488978; ClinicalTrials.gov-NCT00058032.

Funding: This article presents independent research funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme as award number 16/46/01. The long-term follow-up of UKCTOCS (2015–20) was supported by the NIHR (HTA grant 16/46/01), Cancer Research UK, and The Eve Appeal. UKCTOCS (2001–14) was funded by the MRC (G9901012 and G0801228), Cancer Research UK (C1479/A2884), and the UK Department of Health, with additional support from The Eve Appeal. Researchers at UCL were supported by the NIHR UCL Hospitals Biomedical Research Centre and by MRC Clinical Trials Unit at UCL core funding (MC_UU_00004/09, MC_UU_00004/08, MC_UU_00004/07). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the UK Department of Health and Social Care.
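The abstract does not publish the trial's actual eligibility or result-classification algorithms. The sketch below only illustrates, under invented assumptions, the general pattern it describes: encode the protocol rule once so that classification and the ensuing action are applied identically to every participant. The ScreenResult type, the marker_level field and the 35.0 threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ScreenResult:
    participant_id: str
    marker_level: float  # hypothetical screening marker value

def classify(result: ScreenResult, threshold: float = 35.0) -> str:
    """Map a screen result to a protocol-defined next action (illustrative rule only)."""
    if result.marker_level >= threshold:
        return "recall_for_repeat_screen"
    return "schedule_annual_screen"

# A participant below the (made-up) threshold is routed to routine annual screening
print(classify(ScreenResult("UKC-0001", 12.4)))  # -> schedule_annual_screen
```

Centralising a rule like this is what lets scheduling and letter printing be automated downstream, as the abstract describes.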
Article
Full-text available
Background: Self‐administered questionnaires are widely used to collect data in epidemiological research, but non‐response reduces the effective sample size and can introduce bias. Finding ways to increase response to postal and electronic questionnaires would improve the quality of epidemiological research.

Objectives: To identify effective strategies to increase response to postal and electronic questionnaires.

Search methods: We searched 14 electronic databases up to December 2021 and manually searched the reference lists of relevant trials and reviews. We contacted the authors of all trials or reviews to ask about unpublished trials; where necessary, we also contacted authors to confirm the methods of allocation used and to clarify results presented.

Selection criteria: Randomised trials of methods to increase response to postal or electronic questionnaires. We assessed the eligibility of each trial using pre‐defined criteria.

Data collection and analysis: We extracted data on the trial participants, the intervention, the number randomised to intervention and comparison groups, and allocation concealment. For each strategy, we estimated pooled odds ratios (OR) and 95% confidence intervals (CI) in a random‐effects model. We assessed evidence for selection bias using Egger's weighted regression method, Begg's rank correlation test and a funnel plot. We assessed heterogeneity amongst trial odds ratios using a Chi² test and quantified the degree of inconsistency between trial results using the I² statistic.

Main results, postal: We found 670 eligible trials that evaluated over 100 different strategies for increasing response to postal questionnaires. We found substantial heterogeneity amongst trial results in half of the strategies. The odds of response almost doubled when: using monetary incentives (OR 1.86; 95% CI 1.73 to 1.99; heterogeneity I² = 85%); using a telephone reminder (OR 1.96; 95% CI 1.03 to 3.74); and when clinical outcome questions were placed last (OR 2.05; 95% CI 1.00 to 4.24). The odds of response increased by about half when: using a shorter questionnaire (OR 1.58; 95% CI 1.40 to 1.78); contacting participants before sending questionnaires (OR 1.36; 95% CI 1.23 to 1.51; I² = 87%); incentives were given with questionnaires (i.e. unconditional) rather than only after participants had returned their questionnaire (i.e. conditional on response) (OR 1.53; 95% CI 1.35 to 1.74); using personalised SMS reminders (OR 1.53; 95% CI 0.97 to 2.42); using a special (recorded) delivery service (OR 1.68; 95% CI 1.36 to 2.08; I² = 87%); using electronic reminders (OR 1.60; 95% CI 1.10 to 2.33); using intensive follow‐up (OR 1.69; 95% CI 0.93 to 3.06); using a more interesting/salient questionnaire (OR 1.73; 95% CI 1.12 to 2.66); and when mentioning an obligation to respond (OR 1.61; 95% CI 1.16 to 2.22). The odds of response also increased with: non‐monetary incentives (OR 1.16; 95% CI 1.11 to 1.21; I² = 80%); a larger monetary incentive (OR 1.24; 95% CI 1.15 to 1.33); a larger non‐monetary incentive (OR 1.15; 95% CI 1.00 to 1.33); when a pen was included (OR 1.44; 95% CI 1.38 to 1.50); using personalised materials (OR 1.15; 95% CI 1.09 to 1.21; I² = 57%); using a single‐sided rather than a double‐sided questionnaire (OR 1.13; 95% CI 1.02 to 1.25); using stamped return envelopes rather than franked return envelopes (OR 1.23; 95% CI 1.13 to 1.33; I² = 69%); assuring confidentiality (OR 1.33; 95% CI 1.24 to 1.42); using first‐class outward mailing (OR 1.11; 95% CI 1.02 to 1.21); and when questionnaires originated from a university (OR 1.32; 95% CI 1.13 to 1.54). The odds of response were reduced when the questionnaire included questions of a sensitive nature (OR 0.94; 95% CI 0.88 to 1.00).

Main results, electronic: We found 88 eligible trials that evaluated over 30 different ways of increasing response to electronic questionnaires. We found substantial heterogeneity amongst trial results in half of the strategies. The odds of response tripled when: using a brief letter rather than a detailed letter (OR 3.26; 95% CI 1.79 to 5.94); and when a picture was included in an email (OR 3.05; 95% CI 1.84 to 5.06; I² = 19%). The odds of response almost doubled when: using monetary incentives (OR 1.88; 95% CI 1.31 to 2.71; I² = 79%); and using a more interesting topic (OR 1.85; 95% CI 1.52 to 2.26). The odds of response increased by half when: using non‐monetary incentives (OR 1.60; 95% CI 1.25 to 2.05); using shorter e‐questionnaires (OR 1.51; 95% CI 1.06 to 2.16; I² = 94%); and using a more interesting e‐questionnaire (OR 1.85; 95% CI 1.52 to 2.26). The odds of response increased by a third when: offering survey results as an incentive (OR 1.36; 95% CI 1.16 to 1.59); using a white background (OR 1.31; 95% CI 1.10 to 1.56); and when stressing the benefits to society of response (OR 1.38; 95% CI 1.07 to 1.78; I² = 41%). The odds of response also increased with: personalised e‐questionnaires (OR 1.24; 95% CI 1.17 to 1.32; I² = 41%); using a simple header (OR 1.23; 95% CI 1.03 to 1.48); giving a deadline (OR 1.18; 95% CI 1.03 to 1.34); and by giving a longer time estimate for completion (OR 1.25; 95% CI 0.96 to 1.64). The odds of response were reduced when: "Survey" was mentioned in the e‐mail subject (OR 0.81; 95% CI 0.67 to 0.97); when the email or the e‐questionnaire was from a male investigator, or it included a male signature (OR 0.55; 95% CI 0.38 to 0.80); and by using university sponsorship (OR 0.84; 95% CI 0.69 to 1.01). The odds of response using a postal questionnaire were over twice those using an e‐questionnaire (OR 2.33; 95% CI 2.25 to 2.42; I² = 98%). Response also increased when: providing a choice of response mode (electronic or postal) rather than electronic only (OR 1.76; 95% CI 1.67 to 1.85; I² = 97%); and when administering the e‐questionnaire by computer rather than by smartphone (OR 1.62; 95% CI 1.36 to 1.94).

Authors' conclusions: Researchers using postal and electronic questionnaires can increase response using the strategies shown to be effective in this Cochrane review.
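As a minimal, self-contained sketch of the pooling this abstract describes (Cochrane analyses are typically run in RevMan; this only mirrors the arithmetic), the following computes a DerSimonian-Laird random-effects pooled odds ratio with a 95% CI and the I² inconsistency statistic. The three 2x2 trial counts are invented purely for illustration and do not come from the review.

```python
import math

# Hypothetical per-trial counts: (responders_int, n_int, responders_ctrl, n_ctrl)
trials = [(120, 300, 95, 300), (80, 200, 70, 200), (45, 150, 30, 150)]

log_ors, variances = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                              # non-responders per arm
    log_ors.append(math.log((a * d) / (b * c)))        # log odds ratio
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)    # variance of the log OR

# Fixed-effect weights give Cochran's Q, from which I^2 is derived
w = [1 / v for v in variances]
mu_fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
q = sum(wi * (y - mu_fixed) ** 2 for wi, y in zip(w, log_ors))
df = len(trials) - 1
i_sq = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

# DerSimonian-Laird between-trial variance tau^2, then random-effects pooling
c_dl = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_dl)
w_re = [1 / (v + tau2) for v in variances]
mu = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

print(f"Pooled OR {math.exp(mu):.2f} "
      f"(95% CI {math.exp(mu - 1.96 * se):.2f} to {math.exp(mu + 1.96 * se):.2f}); "
      f"I^2 = {i_sq:.0f}%")
```

When tau² is zero the random-effects weights reduce to the fixed-effect weights, which is why the two models agree in the absence of between-trial heterogeneity.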
Article
Background: Poor retention of participants in randomised trials can lead to missing outcome data, which can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to improve retention but few have been formally evaluated.

Objectives: To quantify the effect of strategies to improve retention of participants in randomised trials and to investigate whether the effect varied by trial setting.

Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Scopus, PsycINFO, CINAHL and the Web of Science Core Collection (SCI-expanded, SSCI, CPSI-S, CPCI-SSH and ESCI), either directly with a specified search strategy or indirectly through the ORRCA database. We also searched the SWAT repository to identify ongoing or recently completed retention trials. We did our most recent searches in January 2020.

Selection criteria: We included eligible randomised or quasi-randomised trials of evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance.

Data collection and analysis: We extracted data on: the retention strategy being evaluated; location of study; host trial setting; method of randomisation; and numbers and proportions in each intervention and comparator group. We used a risk difference (RD) and 95% confidence interval (CI) to estimate the effectiveness of the strategies to improve retention. We assessed heterogeneity between trials. We applied GRADE to determine the certainty of the evidence within each comparison.

Main results: We identified 70 eligible papers that reported data from 81 retention trials. We included 69 studies with more than 100,000 participants in the final meta-analyses; 67 of these evaluated interventions aimed at trial participants and two evaluated interventions aimed at trial staff involved in retention. All studies were in health care and most aimed to improve postal questionnaire response. Interventions were categorised into broad comparison groups: data collection; participants; sites and site staff; central study management; and study design. These intervention groups consisted of 52 comparisons, none of which was supported by high-certainty evidence as determined by GRADE assessment. There were four comparisons presenting moderate-certainty evidence: three supporting retention (self-sampling kits, a monetary reward together with a reminder or pre-notification, and giving a pen at recruitment) and one reducing retention (inclusion of a diary with usual follow-up compared to usual follow-up alone). Of the remaining studies, 20 presented GRADE low-certainty evidence and 28 presented very low-certainty evidence. Our findings provide a priority list for future replication studies, especially with regard to comparisons that currently rely on a single study.

Authors' conclusions: Most of the interventions we identified aimed to improve retention in the form of postal questionnaire response. There were few evaluations of ways to improve participants returning to trial sites for trial follow-up. None of the comparisons is supported by high-certainty evidence. Comparisons in the review where the evidence certainty could be improved with the addition of well-done studies should be the focus of future evaluations.
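This review's effect measure is the risk difference with a 95% CI. As a minimal sketch under the usual Wald approximation (the counts below are made up, loosely echoing the pen-at-recruitment comparison, and are not data from the review), a single trial's RD could be computed as:

```python
import math

def risk_difference(r_int: int, n_int: int, r_ctrl: int, n_ctrl: int):
    """Risk difference between two arms with a Wald-type 95% CI."""
    p1, p2 = r_int / n_int, r_ctrl / n_ctrl   # retention proportions per arm
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_int + p2 * (1 - p2) / n_ctrl)
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Hypothetical trial: 170/200 retained with the intervention vs 150/200 without
rd, lo, hi = risk_difference(170, 200, 150, 200)
print(f"RD = {rd:.3f} (95% CI {lo:.3f} to {hi:.3f})")  # RD = 0.100 (0.022 to 0.178)
```

An RD of 0.10 reads directly as ten more participants retained per hundred randomised, which is why reviews of retention strategies often prefer it to an odds ratio.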