
Authors: Nikolaus Jackob & Thomas Zerback

Improving Quality by Lowering Non-Response – A Guideline for Online Surveys

Paper presented at the WAPOR Seminar “Quality Criteria in Survey Research VI”, Cadenabbia, Italy, June 29 – July 1, 2006.
1. Introduction
Surveying persons via Internet technology is often considered a new trend in empirical research. Compared to classical modes of investigation, the online survey is indeed a relatively novel method. However, it is not a completely new method – the first online surveys[1] were conducted almost 25 years ago. Because of the rapid growth of the online realm, online surveys have become an increasingly attractive alternative to other survey modes. In 2006, surveys based on Internet technology will presumably account for one third of all surveys worldwide, and it is not unlikely that in the future the majority of all survey research will be done online (Evans & Mathur 2005, p. 196).
Online surveys seem to be a method for everybody: They are cheap, fast, and easy to administer. But as with every other survey mode, poorly designed surveys carry the danger of low quality. And because online surveys are frequently misused, e.g. for advertising or promotion purposes, they still have a comparatively bad reputation today. Although there is increasing evidence that online surveys are a reasonable alternative to other self-administered interview methods, there is persisting skepticism among professionals when confronted with surveys conducted on the World Wide Web. In many cases this skepticism is quite well-founded. The most frequently raised objections are: Online surveys are only suitable for very special populations, since target persons must have access to the Internet and the skills to use it. It is often difficult or even impossible to draw random samples, and low response rates often seriously limit the informative value of online surveys. Additionally, online surveys are frequently mistrusted for privacy and security reasons as well as because of the widespread fear of viruses or spam mails.
[1] In this paper we concentrate on Web surveys as the standard mode of online surveys. Another mode is the mail survey, but Web surveys are preferable because they are easier to program and to administer and offer many more options and greater comfort (e.g. Bandilla & Hauptmanns 1998, pp. 36-39; Schonlau et al. 2003, p. 2).
2. Strengths and Weaknesses of Online Surveys
Like every other survey mode, online surveys have specific strengths and weaknesses. Besides speed, timeliness, and cost efficiency, one important strength is the global reach of the Internet and its continuous expansion, which will in future lead to a broader range of applications and a higher degree of representativeness. Its flexibility is another major strength – online surveys can easily be tailored for a particular purpose. Furthermore, technological innovations allow a higher level of methodological control (e.g. randomization and rotation of items – see the sketch below) and enable more complex experimental designs for methodological research. In addition, compared to other survey modes, questionnaire fill-out can be more comfortable and entertaining. Finally, data analysis is convenient, follow-up contacts are easy to realize, and even large and dispersed samples can be contacted and interviewed without great effort (e.g. Bandilla, Bosnjak & Altendorfer 2001, p. 8; Evans & Mathur 2005, p. 197). On the other hand, there is a considerable number of weaknesses – not only the lack of representativeness for general populations (Bandilla, Bosnjak & Altendorfer 2001; Faas 2003), the typical (selection) bias of Web-accessible populations (Bandilla & Hauptmanns 1998, p. 49; Schonlau et al. 2003, p. 2), or the “junk-mail problem”: technological variations, hardware and software problems, or cheating by multiple voting (Roessing 2004, p. 4) may also contribute to the perception that online surveys are inappropriate for most applications in social research. Furthermore, online surveys have to deal with the same difficulties as other self-administered surveys, such as low response rates or a lack of interview control (Figure 1).
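As an illustration of this kind of methodological control, the following minimal Python sketch shows two common ways to vary item order per respondent – full randomization and systematic rotation. The question identifiers are hypothetical and not taken from the surveys discussed in this paper.

```python
import random

def randomized_item_order(items, seed=None):
    """Return a per-respondent random order of questionnaire items."""
    rng = random.Random(seed)      # separate RNG so a seed can reproduce a given order
    order = list(items)            # copy so the master list stays untouched
    rng.shuffle(order)
    return order

def rotated_item_order(items, respondent_index):
    """Rotate the item list so each item appears equally often in each position."""
    k = respondent_index % len(items)
    return list(items[k:]) + list(items[:k])

# Example: five attitude items presented in a fresh order for each respondent
items = ["att_1", "att_2", "att_3", "att_4", "att_5"]
print(randomized_item_order(items, seed=42))
print(rotated_item_order(items, respondent_index=3))
```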
As with every other survey mode, whether an online survey is the proper method for data collection depends on the specific research objectives and the scientific purpose. Online surveys are best used (1) when wide geographic coverage is sought (e.g. multi-national surveys of Internet-experienced populations such as journalists; see Jackob & Zerback 2005), (2) when good sample lists are accessible, (3) when an explorative or an experimental investigation is intended, or (4) when a multimedia approach is desired (Bandilla & Hauptmanns 1998, p. 36; Evans & Mathur 2005, pp. 208-209). If the target persons are experienced Internet users, online surveys generally seem to be the best survey mode: “For some populations of individuals, such as government agency staff members, faculty members, and university students, Internet usage can be as high as 100 percent” (Forsman & Varedian 2002; see also Bandilla, Bosnjak & Altendorfer 2001, p. 26). Therefore, most published studies have been conducted in closed populations where Internet usage is common and e-mail addresses are readily available. In those cases, many of the problems mentioned above (e.g. coverage problems) are less of a concern (Schonlau et al. 2003, p. 3).
Figure 1: Strengths and Weaknesses of Online surveys (Evans & Mathur 2005, p. 197)
But unfortunately, minimizing some of the other problems does not affect the non-response problem (Crawford, Couper & Lamias 2001). Although there is only limited evidence for lower response rates compared to other survey modes (Lozar Manfreda, Bosnjak, Haas & Vehovar 2005), raising response in online surveys is a major task for every researcher who wants to produce meaningful and high-quality results. Therefore, the lack of an adequate set of methodological principles for obtaining high response rates is a problem (Lozar Manfreda, Batagelj & Vehovar 2002). And it is not entirely clear whether the techniques used to increase response rates in paper and telephone surveys will directly translate to online surveys. Many questions related to non-response still remain unanswered: Most research has focused on coverage and sampling errors – non-response did not take center stage (Porter & Whitcomb 2003, p. 579).[2]
3. Method and Research Questions
Because of the absence of general methodological “guidelines” for obtaining high response rates and therefore producing high-quality online surveys, this paper intends to compile some of the most important methodological problems and to offer a practical guideline for high-quality online surveys. On the basis of (1) existing literature about online survey methodology and (2) findings of surveys conducted at the Johannes Gutenberg-University of Mainz, the impact of various design features and response-heightening strategies on response rates will be discussed. The central questions are:
– What is the best dosage of design features (e.g. graphics or videos)?
– What is the best way of questionnaire presentation: one-page or multiple-page design?
– Which types of questions are adequate?
– Should a progress indicator be used or not?
– How should respondents be contacted?
– Which incentives work?
The presentation concentrates on design-specific causes of non-response, not on psychological causes. There may be many reasons for non-response and many types of non-respondents (e.g. Bosnjak, Tuten & Bandilla 2001) – however, the paper focuses on practical problems occurring during design and their subsequent effects on response rates.
The following suggestions are to a large extent the result of experiences gathered during several seminars on survey methodology and during research projects. They were developed to give advice to students preparing their term papers or master theses. Therefore, although the paper contains some newer research findings from recently conducted studies, experts in online survey methodology will certainly be familiar with many aspects discussed in the following paragraphs.
[2] However, many research papers presented at different conferences in recent years (e.g. the AAPOR annual conference) indicate that the non-response issue is gaining increasing attention (see e.g. www.websm.org, the website of WebSurveyMethodology).
4. Guidelines for Lowering Non-Response
4.1 Survey / Questionnaire Design
An important factor influencing non-response in self-administered interviews is “design”. The term design is usually used to describe three different aspects of surveys: First, it refers to the visual appearance of the questionnaire. In this context online surveys provide a wide range of possibilities: Whereas different typefaces, colors, or stylistic elements can also be varied in conventional paper-and-pencil surveys, online surveys enable survey designers to utilize more complex features like multimedia and other elements. Second, the term design refers to the structural appearance of the questionnaire: It is possible to present the questionnaire on one single page or on multiple pages. In the first case the respondent can see all the questions at once, in the second he has to click on a button to proceed to the next question. Third, the term design refers to the process of survey implementation and conduction. Design here means procedures for recruiting and contacting potential respondents. These procedures have effects on response rates in online surveys as well as in other survey modes. The following sections deal with the effects of the visual and structural appearance of online surveys. The process of survey conduction will be discussed later (section 4.4).
4.1.1 Design Features
“With the graphic and multimedia capabilities of the World Wide Web, the survey researcher has an almost unlimited set of design choices in developing a survey for administration on the Web. As a result, the quality of design seen in Web surveys is highly variable” (Couper, Traugott & Lamias 2001, p. 230). It is for this reason that research has to focus on the effects of different design elements on survey quality, as indicated e.g. by non-response.

The presentation of the survey has an impact on response behavior. Couper et al. distinguish two types of design features in online surveys: The first category is task features. Because of the absence of an interviewer, self-administered surveys have to offer information providing guidance on how to answer the questions – for example visual or verbal guides for proceeding within the form, graphics like photos to clarify the meaning of certain verbal expressions, or elements giving immediate feedback on the respondent's behavior. The second category is stylistic features. They influence the “look and feel” of the questionnaire and therefore also affect response behavior. Questionnaire design can motivate participants to start and to complete the survey and raise their interest and enjoyment (Dillman, Tortora, Conradt & Bowker 1998, p. 1; Couper, Tourangeau & Kenyon 2004, pp. 256-257, 263).[3]

[3] Official logos, for example, help to enhance the survey's credibility and trustworthiness (Dillman 2000, pp. 150-193).
The use of visual elements in online surveys involves risks as well as opportunities: Survey response can be improved or reduced. Visual elements such as images and graphics carry information which can be useful for the respondent. They can improve the appearance of the survey and help to clarify the meaning of words – both effects may raise survey response. But the use of images also entails some dangers: There is always the risk that graphical features (e.g. photographs) convey more information than the researcher is aware of. This may also be true for images which are only used for stylistic reasons. Furthermore, the use of images can lead to technical problems resulting in higher drop-out rates. Research indicates that so-called “fancy” designs are likely to increase non-response: Increased drop-out rates may be caused by increased download times provoked by complex graphical elements (Dillman, Tortora, Conradt & Bowker 1998, p. 4). It is likely that the speed of the participant's Internet connection as well as the resulting costs influence completion rates: If respondents have high-speed Internet access, there is no significant difference in response between surveys including complex graphic elements and surveys without any “fancy” design features (Lozar Manfreda, Batagelj & Vehovar 2002, p. 10). Additionally, certain visual elements are displayed differently by different hardware and software configurations (Dillman 2000, p. 361), so the visual appearance of the survey varies, with consequences for measurement and non-response (e.g. some questions may disappear and thereby provoke item non-response).[4]
Figure 2: Recommended Use of Logotypes - Example
[4] Many problems arise from technical restrictions such as display resolutions, operating systems, and Internet connection speed. Whereas different display resolutions and operating systems can be classified as long-term problems which may persist for the next years, problems caused by connection speed are moderated by the continuing diffusion of technological innovations (Dillman, Tortora, Conradt & Bowker 1998, p. 4).
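Because drop-out risk rises with download time, it can be useful to estimate the page weight of a questionnaire before fielding it. The following minimal sketch uses hypothetical asset sizes and connection speeds (all figures are illustrative assumptions, not taken from this paper) to approximate load times for a “plain” versus a “fancy” design.

```python
# Rough, illustrative estimate of questionnaire download time (all numbers are hypothetical).
ASSETS_PLAIN = {"html_form": 30, "logo.gif": 10}                       # sizes in kilobytes
ASSETS_FANCY = {"html_form": 30, "logo.gif": 10,
                "header_photo.jpg": 180, "background.png": 120}

SPEEDS_KBIT_S = {"56k modem": 56, "ISDN": 128, "DSL": 2048}            # downstream speed in kbit/s

def load_time_seconds(assets_kb, speed_kbit_s):
    """Approximate download time: kilobytes * 8 bits per byte / speed in kbit/s."""
    return sum(assets_kb.values()) * 8 / speed_kbit_s

for name, speed in SPEEDS_KBIT_S.items():
    plain = load_time_seconds(ASSETS_PLAIN, speed)
    fancy = load_time_seconds(ASSETS_FANCY, speed)
    print(f"{name}: plain design {plain:.1f} s, fancy design {fancy:.1f} s")
```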
In summary, it can be concluded that the positive effects of visual design in online surveys have to be weighed against the negative effects: On the one hand, design features can improve the functionality as well as the look and feel of the survey. Thus, design may contribute to higher motivation and subsequently to higher response rates. On the other hand, several technical problems may arise, with effects on survey response. Based on experiences with their own surveys, the authors suggest a careful dosage of graphical elements. It may be recommended to apply, e.g., logotypes to improve the appearance of the online survey and to make it appear official and serious (Figure 2). But it is quite reasonable to keep the saying in mind that “less may be more” (see also Molasso 2005). This conclusion is also supported by recent research which has found that adding (color) pictures to an online survey seems to influence neither respondents' perception of survey length nor their satisfaction with the survey (Couper, Tourangeau & Kenyon 2004; Kaplowitz & Lupi 2004).[5]
4.1.2 Questionnaire Presentation
One of the most important questions in the field of design is whether to present the online questionnaire on one page, to separate it into groups of thematically interconnected questions, or to present every question separately. In the first case (one-page design) the respondent has to use the scroll-bar to navigate within the form. If the survey is divided into separate parts (multiple-page design), the participant has to click on a navigation button to move on to the next section or question. Both options have advantages and disadvantages. The time needed to fill out the questionnaire is closely related to the questionnaire presentation: The longer it takes to complete the survey, the more likely it is to be ended prematurely. Thus, every factor leading to longer completion time (including multiple-page design) may contribute to higher drop-out rates. Recent research indicates that completion time usually is shorter (by up to 30 percent) when utilizing a scrollable one-page questionnaire (Couper, Traugott & Lamias 2001, pp. 244-245; Forsman & Varedian 2002, pp. 6-7; Lozar Manfreda, Batagelj & Vehovar 2002, pp. 8-9).[6]
[5] Additionally, besides the effects on response rates, graphical elements may also affect the content of responses (Couper, Tourangeau & Kenyon 2004).
[6] The main causes of the time extension are of a technical nature: Whereas in one-page surveys only two contacts with the server are necessary – one to download the questionnaire and one to send the answers back – in most multiple-page surveys the answers are submitted after each click to the next question (Forsman & Varedian 2002, p. 7; Aoki & Elasmar 2000, p. 932). This requires more time, especially if the respondent's Internet connection is slow. Furthermore, a higher number of contacts may lead to a higher number of failed connections. A further cause is that a new page provides new information to the respondent which has to be processed before continuing the answering process (Couper, Traugott & Lamias 2001, p. 233).
But this extension of time does not automatically lead to higher non-response rates: There is usually no significant difference between one-page and multiple-page solutions when comparing the number of prematurely abandoned questionnaires. In addition, it seems that item non-response is lower when utilizing a multiple-page presentation. Respondents often tend to omit certain questions when confronted with scrollable one-page questionnaires (Lozar Manfreda, Batagelj & Vehovar 2002, pp. 8-9). Furthermore, scrollable one-page surveys are sometimes regarded as preferable because they are closer to other Web and office applications and therefore easier for the respondent to handle. However, this argument may also be used to justify multiple-page solutions – common operating systems, for example, frequently utilize single windows to interact with computer users.
Another argument is that participants in one-page questionnaires are less likely to lose their orientation: They are always able to check which questions surround a certain question as well as to scroll back and to move forward.[7] Scrollable one-page questionnaires bear resemblance to their paper-and-pencil counterparts and enable respondents to orient themselves quite easily (Dillman 2000, p. 395). However, researchers have to consider the negative effects of this option: Moving forwards and backwards within a questionnaire may in many cases be unproblematic, but oftentimes it is necessary to avoid context/halo effects. In these cases it seems advisable to utilize a multiple-page design because it makes it easier to contain such effects: The option of skipping back can be deactivated. Similar to other computer-assisted survey technology (e.g. CATI), multiple-page solutions allow applying even complex filters and skip patterns in a speedy and easy manner. Finally, multiple-page designs are also the best choice for research focusing on survey drop-out: Answers to questions are submitted to the server immediately after the respondent clicks on the “next” button. Thus, it is easier to locate drop-out positions and to study the reasons for survey drop-out.
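Because each “next” click submits the answers of the current page, a multiple-page survey leaves a per-page log from which drop-out positions can be reconstructed. A minimal sketch of this idea follows; the log format (one record per respondent with the last page submitted) and all identifiers are hypothetical.

```python
from collections import Counter

# Hypothetical server log: respondent id -> last page submitted (None = never started)
last_page_submitted = {
    "r001": 12, "r002": 3, "r003": 12, "r004": 7, "r005": None, "r006": 3,
}
TOTAL_PAGES = 12

def dropout_positions(log, total_pages):
    """Count at which page respondents abandoned the questionnaire."""
    counts = Counter()
    for last_page in log.values():
        if last_page is None:
            counts["never started"] += 1
        elif last_page < total_pages:
            counts[f"dropped out after page {last_page}"] += 1
        else:
            counts["completed"] += 1
    return counts

for position, n in sorted(dropout_positions(last_page_submitted, TOTAL_PAGES).items()):
    print(f"{position}: {n}")
```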
In summary, it can be concluded that the decision about which form of presentation to choose largely depends on the researcher's objectives. Again, researchers have to consider several factors, such as the speed and costs of the Internet connection, the necessity of skip patterns and filters, or context/halo effects. Based on positive experiences with multiple-page solutions – especially in methodological research – the authors recommend a multiple-page design for most purposes because it provides a broader range of features to control and observe the survey process.
[7] This may lead to a modest increase of correlation among questions located beneath each other (Couper, Traugott & Lamias 2001, pp. 244-245).
4.2 Question Types
When it comes to question development and wording, scholars usually state that “(…) there are no specific recommendations for Web questionnaires in comparison to other modes, as long as we adhere to general standards for the correct formulation of questions in survey research” (Lozar Manfreda, Batagelj & Vehovar 2002). This is substantially true, but there are particular characteristics of online surveys which justify some remarks about the application of open-ended questions. Generally, open-ended questions are regarded as causing increased drop-out rates. There seems to be a consensus among scholars: The higher the share of open-ended questions, the higher the drop-out rates (Lozar Manfreda & Vehovar 2002, p. 19). Applying closed-ended questions is likely to shorten fill-out time and therefore to decrease the number of drop-outs (Bosnjak 2000, p. 4; Bosnjak, Tuten & Bandilla 2001, p. 8). It is usually suggested that questions should give a limited number of carefully worded alternatives (e.g. Jackob & Zerback 2005). Easy-to-answer items are regarded as crucial for speed and convenience and contribute to survey success (Forsman & Varedian 2001).
This advice is absolutely correct, but it should not be regarded as a universal law: Practical experience suggests that for specific purposes open-ended questions are a reasonable alternative to closed-ended models. Especially when the information sought is cognitively present, easily accessible, and easy to verbalize, the usage of open-ended questions may be appropriate and does not necessarily lead to a decrease in speed and convenience. Within the limits of the generally accepted standards and restrictions, some types of questions (such as factual or knowledge questions) are in certain cases suitable for online surveys: Questions which are frequently used in consumer surveys – e.g. which type of car a respondent drives or whether he is able to identify a certain brand – can easily be applied in open-ended form; they do not necessarily lead to higher non-response rates.
Studies recently conducted in Mainz indicate the usefulness of open-ended questions in online surveys for specific purposes: In a survey about trust in the media, students were asked to name the medium which published the faked “Hitler diaries” in the 1980s and thereby caused a major scandal in Germany. A first partial sample of students was presented an open-ended question model; a second sample was presented a list of different media. In another survey concerning the same issue, social workers employed at the Catholic diocese of Mainz were asked the same question – but this time, for practical reasons, only the open-ended version was applied (Table 1).
Although these data do not provide meaningful information about survey drop-out, they are not worthless in the present context: They show that respondents, as far as possible, tried to answer the question regardless of question presentation. The pattern of answers for both student samples is very similar – about 60 percent knew that the Hitler diaries were published by the “Stern” – and it did not matter whether the question was presented open-ended or whether a list of magazines was presented. Additionally, the lower number of correct answers in the survey of social workers indicates that the results seem to be valid: The higher level of the students' knowledge compared to the social workers is a consequence of their studies in communication science. In all surveys the open-ended questions seem to have produced meaningful answers. The number of respondents who gave no answer was low and the differences between students and social workers fell within the range of expectation.
Table 1: Open-Ended and Closed-Ended Questions – Response Comparison
Question: Do you know the medium which published the Hitler diaries and by this caused the scandal?

                          Open-Ended Question      Closed-Ended Question    Open-Ended Question
                          Student Survey Sample 1  Student Survey Sample 2  Social Worker Survey
                          (n=112), %               (n=56), %                (n=42), %
Correct Medium            59                       62                       36
Wrong Medium              30                       22                       40
No Answer / Don't know    11                       16                       24
Total                     100                      100                      100

(1) Online survey of students enrolled at the department of communication research in Mainz who had heard of the scandal about the so-called “Hitler diaries” (n=168), online from January 26 to February 6, 2006. (2) Online survey of social workers working for the Catholic diocese of Mainz who had heard of the scandal about the so-called “Hitler diaries” (n=42), online from February 17 to March 8, 2006.
Indeed, this cannot be regarded as striking evidence. But it indicates that open-ended questions are, in specific cases, an option for online surveys. Nevertheless, generally avoiding open-ended questions remains an appropriate strategy for reducing drop-out rates.
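Open-ended answers such as those behind Table 1 have to be coded before they can be compared with the closed-ended version. The following minimal sketch normalizes free-text answers and assigns them to the categories of Table 1; the keyword lists are illustrative assumptions, not the coding scheme actually used in the Mainz surveys.

```python
def code_open_answer(text):
    """Assign a free-text answer to the categories of Table 1
    (illustrative keyword matching, not the coding scheme actually used)."""
    if text is None or not text.strip():
        return "no answer / don't know"
    answer = text.strip().lower()
    if "stern" in answer:                                   # the correct medium
        return "correct medium"
    if any(kw in answer for kw in ("weiß nicht", "keine ahnung", "don't know")):
        return "no answer / don't know"
    return "wrong medium"

# Example answers as they might arrive from the open-ended question field
raw_answers = ["Der Stern", "Spiegel", "", "stern magazin", "keine Ahnung"]
print([code_open_answer(a) for a in raw_answers])
```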
4.3 Progress-Indicator
A frequently applied feature which is regarded as reducing non-response is the so-called progress bar or progress indicator (Figure 3). One major difference between online surveys and other self-administered survey modes is that respondents usually do not know how far they have proceeded in the survey and how much is left to complete. While it is easy for a respondent to estimate the length of a survey by leafing through the pages of a paper questionnaire, online surveys using a multiple-page design without some kind of progress indicator usually do not allow respondents to draw conclusions about their position in the questionnaire and the amount of work left. This lack of knowledge about the number and types of questions may be desirable for other reasons (e.g. in order to avoid halo/context effects), but it raises the danger of de-motivation close to the end and increases the likelihood of drop-outs. Therefore, in order to motivate respondents to complete the questionnaire, it is recommended to use progress indicators (Couper, Traugott & Lamias 2001, p. 232).
Figure 3: Progress-Bar Applied in the Student Survey (Partial Sample 2)
As Noelle-Neumann and Petersen pointed out, a questionnaire has to be polite and to have good manners (Noelle-Neumann & Petersen 2004). Giving information about the length of a survey and about the respondent's progress can be interpreted as a contribution to the questionnaire's politeness. To honestly say how long the survey will (still) take is good advice for online survey designers (Molasso 2005). But as with every other methodological procedure suited to raising response rates, there is no rule without exception: Applying a progress bar, especially if it is graphically elaborate, may lead to additional download time and thus lengthen the time needed for completion – probably with counterproductive effects on the respondents' motivation to complete the questionnaire. Additionally, in the case of extensive surveys containing a large number of questions it seems advisable not to inform the respondent about his individual progress, because it may be particularly de-motivating if the progress indicator shows no or only marginal progress. Therefore, it may be recommended to add a progress bar to short surveys but to omit it in longer ones, because otherwise drop-out rates are likely to increase (e.g. Bosnjak 2002, p. 40).
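In a multiple-page design the progress value is simply the share of pages (or questions) already submitted; following the recommendation above, the indicator can be suppressed for long questionnaires. A minimal sketch is shown below; the threshold of 40 pages is an illustrative assumption, not a value taken from the paper.

```python
def progress_percent(pages_completed, total_pages):
    """Share of the questionnaire already completed, as an integer percentage."""
    return round(100 * pages_completed / total_pages)

def progress_label(pages_completed, total_pages, max_pages_for_bar=40):
    """Return a progress string, or None for questionnaires so long that showing
    (marginal) progress would rather de-motivate; the threshold is illustrative."""
    if total_pages > max_pages_for_bar:
        return None
    return f"{progress_percent(pages_completed, total_pages)} % completed"

print(progress_label(7, 35))   # short questionnaire (e.g. 35 questions) -> "20 % completed"
print(progress_label(7, 57))   # long questionnaire (e.g. 57 questions)  -> None
```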
In several surveys the authors experimented with the application and omission of progress indicators. In a large-scale online survey of real-estate journalists in six European countries a progress bar was omitted because the questionnaire developed for this study was comparatively long (57 questions) and the matter addressed was relatively complex (Jackob & Zerback 2005). Therefore, the authors assumed that a progress indicator would rather de-motivate than motivate respondents. In the student survey mentioned above, the two partial samples were presented different versions of the questionnaire: The version designed for the larger sample contained a progress bar while the other version – for experimental reasons – did not. In both cases relatively short questionnaires were used (35 questions) and the questions were for the most part easy to answer. This applies to the third survey too: The questionnaire presented to the social workers was identical to the students' questionnaire and – as in the larger partial sample – was provided with a progress bar (Table 2).
Table 2: Response Rates in Surveys with / without Progress-Indicator

                        Journalist Survey        Social Worker Survey    Student Survey Sample 1   Student Survey Sample 2
                        (without progress bar)   (with progress bar)     (without progress bar)    (with progress bar)
Response Rate, % (n)    40 (207)                 46 (54)                 74 (133)                  80 (64)

(1) Online survey of European real-estate journalists (N=524), online from May 30 to August 1, 2005; (2) online survey of social workers working for the Catholic diocese of Mainz (N=118), online from February 17 to March 8, 2006; (3) online survey of students enrolled at the department of communication research in Mainz (N=260, Part 1=180, Part 2=80), online from January 26 to February 6, 2006.
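Whether a difference like the 74 versus 80 percent in the student survey is more than chance can be checked with a standard two-proportion z-test; the sketch below recomputes that comparison from the figures in Table 2. This is a generic textbook formula, not a claim about which test the authors actually used.

```python
from math import erf, sqrt

def two_proportion_z(completes_a, n_a, completes_b, n_b):
    """Two-sided two-proportion z-test for a difference in response rates."""
    p_a, p_b = completes_a / n_a, completes_b / n_b
    p_pool = (completes_a + completes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Student survey (Table 2): sample 1 without progress bar, sample 2 with progress bar
z, p = two_proportion_z(133, 180, 64, 80)
print(f"z = {z:.2f}, p = {p:.3f}")
```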
It seems that response behavior differed between the different groups of respondents: Non-response was highest in the journalist survey (without progress indicator) and in the survey of social workers (with progress bar). Motivation to participate and to complete the questionnaire was in both cases low from the outset: Only about 40 percent of the contacted persons completed the questionnaire. Compared to other online surveys this response rate can nevertheless be regarded as satisfactory (Bosnjak 2002, p. 44; Lozar Manfreda & Vehovar 2002). Although both examples do not allow drawing exceedingly far-reaching conclusions, it may be surmised that omitting a progress indicator when using a rather long questionnaire does not have a particularly strong effect on respondents' motivation to complete the questionnaire. And it seems that progress bars in short questionnaires do not have such an effect either. But because both surveys were not conducted within the framework of experimental comparative research, the assumptions above need to be checked experimentally – as was done in the student survey: The results indicate that the response rate was slightly higher when students were presented a questionnaire containing a progress bar (Table 2). Although the difference is not very impressive and the respondents' motivation in this survey was already high from the very start, it seems obvious that adding a progress indicator at least does not have a negative impact on completion rates. This finding corresponds with the findings of other studies: Usually there are small but not significant differences between surveys including progress indicators and their counterparts without this feature (Couper, Traugott & Lamias 2001, p. 243; Forsman & Varedian 2001). Although we cannot present evidence for the thesis that progress indicators should not be attached when using long questionnaires, it is generally recommendable to add such an indicator when presenting a manageable number of questions, even if the positive effects on response rates may be only marginal – it is a question of politeness and may have motivating effects.
4.4 Multiple-Contact Strategy
Developing a strategy for contacting and recruiting target persons is an essential element of survey design. Whereas the use of graphical design elements can have an ambivalent effect on response rates, a sophisticated strategy for contacting and recruiting potential respondents is consensually considered to lower non-response. Methodological research on traditional mail surveys confirmed that, above all, multiple-contact strategies significantly raise response rates (Dillman 2000, p. 149). For online surveys the quantity and quality of contacts with target persons is as crucial as for other self-administered survey modes.
When it comes to the matter of contact frequency (quantity), it is commonly stated that contacting the target persons several times is one of the most effective ways to improve response rates. This applies especially to online surveys of Internet-experienced populations because most of their members check their e-mail account every day. Furthermore, additional contacts (e.g. e-mail reminders) can be realized easily and cause virtually no further costs. Recent research indicates that three contacts have proved to be most effective (Cook, Heath & Thompson 2000, pp. 826-829). It is a commonly accepted finding that response increases with the number of contacts (Kaplowitz, Hadlock & Levine 2004, p. 98). Nevertheless, in traditional mail surveys there seem to be only marginal gains in response rates after a third contact (Heberlein & Baumgartner 1975, pp. 450-451) – recent studies reveal similar results for online surveys (Cook, Heath & Thompson 2000, pp. 826-832; Sheehan 2001). In the journalist survey and in the student survey conducted in Mainz, the first e-mail reminder proved to be most effective, contributing up to 22 percent to the overall completion, whereas later contacts produced lower gains (Figure 4; Figure 5).
Figure 4: Development of Response Rate in the Multi-National Journalist Survey [line chart: response rate (%) from May 30 to August 1, 2005; first follow-up (e-mail) on June 7, 2005: +22%; second follow-up (e-mail) on June 28, 2005: +15%; third follow-up (telephone) on July 5, 2005: +19%]
Different types of contacts have to be distinguished, as they produce different effects on response behavior: First, a pre-notice mail seems to be important for motivating target persons to participate. Offering information about the upcoming survey significantly raises response (Schaefer & Dillman 1998, p. 380; Dillman 2000, pp. 150-151; Kaplowitz, Hadlock & Levine 2004, p. 99). This applies especially to online surveys because pre-notice mails can enhance the respondents' trust in the survey. Due to spam, viruses, and dubious content, many Internet users are deterred from reacting to e-mails of unknown origin. Therefore, identification and clarification by pre-notice mails contributes significantly to survey success. Besides the first contact, the last contact deserves particular attention: Research indicates that response
may be influenced positively if the last contact takes place via telephone (Forsman & Varedian 2002; Jackob & Zerback 2005). Compared to e-mail contacts, a final telephone reminder seems to produce a higher response rate (Figure 4; Figure 5). Additionally, telephone reminders help to investigate the reasons why some respondents refused to participate up to the last contact: In telephone contacts, usually regarded as more personal, researchers can react and respond to individual problems which might have occurred (Dillman 2000, pp. 187-188; Jackob & Zerback 2005).
Figure 5: Development of Response Rate in the Student Survey (Samples 1 & 2) [line chart: response rates (%) of both partial samples from January 26 to February 8, 2006; first follow-up (e-mail): +18% and +13%; second follow-up (e-mail): +4% and +1%]
With respect to the quantity of contacts, not only the frequency of contacts but also the length of the intervals between them has to be discussed. After every single contact there is first a stronger increase in response, followed by continuously declining gains. In the journalist survey, longer time intervals between two contacts did not produce additional responses. Therefore, it was hypothesized that for online surveys intervals of several weeks between two contacts – as customary in many traditional mail surveys – are not necessary. Instead, it was hypothesized that members of typical Internet populations access the World Wide Web
(and their e-mail accounts) quite frequently and that therefore a shortening of the time intervals between single e-mail contacts is possible. To investigate whether different intervals are likely to produce differences in response, two different contact strategies were applied in the student survey: The members of the first sample were contacted again after four days, the members of the second sample after about one week. Although the development of the response rates appears to differ, they do not vary substantially (Figure 5). However, when compared with the development of the response rate in the journalist survey (Figure 4), it can be concluded that in online surveys contact intervals exceeding one week do not seem to be appropriate. Except for external reasons,[8] there is no need for extremely long intervals between every single contact, because most relevant populations access the Internet quite frequently.
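A contact schedule of the kind described above can be written down explicitly. The sketch below generates pre-notice, invitation, follow-up, and final telephone contact dates for a given starting date; the one-week reminder interval reflects the recommendation above, while the two-day gap between pre-notice and invitation and all other details are illustrative assumptions.

```python
from datetime import date, timedelta

def contact_schedule(start, reminder_interval_days=7, n_email_reminders=2,
                     final_phone_contact=True):
    """Build a multiple-contact schedule: pre-notice, invitation, e-mail reminders,
    and optionally a final telephone contact (all defaults are illustrative)."""
    schedule = [("pre-notice e-mail", start),
                ("invitation e-mail", start + timedelta(days=2))]
    last = schedule[-1][1]
    for i in range(1, n_email_reminders + 1):
        last += timedelta(days=reminder_interval_days)
        schedule.append((f"e-mail reminder {i}", last))
    if final_phone_contact:
        schedule.append(("telephone reminder", last + timedelta(days=reminder_interval_days)))
    return schedule

for contact, day in contact_schedule(date(2006, 1, 24)):
    print(f"{day:%Y-%m-%d}  {contact}")
```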
Besides the number of contacts and the time intervals between them, the attributes of e-mail contacts (quality) influence response behavior. There is only little research concerned with the effects of different content or design features on survey participation (Porter & Whitcomb 2003, p. 579). Various elements of reminders are regarded as affecting response rates, e.g. the personalization of correspondence, information about the purpose of the survey, or the names of the participating institutions or senior researchers.
In online surveys the most common contact mode is electronic mailing. E-mails are fast, cheap, and easy to manage. When discussing multiple-contact strategies it has to be pointed out that there are important differences between mail contacts used in traditional mail surveys and e-mail contacts. One genuine feature of electronic mailings is the address field: Researchers can either address every respondent individually or put all the potential respondents' e-mail addresses into the address field of one mailing. The decision to send e-mails to single addresses or to accumulate addresses has an impact on response rates: Contacting target persons individually may contribute to higher motivation to participate, whereas accumulated mailings are likely to produce lower response rates (Barron & Yechiam 2002, p. 509).[9]
The positive effect of contacting target persons individually also applies in other respects to the content of e-mail contacts: Personalization of correspondence (e.g. a personal salutation) is consensually regarded as having a positive effect on response rates in traditional mail surveys. Nevertheless, there are very few studies investigating whether this also applies to online surveys.[10] In the online surveys conducted in Mainz, e-mail correspondence generally included a personal salutation. As response rates in all cases were satisfactory, it can be recommended to make personalization of correspondence a standard practice. However, this recommendation alone is based on speculation and not on empirical findings; therefore, it is necessary to refer to supportive evidence produced by experimental research and meta-analyses (Cook, Heath & Thompson 2000; Heerwegh 2005).[11] Personalization of correspondence generally seems to be a reasonable strategy for increasing survey response – especially for online surveys, which usually suffer from low credibility caused by the omnipresent “spam” problem and other peculiarities of the online realm (e.g. viruses).

[8] Longer intervals between each contact may, e.g., help to eliminate non-response caused by respondents' absence (e.g. in case of business trips, holidays, or stays in hospital).

[9] According to the theory of the diffusion of responsibility, people are less likely to help (respectively, to participate in a survey) if they expect that there are many others who might help too (Barron & Yechiam 2002, p. 509).

[10] Some authors suppose that personalization of e-mails is less effective because today it is very easy to insert individual salutations into a letter (Dillman 2000, p. 152).
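Individual addressing and personal salutations are easy to automate. The following minimal sketch builds one message per target person rather than accumulating all addresses in a single “To:” field; the sample list, sender address, subject, and text are placeholders, not the wording used in the Mainz surveys.

```python
from email.message import EmailMessage

# Hypothetical sample list: one entry per target person (names and addresses are placeholders)
targets = [
    {"salutation": "Dear Ms. Example", "email": "ms.example@example.org"},
    {"salutation": "Dear Mr. Sample",  "email": "mr.sample@example.org"},
]

def build_invitation(person, survey_url):
    """Build one individually addressed, personally saluted invitation per target person
    instead of accumulating all addresses in a single 'To:' field."""
    msg = EmailMessage()
    msg["From"] = "survey-team@university.example"          # identifiable, serious sender
    msg["To"] = person["email"]                             # exactly one recipient per mail
    msg["Subject"] = "Invitation to a scientific online survey"
    msg.set_content(
        f"{person['salutation']},\n\n"
        "we would like to invite you to a short scientific online survey.\n"
        f"You can reach the questionnaire here: {survey_url}\n\n"
        "Kind regards\nThe survey team"
    )
    return msg

invitations = [build_invitation(p, "https://survey.example/questionnaire") for p in targets]
# Each message would then be sent individually, e.g. via smtplib.SMTP(...).send_message(msg).
print(invitations[0]["To"], "-", invitations[0]["Subject"])
```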
4.5 Incentives
In order to raise the motivation to participate in surveys, one frequently recommended technique is to offer some sort of incentive (Church 1993, p. 63). Many scholars argue that offering a benefit for participation raises response rates because a survey situation can be regarded as a situation of social or economic exchange: This means – quite simplified – that both sides, researcher and respondent, should benefit in an exchange relationship, giving each side what it desires.[12] Commonly, researchers offer their target persons some kind of reward for participation when the questionnaire is presented or attached to a reminder. It is assumed that the contacted persons feel obligated to reciprocate by answering the questionnaire. Common types of incentives are cash, stamps, books, or phone cards. More exotic alternatives are, e.g., bouquets of flowers or even turkeys (Knox 1951; Church 1993, p. 68). For online surveys it is usually recommended to offer incentives which are online-suitable, easy to transfer, and produce little transaction cost (e.g. payments via bank transfer, loyalty points, or entry into a lottery). For special populations with presumably high interest in the survey topic, survey results are sometimes offered as an incentive.
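If lottery participation is offered as an incentive, the winners can be drawn from the completed cases in a documented, reproducible way. A minimal sketch follows; the respondent IDs and the number of prizes are hypothetical.

```python
import random

def draw_lottery_winners(completed_ids, n_winners, seed):
    """Draw lottery winners from respondents who completed the questionnaire;
    a fixed seed documents the draw and makes it reproducible."""
    rng = random.Random(seed)
    return rng.sample(sorted(completed_ids), n_winners)

# Hypothetical completed cases and a draw of three book prizes
completed = {"r014", "r027", "r033", "r051", "r062", "r075", "r081"}
print(draw_lottery_winners(completed, n_winners=3, seed=2006))
```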
While the general objective of offering incentives is to keep non-response low, the problem might occur that some incentives attract extrinsically motivated persons or deter their intrinsically motivated counterparts – an effect which may decrease data quality (Görlitz 2005, p. 2). Research indicates that material and non-material incentives generally have effects and that there are differences between the different forms of incentives. For online surveys of general populations, lotteries seem to be a suitable option; the overall completion rates seem to be slightly higher. If survey results are offered as incentives, research findings are mostly inconclusive – offering this type of incentive may for some populations even raise non-response (Görlitz 2005, p. 4).

[11] In an experiment, Heerwegh compared the effects of personalized and non-personalized contact e-mails and found significantly higher login and completion rates when target persons were addressed with a personal salutation (up to 8 percent) (Heerwegh 2005, p. 593). In their meta-analysis, Cook, Heath, and Thompson found that the average response rate in studies including personalized correspondence was higher than 40 percent, whereas their counterparts only generated about 30 percent response (Cook, Heath & Thompson 2000, pp. 826-829). In contrast to these results, Porter and Whitcomb did not find differences in response rates when comparing the effects of personalized and non-personalized e-mails. The same applied to other attributes of the e-mail contacts (e.g. personalization of the sender's address and authority of the sender) (Porter & Whitcomb 2003, pp. 583-585). In any case, personalization of correspondence does not seem to influence survey response negatively and therefore should be part of the contact strategy.

[12] For further information about different theoretical approaches see Ryu, Couper & Marans 2006, pp. 91-93.
In order to promote response, all of the online surveys conducted in Mainz were provided with incentives offered with the initial mailing: In the case of the journalist survey and the larger partial sample of the student survey, target persons were offered the survey results. For experimental control, students selected for the smaller partial sample received no offer. In the case of the social worker survey, target persons were offered participation in a lottery – respondents could win several valuable books. Three findings seem to be interesting: Firstly, the completion rate in the first partial sample (with “results” incentive) was lower than in the second partial sample (without “results” incentive). But because both versions also differed in other features (e.g. the progress bar), in this special case the use or absence of an incentive does not automatically account for the different response rates. Secondly, when asked whether they wished to receive the results of the survey, only a small number of the contacted students and journalists were interested. A very small minority asked for the survey data; most students and journalists were not interested (Figure 6).
Although it is doubtful that one can identify the effects of the incentive by actively asking target persons whether they are interested in it, it seems that this type of incentive does not have a great influence on survey completion rates. Thirdly, this finding corresponds with the findings of the social worker survey – nobody was interested in participating in the lottery. Nevertheless, every single survey produced overall completion rates that fell within the range of expectation and were satisfactory. Because of the students' high motivation, the student survey produced a high response rate – it seems that it did not particularly matter whether an incentive was offered or not. The other surveys produced much lower completion rates, which nevertheless were acceptable. Yet it is not completely clear which effect the incentives had.

Figure 6: Request for “Results” Incentive (Student Sample 1 / Journalist Survey) [flow diagram: 180 students & 524 journalists contacted, with survey results offered as incentive (“If you are interested in getting the results please reply to this mail by clicking on the reply-button”); 133 students & 207 journalists completed the questionnaire; 23 students and 2 journalists requested the incentive]
It can be concluded that it generally seems recommendable to offer some incentive because it is likely to promote response. However, it can be doubted that any kind of incentive will per se improve response rates. In every case the possible impact on response rates and data quality should be weighed against the possible costs (Church 1993, p. 73; Ryu, Couper & Marans 2006, p. 104). Offering lottery participation may be a good strategy, but it is doubtful that it produces a great share of additional responses. The impact of survey results as an incentive is unclear; as far as our experiences suggest, in the student survey as well as in the journalist survey only a very small number of participants were interested in the results. It may be presumed that this type of incentive at least did no harm to survey quality and can be regarded as an additional contribution to the general strategy for lowering non-response. Because both incentives were offered as contingent on the completed return, it was clear from the beginning that the effects would be quite small compared to (prepaid/monetary) incentives included with the questionnaire (Church 1993, pp. 73-75). Yet, there is a need for further research (Görlitz 2005, p. 4; Ryu, Couper & Marans 2006, p. 90).
6. Some Guidelines for Lowering Non-Response
The initial objective of this paper was to outline procedures which are likely to increase response in online surveys. The authors intended to present some guidelines for online survey design; the main addressees were students planning their term papers or master theses. Therefore, the authors compiled findings from the literature as well as their own research experiences.
(1) The first question was how to design an online survey visually. Based on their own experiences and on the literature, the authors suggest a careful dosage of graphical elements. It is recommendable to use standard colors, fonts, and screen dimensions. For visual design the keyword is “self-restraint”: Visual design should not be driven by potentialities but by necessities. Beyond simple logotypes or symbols and some design elements for guiding the respondent through the questionnaire, it is in most cases not necessary to apply complex graphical elements, pictures, videos, or sounds. Respondents should neither be confused nor distracted from their task, nor should the survey design be too complex for standard office computers. In order to ensure maximum compatibility, technical pretests are necessary – e.g. with different screen resolutions[13] and different hardware configurations.[14]
(2) The second question was which form of questionnaire presentation should be preferred. Although a multiple-page presentation may increase the time needed for completing the questionnaire, the authors recommend multiple-page solutions because they provide a broader range of features to control and observe the survey process. However, neither one-page nor multiple-page questionnaires are per se superior. The decision about the form of presentation largely depends on the researchers' objectives – several factors have to be considered (e.g. Internet connection, skip/filter models, or context effects). In all likelihood, neither alternative will have particularly strong effects on response rates.
(3) The third question was which types of questions should be used. Although this question is not online-specific but a general issue of survey methodology, the authors recommend employing short and easy-to-answer question models for online surveys. Every interview situation produces costs that target persons have to pay – in online surveys, especially the time required for completion and user fees are important factors. For most Internet users time is money; therefore it is necessary to reduce the length of a questionnaire to a manageable number of easy-to-answer questions.[15] This applies to all types of questions, to closed-ended questions as well as to their open-ended counterparts. The authors generally recommend developing carefully worded closed-ended questions. However, in some cases (e.g. when issues are cognitively present, easily accessible, and easy to verbalize) open-ended questions can contribute to the survey's convenience. Sometimes they are reasonable alternatives to closed-ended questions. Most Internet users are quite experienced with text input via keyboard; it is a standard procedure of human-computer interaction.
(4) The fourth question was whether a progress indicator should be used or not. Except for extremely long surveys (which generally are inappropriate for the Internet), the authors recommend informing potential respondents about the expected length of the interview as well as about their individual progress while processing the form. Even if the positive effects on response rates may be only marginal, progress indicators should be utilized – it is a question of politeness.
[13] The minimum resolution should be 640 x 480, the maximum 1024 x 768. When deciding which screen resolution to choose, it is important to take into account the number of questions and their length. Especially for multiple-page questionnaires it is recommended to choose a screen resolution which guarantees that every question will be displayed on one screen without having to use a scroll-bar.
[14] Additionally, it is recommended to reduce hand and eye movements (Bosnjak, Tuten & Bandilla 2001, p. 9) – buttons should be aligned on one side and their number should be limited for maximum clarity. All these procedures help to make questionnaire fill-out easy and comfortable.
[15] This not only shortens the time needed for completing the questionnaire but also counteracts interview fatigue – question models generally should be simple (e.g. no large-scale arrays).
(5) The fifth question was which contact strategy should be chosen. There is no doubt that multiple-contact strategies significantly raise response rates; they are an indispensable part of online survey design. The methods developed for traditional mail surveys seem to translate directly to online surveys. It is recommended to contact potential respondents at least three times, to add a final telephone contact (if possible), to provide immediate identification of the survey's origin and purpose, and to personalize correspondence consistently. Because of the increasing amount of spam and junk mail, e-mail users usually decide quickly which mails to read and which to delete; therefore immediate identification is necessary. Whereas identification is a sign of seriousness, addressing potential respondents personally is a sign of politeness. Even if some researchers doubt the effectiveness of personal salutations and other elements of personalization, it does not seem to influence survey response negatively and therefore should be part of the contact strategy. Finally, online survey designers should consider the patterns of Internet and e-mail usage of their potential respondents. Most participants, especially in Internet-experienced populations (e.g. students, employees of business companies), usually respond within a few hours after the e-mailed invitation. Therefore, it is in most cases not necessary to wait several weeks between each contact.
(6) The sixth question was which incentives should be offered. The discussion focused on non-monetary (non-material) incentives such as survey results and lottery participation. Although the effects of these incentives seem to be rather marginal, the authors generally recommend offering some incentive. However, it has to be pointed out that different kinds of incentives may have different effects on survey response: In many cases such offerings may not provoke exceedingly high levels of interest or motivation among the target persons. Conversely, it is highly likely that both incentives at least do not influence survey response or survey quality negatively – they may be interpreted as an additional sign of politeness. However, this conclusion is partly based on speculation. The Internet also offers a great number of other online-specific forms of material incentives, such as digital gift coupons. For many populations, especially those with low intrinsic motivation, the effects of such incentives may be quite strong. However, such alternatives were not part of the authors' investigation. Like most other strategies for increasing survey response, the usage of incentives is a cost-benefit calculation: Target populations have to be taken into account as well as potential positive and negative effects on their motivation to participate.
In sum, it can be concluded that most of the techniques that have already proved to be effective in traditional mail surveys (e.g. multiple-contact strategies, incentives) can be effective in online surveys as well. Additionally, online surveys provide online-specific features which increase the survey designer's number of options for improving survey quality (e.g. progress bars, visual and structural design elements). Altogether, the different tools are likely to have positive effects on survey response. However, when planning an online survey, the positive effects of most of the strategies discussed above have to be weighed against their costs and negative impacts. From the authors' point of view, the most important objectives of online survey design are speed, comfort, trustworthiness, and politeness. Most of the strategies discussed in this paper are likely to contribute to these objectives. All of the above-mentioned procedures are easy to apply and cost-effective. However, the authors' suggestions should not be regarded as universal laws for online surveys. Beyond scientifically confirmed methodological rules, survey design is always a question of situational consideration, adjustment to challenges and circumstances, practical experience, and individual style. Therefore, the authors' suggestions should be regarded as a provisional version of a short “practical handbook” for beginners.
7. References
Aoki, K. & Elasmar, M. (2000): Opportunities and Challenges of Conducting Web-Surveys:
Results of a Field Experiment. Paper Presented at the Annual Meeting of the Ameri-
can Association of Public Opinion Research, Portland, Oregon.
Bandilla, W. & Hauptmanns, P. (1998): Internetbasierte Umfragen als Datenerhebungstech-
nik für die empirische Sozialforschung? ZUMA-/achrichten 43/22, pp. 36-53.
Bandilla, W., Bosnjak, M. & Altdorfer, P. (2001): Effekte des Erhebungsmodus? Ein Ver-
gleich zwischen einer Web-basierten und einer schriftlichen Befragung zum ISSP-
Modul Umwelt. ZUMA /achrichten 49, pp. 7-28.
Barron, G. & Yechiam, E. (2002): Private E-Mail Requests and the Diffusion of Responsibil-
ity. Computers in Human Behaviour 18, pp. 507-520.
Bosnjak, M. (2000): Participation in /on-Restricted Web Surveys: A Typology an Explanato-
ry Model for Item /on-response.Paper presented at the 55
th
American Association for
Public Opinion Research Annual Conference. Portland/Oregon.
Bosnjak, M. (2002): (/on)Response bei Web-Befragungen. Auswahl, Erweiterung und empi-
rische Prüfung eines handlungstheoretischen Modells zur Vorhersage und Erklärung
des Partizipationsverhaltens bei Web-basierten Fragebogenuntersuchungen.Aachen.
Bosnjak, M., Tuten, T. L. & Bandilla, W. (2001): Participation in Web Surveys – A Typolo-
gy. ZUMA /achrichten 48, pp. 7-17.
Church, A. H. (1993): Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly 57, pp. 62-79.
Cook, C., Heath, F. & Thompson, R. L. (2000): A Meta-Analysis of Response Rates in Web-
or Internet-Based Surveys. Educational and Psychological Measurement 60, pp. 821-
836.
Crawford, S. D., Couper, M. P. & Lamias, M. J. (2001): Web Surveys: Perceptions of Bur-
den. Social Science Computer Review 19, pp. 146-162.
Couper, M. P., Tourangeau, R. & Kenyon, K. (2004): Picture this! Exploring Visual Effects in Web Surveys. Public Opinion Quarterly 68, pp. 255-266.
Dillman, D. A. (2000): Mail and Internet Surveys. The Tailored Design Method. New York.
Dillman, D. A., Tortora, R. D., Conradt, J. & Bowker, D. (1998): Influence of Plain vs. Fancy Design on Response Rates for Web Surveys. Paper presented at the Joint Statistical Meetings, Dallas, Texas.
Evans, J. R. & Mathur, A. (2005): The Value of Online Surveys. Internet Research 15, pp.
195-219.
Faas, T. (2003): Offline rekrutierte Access Panels: Königsweg der Online-Forschung? ZUMA-Nachrichten 53/27, pp. 58-76.
Forsman, G. & Varedian, M. (2002): Mail and Web Surveys: A Cost and Response Comparison in a Study of Students' Housing Conditions. Paper presented at the International Conference on Improving Surveys, Copenhagen.
Göritz, A. S. (2005): Incentives in Web-based Studies. What to Consider and How to Decide (WebSM Guide No. 2, http://www.websm.org).
Heberlein, T. A. & Baumgartner, R. (1975): Factors Affecting Response Rates to Mailed
Questionnaires: A Quantitative Analysis of the Published Literature. American Socio-
logical Review 43, pp. 447-462.
Heerwegh, D. (2005): Effects of Personal Salutations in E-Mail Invitations to Participate in a
Web-Survey. Public Opinion Quarterly 69, pp. 588-598.
Jackob, N. & Zerback, T. (2005): Sampling Procedure, Questionnaire Design, Online Implementation and Survey Response in a Multi-National Online Journalist Survey. Paper presented at the joint WAPOR/ISSC Conference "Conducting International Social Surveys" in Ljubljana, Slovenia.
Kaplowitz, M. D., Hadlock, T. D. & Levine, R. (2004): A Comparison of Web and Mail Sur-
vey Response Rates. Public Opinion Quarterly 68, pp. 94-101.
Kaplowitz, M. D. & Lupi, F. (2004): Color Photographs and Mail Survey Response Rates.
International Journal of Public Opinion Research 16, pp. 199-206.
Knox, J. B. (1951): Maximizing Responses to Mail Questionnaires. Public Opinion Quarterly 15, pp. 366-367.
Lozar Manfreda, K., Batagelj, Z. & Vehovar, V. (2002): Design of Web Survey Question-
naires: Three Basic Experiments. Journal of Computer-Mediated Communication 7/3.
Lozar Manfreda, K. & Vehovar, V. (2002): Survey Design Features Influencing Response Rates in Web Surveys. Paper presented at the International Conference on Improving Surveys, Copenhagen.
Lozar Manfreda, K., Bosnjak, M., Haas, I. & Vehovar, V. (2005): Web Survey Response Rates Compared to Other Modes – A Meta-Analysis. Paper presented at the 60th Annual Conference of the American Association for Public Opinion Research.
Molasso, W. R. (2005): Ten Tangible and Practical Tips to Improve Student Participation in
Web Surveys. Student Affairs Online 6 (4).
Noelle-Neumann, E. & Petersen, T. (2004): Alle, nicht jeder. Einführung in die Methoden der Demoskopie. Berlin.
Porter, S. R. & Whitcomb, M. E. (2003): The Impact of Contact Type on Web Survey Re-
sponse Rates. Public Opinion Quarterly 67, pp. 579-588.
Roessing, T. (2004): Prevalence, Technology, Design, and Use of Polling on Websites. Quality Criteria despite the Impossibility of Representativeness? Paper presented at the WAPOR Thematic Seminar on Quality Criteria in Survey Research in Cadenabbia, Italy, June 24-26, 2004.
Ryu, E., Couper, M. P. & Marans, R. W. (2006): Cash vs. In-Kind; Face-to-Face vs. Mail;
Response Rate vs. Non-response Error. International Journal of Public Opinion Re-
search 18, pp. 89-106.
Schaefer, D. R. & Dillman, D. A. (1998): Development of a Standard E-Mail Methodology. Public Opinion Quarterly 62, pp. 378-397.
Schonlau, M. et al. (2003): A Comparison Between Responses From a Propensity-Weighted Web Survey and an Identical RDD Survey. Social Science Computer Review 21, pp. 1-11.
Sheehan, K. (2001): Email Survey Response Rates: A Review. Journal of Computer-
Mediated Communication 6.