
Questionnaire Design Guidelines for Establishment Surveys


Abstract

Previous literature has shown the effects of question wording or visual design on the data provided by respondents. However, few articles have been published that link the effects of question wording and visual design to the development of questionnaire design guidelines. This article proposes specific guidelines for the design of establishment surveys within statistical agencies based on theories regarding communication and visual perception, experimental research on question wording and visual design, and findings from cognitive interviews with establishment survey respondents. The guidelines are applicable to both paper and electronic instruments, and cover such topics as the phrasing of questions, the use of space, the placement and wording of instructions, the design of answer spaces, and matrices.
Questionnaire Design Guidelines
for Establishment Surveys
Rebecca L. Morrison
U.S. Census Bureau
Don A. Dillman
Washington State University
Leah M. Christian
Pew Research Center
This presentation
- Washington Statistical Society seminar, 17 January 2008
- Based on a paper of the same name, currently under review at the Journal of Official Statistics
- Available from Rebecca Morrison (contact info on last slide)
Outline
Introduction
Background
Guidelines
Conclusions
Introduction
Introduction
- Other national statistical organizations have guidelines for questionnaire construction
- U.S. Census Bureau: guidelines for the Decennial Census only
- Purpose for today: propose specific guidelines for establishment surveys within statistical agencies
Background
Background
- Guidelines are recommendations
- Focus on general guidelines that can be applied across many establishment surveys, which cover a wide variety of business topics
- Must take into account:
  - Agency context
  - Visual design research
  - Respondent perspectives
Agency Context
- Paper forms and web-survey printouts are often used during the data collection process
- The U.S. Census Bureau designs establishment surveys using multiple methods
- Guidelines need to be general enough to take these issues into account
Visual Design Research
- Words are the primary means of communication with respondents
- Respondents also draw information from graphical features of the visual layout
- Recent research shows how and why visual layout and design affect the interpretation of survey questions
Visual Design Research
- Recent theoretical developments in visual processing
- Gestalt psychology, especially these principles:
  – Proximity
  – Similarity
  – Prägnanz
- Many experimental findings are relevant to questionnaire design guidelines (too many to mention here)
Respondent Perspectives
- Respondents to establishment surveys tend to answer questions as representatives of their businesses, not as individuals
- Many establishment survey respondents have accounting backgrounds and are generally comfortable with tables and matrices
Respondent Perspectives
- Evaluation of the process of completing questionnaires should be considered in developing guidelines
- Cognitive interviews as a technique for improving survey design:
  - Qualitative meetings with respondents
  - Focus on the four-step cognitive response process model
Background: Summary
- These guidelines link theory and research on wording and visual layout to results from cognitive interviews
- Affected in turn by the agency context and by consideration of the use of multiple modes
Guidelines
Guidelines
- Address issues of wording and visual design
Guideline 1:
Phrase data requests as questions or imperative statements, not sentence fragments or keywords.
1: Data requests as questions, imperative statements
Requests for data can be framed as:
- Questions: a question word (when, how many, which) and a question mark
- Imperative statements: the subject ("you") is implied; a command or request is expressed
- Sentence fragments/keywords: no verb, no punctuation
1: Data requests as questions, imperative statements
Supporting theory:
- Complete sentences stand alone, reducing the need to refer to other sources of information
- The initial words of a sentence lay the foundation for understanding the remainder; a question word implies that a response is expected
Cognitive interview findings: respondents prefer questions over imperative statements
1: Data requests as questions, imperative statements
Converting fragments to questions can be relatively easy:
- 2002 Economic Census: "Type of municipality where this establishment is physically located"
- 2007 Economic Census: "In what type of municipality is this establishment physically located?"
Guideline 2:
Ask additional, simple questions, rather than fewer, more complicated ones.
2: Ask additional, simple questions
Supporting theory for asking additional, simple questions:
- Makes the task easier, less time-consuming, and more manageable
- Complex questions overload working memory; as a result, respondents pay more attention to some words than others
Possible solutions:
- Split a complex question into pieces
- Add a filter/screener question
- Use simple diagrams
2: Ask additional, simple questions
Examples:
- Simple diagrams [figure not shown]
- 2002 Survey of Business Owners [figure not shown]
- 2007 Survey of Business Owners [figure not shown]
Guidelines
Moving on to visual design and layout guidelines…
Some guidelines are linked under larger themes.

Establish a clear navigational path (Guidelines 3–6)
Clear navigational path
- Helps ensure that respondents complete the questions in the intended order
- Especially necessary in self-administered surveys, where there are no interviewers to help guide respondents
Complicated navigational path
- Example: BEA's former quarterly foreign direct investment questionnaire [figure not shown]
- Revised version [figure not shown]
Guideline 3:
Use a consistent page or screen layout.
3: Consistent page/screen layout
- Respondents do not have to reorient themselves to each new page or screen
- The booklet format (paper surveys) works well, since it is familiar to respondents
- Generally, a one-column format is easier: respondents only have to process information in one direction (vertically)
3: Consistent page/screen layout
Two columns:
- Rare in web surveys
- May be desirable in paper surveys
- Survey of Business Owners: categorical information about owners and the business; no numerical information; no complex instructions
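To make the one-column recommendation concrete for web instruments, here is a minimal sketch; the markup, question wording, and field names are hypothetical illustrations, not taken from any Census Bureau form.

```typescript
// A minimal sketch of a consistent one-column layout: every question reuses
// the same template, so each page looks the same and reads top to bottom.
const questionTemplate = (num: number, text: string, name: string): string => `
  <div style="max-width:480px; margin-bottom:16px;">
    <p><b>${num}. ${text}</b></p>
    <input type="text" name="${name}" />
  </div>
`;

const pageBody: string = [
  questionTemplate(1, "How many employees did this establishment have on March 12?", "employees"),
  questionTemplate(2, "What were this establishment's total sales for the year?", "sales"),
].join("");

console.log(pageBody); // paste the output into an HTML file to view
```

Because every question reuses one template, respondents never have to reorient themselves, and the page is processed in a single vertical direction.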
Guideline 4:
Align questions and answer spaces / response options.
4: Align questions and response options
Supporting theory for aligning questions and response options:
- Gestalt principle of proximity: visual elements that are closer together are perceived to be related
- Gestalt principle of good continuation: visual elements arranged along a straight line are more likely to be perceived as a group
4: Align questions and response options: example [figure not shown]
4: Align questions and response options
- Arrange response options in a single column, if possible
- Multiple columns increase the space between options, increasing the risk that some options will be missed
- Horizontal vs. vertical processing
Guideline 5:
Clearly identify the start of each section and question.
5: Clear start
Sections and headings help respondents:
- Recognize that some questions are related
- Discern the basic organization of the survey
- Understand what is being asked of them
5: Clear start
- Identify questions using numbers, or some other consistently applied font or symbol variation
- Respondents often move back and forth between paper and web questionnaires; numbers can help ensure that respondents provide a response to each question
5: Clear start: example [figure not shown]
Guideline 6:
When the navigational flow needs to be interrupted, use strong visual features.
6: Interrupt flow using strong visual features
- Needed for branching/skip instructions
- Needed to indicate a change in the respondent's action [examples not shown]
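One way such a branch might look in a web instrument is sketched below; the markup, question wording, and skip logic are hypothetical illustrations, not taken from the paper.

```typescript
// Minimal sketch: a skip instruction set off with an arrow and bold text
// (strong visual features), plus a branch that stays hidden until the filter
// question makes it relevant.
const filterQuestion: string = `
  <fieldset>
    <legend>1. Did this establishment pay for employee health insurance?</legend>
    <label><input type="radio" name="paid" value="yes"
      onclick="document.getElementById('q2').style.display='block'" /> Yes</label>
    <label><input type="radio" name="paid" value="no"
      onclick="document.getElementById('q2').style.display='none'" /> No
      <b>&#8594; If No, SKIP to question 3</b></label>
  </fieldset>
  <div id="q2" style="display:none;">
    <p><b>2. What was the total premium paid?</b></p>
    <input type="text" name="premium" />
  </div>
`;

console.log(filterQuestion); // paste the output into an HTML file to try the branch
```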
Eliminate visual clutter from the questionnaire (Guidelines 7–10)
Visual clutter
What it is:
- Symbols and graphical information that compete for attention
- Information that is irrelevant from a respondent's perspective
- An apparent lack of organization (BEA's old quarterly foreign direct investment form: lines divided the page into small units)
Visual clutter: example [figure not shown]
Visual clutter
Supporting theory for avoiding visual clutter:
- Gestalt principle of proximity
- "Near means related" heuristic
- Multiple languages (verbal, symbolic, numeric, graphical)
Guideline 7:
Use blank space to separate questions and make it easier to navigate within questionnaires.
7: Use blank space: example [figure not shown]
Guideline 8:
Avoid unnecessary lines that break up or separate items that need to appear as groups.
8: Avoid unnecessary lines: example [figure not shown]
Guideline 9:
Use visual cues to achieve grouping between questions and answer spaces.
9: Group questions and answers: example 1 [figure not shown]
9: Group questions and answers: example 2 [figure not shown]
Guideline 10:
Avoid including images or other graphics that are not necessary.
10: Avoid unnecessary images, graphics
- Symbols can be beneficial [examples not shown]
- Symbols can also be confusing or unhelpful [example not shown]
Use visual design to help respondents process instructions (Guidelines 11–13)
Instructions
- Need to build a common frame of reference between survey researchers and respondents
- Instructions are important for conveying the correct specifications and the intent of the question
- Respondents tend to believe they understand exactly what the question is asking, or that they know the answer without clarification
Guideline 11:
Incorporate instructions into the question where they are needed. Avoid placing instructions on a separate sheet or in a separate booklet.
11: Incorporate instructions
Supporting theory for incorporating instructions:
- Respondents are not likely to expend extra effort to look for instructions in a separate place
- The likelihood of using instructions increases when they are located with the question
11: Incorporate instructions: example 1 [figures not shown]
11: Incorporate instructions: example 2 [figure not shown]
Guideline 12:
Consider reformulating important instructions as questions.
12: Instructions to questions
- Respondents pay more attention to questions than to instructions
- Reformulating instructions as questions can help clarify or correct reported data
Guideline 13:
Consider converting narrative paragraphs into a bulleted list.
13: Paragraphs to lists
Supporting theory for using bulleted lists:
- Readers spend more time on the initial sentences of paragraphs
- The density of the text is reduced
13: Paragraphs to lists: examples [figures not shown]
Be consistent in how answer spaces and/or response options are displayed (Guidelines 14–15)
Display of answer spaces / response categories
- Important because this is where respondents record their responses
- Convey the type of information or the level of detail expected
- Should be easy for respondents to locate, and should stand out from the question, instructions, and other information
Display of answer spaces / response categories
Supporting theory on the display of answer spaces:
- Respondents quickly decide what is foreground and what is background
- Gestalt principle of Prägnanz
- Gestalt principle of similarity
Guideline 14:
Use white spaces against a colored background to emphasize answer spaces.
14: White spaces, colored background
- The lighter color and smaller area make the answer spaces objects of interest: they become the figure
- The colored background acts as a visual guide to keep answers inside the provided space
14: White spaces, colored background: example [figure not shown]
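A minimal sketch of this figure/ground idea for a web instrument follows; the markup, colors, and question wording are hypothetical (actual Census Bureau forms use their own palette and layout).

```typescript
// Minimal sketch: a colored page background with white answer boxes, so the
// lighter, smaller answer spaces read as "figure" against the colored "ground".
const answerSpace = (id: string, label: string): string => `
  <label for="${id}">${label}</label>
  <input id="${id}" type="text"
         style="background:#ffffff; border:1px solid #777;" />
`;

const page: string = `
  <form style="background:#cfe0f0; padding:16px; font-family:sans-serif;">
    ${answerSpace("q1", "1. Number of employees")}
    ${answerSpace("q2", "2. Total annual payroll ($)")}
  </form>
`;

console.log(page); // paste the output into an HTML file to view
```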
Guideline 15:
Use similar answer spaces for the same task.
15: Similar answer spaces, same task
- Respondents use all available information when formulating an answer, including information provided by the response categories and answer spaces
- Census Bureau establishment surveys have used different types of answer spaces for dollar information
15: Similar answer spaces, same task
- When requesting similar information, use the same type and size of answer space
- Electronic surveys (see the sketch below):
  - Radio buttons allow only one response
  - Check boxes allow more than one response
  - Some respondents might not know the difference; augment with written instructions
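The sketch below illustrates the two controls, each augmented with a written instruction for respondents who do not know the difference; the markup and question wording are hypothetical.

```typescript
// Minimal sketch: radio buttons for a single choice, check boxes for multiple
// choices, each paired with an explicit written instruction.
const radioGroup: string = `
  <fieldset>
    <legend>Is this establishment's payroll centralized? (Mark one answer.)</legend>
    <label><input type="radio" name="payroll" value="yes" /> Yes</label>
    <label><input type="radio" name="payroll" value="no" /> No</label>
  </fieldset>
`;

const checkboxGroup: string = `
  <fieldset>
    <legend>Which records did you consult? (Mark all that apply.)</legend>
    <label><input type="checkbox" name="records" value="tax" /> Tax records</label>
    <label><input type="checkbox" name="records" value="payroll" /> Payroll records</label>
    <label><input type="checkbox" name="records" value="other" /> Other</label>
  </fieldset>
`;

console.log(radioGroup + checkboxGroup);
```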
Reduce the use of matrices; when they are needed, simplify their visual presentation (Guidelines 16–17)
Matrices
- Used often in establishment surveys
- An efficient use of space, but cognitively burdensome
- Respondents must keep multiple pieces of information in mind at the same time
Matrices: example [figure not shown]
Guideline 16:
Limit the use of matrices. Consider the potential respondent's level of familiarity with tables when deciding whether or not to use them.
16: Limit use of matrices
Supporting theory for limiting the use of matrices:
- Reading tables is a learned skill
- Matrices may be appropriate when the survey's respondents are likely to have learned that skill
- When respondents are not likely to be familiar with tables, minimize their use, or at least provide more open space
Matrices: example [figure not shown]
Guideline 17:
Use lines and spacing to help respondents process information vertically and horizontally as needed to complete the matrix.
17: Matrix, using lines and space
- Take advantage of the Gestalt principles of proximity and connectedness
- Communicate the expected navigational path through the matrix by connecting items and by increasing or decreasing space accordingly (see the sketch following the examples below)
Matrices: examples [figures not shown]
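Here is a sketch of how lines and spacing might be coded in a small web matrix; the items, markup, and styling are hypothetical illustrations of the principle.

```typescript
// Minimal sketch: light horizontal rules connect each row label to its answer
// spaces (guiding horizontal processing), while extra column spacing keeps the
// quarters visually distinct (guiding vertical processing).
const rows: string[][] = [
  ["Sales ($1,000s)", "q1_sales", "q2_sales"],
  ["Employees", "q1_emp", "q2_emp"],
];

const matrix: string = `
  <table style="border-collapse:collapse; font-family:sans-serif;">
    <tr>
      <th style="text-align:left;">Item</th>
      <th style="padding-left:24px;">Quarter 1</th>
      <th style="padding-left:24px;">Quarter 2</th>
    </tr>
    ${rows
      .map(
        ([label, q1, q2]) => `
    <tr style="border-bottom:1px solid #bbb;">
      <td>${label}</td>
      <td style="padding-left:24px;"><input name="${q1}" type="text" /></td>
      <td style="padding-left:24px;"><input name="${q2}" type="text" /></td>
    </tr>`
      )
      .join("")}
  </table>
`;

console.log(matrix);
```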
Guideline 18:
Use font variations consistently and for a single purpose within a questionnaire.
18: Use font variations consistently
- Using the same font or text style for different purposes can be confusing for respondents
18: Use font variations consistently: example [figure not shown]
18: Use font variations consistently
Supporting theory for using font variations consistently:
- Gestalt principle of similarity
- Consistent variation improves usability, helps respondents learn things quickly, and focuses attention on relevant information
18: Use font variations consistently
- It can be helpful to establish font-variation rules for a given survey or set of surveys
- Census Bureau's establishment surveys:
  - Item numbers in reverse-print bubbles (e.g., white numerals on dark circles)
  - Fonts printed at 8 points or larger
  - Instructions in italics (for some paper surveys)
18: Use font variations consistently: example [figure not shown]
18: Use font variations consistently
Keycodes:
- Necessary for data entry, but should be de-emphasized for respondents
- Suggestions:
  - Reduce their size
  - Print them in a darker shade of the background color (or gray)
  - Place them outside the navigational path and answer spaces
Conclusions
Conclusions
- Historically, paper surveys crammed as much onto the page as possible
- Electronic methods have called the wisdom of that philosophy into question
- Research also indicates that visual layout and question wording affect the cognitive burden on respondents
Conclusions
- 18 guidelines for constructing establishment survey questionnaires
- Applicable to both paper and electronic instruments
- Grounded in visual design theory, experimental evidence, and research on information processing
- Informed by evidence from cognitive interviews
Conclusions
- This is a first step, developed for use by one agency's establishment surveys
- More issues can and should be addressed
- Testing is needed to evaluate applicability across diverse establishment populations in multiple countries
Conclusions
- The proposed guidelines can be corroborated by embedding experiments in establishment surveys
- Additions and adjustments might be made, especially when it comes to matrices
Conclusions
- Ultimately, move from "what looks good to me" to "what encourages respondents to process and pay attention to what is important"
Selected References
Dillman, D.A., Gertseva, A., and Mahon-Haft, T. (2005). "Achieving Usability in Establishment Surveys Through the Application of Visual Design Principles." Journal of Official Statistics, 21: 183-214.
Gernsbacher, M.A. (1990). Language Comprehension as Structure Building. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lidwell, W., Holden, K., and Butler, J. (2003). Universal Principles of Design. Gloucester, MA: Rockport Publishers.
Selected References
Tourangeau, R., Couper, M.P., and Conrad, F. (2004). "Spacing, Position, and Order: Interpretive Heuristics for Visual Features of Survey Questions." Public Opinion Quarterly, 68: 368-393.
Ware, C. (2004). Information Visualization: Perception for Design. (2nd ed.) San Francisco: Morgan Kaufmann.
Author Information: Morrison
Rebecca L. Morrison is a Survey Methodologist in the Office of Statistical Methods and Research for Economic Programs at the U.S. Census Bureau.
rebecca.l.morrison@census.gov
Author Information: Dillman
Don A. Dillman is a Regents Professor at Washington State University in Pullman, in the Social and Economic Sciences Research Center.
dillman@wsu.edu
Author Information: Christian
Leah Melani Christian is a Research Associate with the Pew Research Center for the People & the Press in Washington, DC.
Lchristian@pewresearch.org
Requests for the Paper
Rebecca L. Morrison
301-763-7595
Rebecca.L.Morrison@census.gov
Article
An experimental study of alternatives to the current U.S. decennial census questionnaire shows that shortening the questionnaire and respondent-friendly questionnaire design improve response, whereas asking a potentially difficult and/or objectionable question, that is, social security number, lowers response. This national study of 17,000 household addresses also demonstrates that relatively high mail survey response can be achieved without addressing correspondence to individual names of residents.