Social Desirability and Undesirability Effects on Survey Response
Latencies
Henrik Andersen and Jochen Mayerl
Faculty of Social Sciences, Technische Universität Kaiserslautern
Abstract
Social desirability refers to the tendency of respondents to overstate positive behaviours or characteristics
and understate negative ones. This paper looks at the application of paradata in the form of response
latencies to identify socially desirable response behaviour. Response latencies are used as proxies,
working with cognitive information processing theoretical frameworks, to infer information processing
modes. So far, evidence is conflicted as to whether socially desirable responding is indicated by shorter or
longer response latencies. This paper looks to contribute to the goal of better understanding response
latencies and their application in identifying bias in surveys. Our results show that faster responses are
associated with the reporting of desirable attitudes and behaviour while slower responses are linked with
those that are undesirable. Trait desirability measures that do not take direction into account may be
responsible for the often contradictory results of the various researchers who have employed the method
in the past.
Keywords
Paradata, survey research, social desirability, sensitive questions, response bias
Published in:
Andersen, H./ Mayerl, J., 2017: Social Desirability and Undesirability Effects on
Survey Response Latencies. Bulletin of Sociological Methodology 135: 68-89.
Introduction
The validity of responses to sensitive questions has been a topic in survey research for several decades.
Within the context of sensitive questions, the effects of social desirability are the most frequently examined type of response effect. Social desirability refers to the tendency of respondents to overstate
positive behaviours or characteristics and understate negative ones (cf. Holtgraves, 2004: 161).
Various attempts have been made to assess the extent to which socially desirable responses bias survey results, and to develop ways to avoid having results coloured by social desirability. Besides classical survey methods such as anonymous interview settings and the implementation of need-for-social-approval and trait-desirability scales, several techniques have been designed specifically to encourage respondents to answer truthfully, for example the randomized response, item count, faking instructions or bogus pipeline techniques. These experimental techniques attempt to create truly
anonymous conditions under which the respondent should feel free to report the truth without fear of
reproach, reprimand and/or consequence (cf. Wolter and Preisendörfer, 2013: 2). However, doubts have been cast on the effectiveness of these techniques in eliciting more valid responses (see for
example Holbrook and Krosnick, 2010a; Holbrook and Krosnick, 2010b; Wolter and Preisendörfer,
2013). Therefore, researchers have turned to other techniques to identify socially desirable responses as
well as to gain a better understanding of why and how people answer in a socially desirable way. One
such technique involves analyzing paradata collected about the survey process, often in the form of
response latencies to answers. In this way, response latencies are used as proxies, working with cognitive
information processing theoretical frameworks, to infer information processing modes. Often longer
response latencies are taken as evidence that the respondent was, among other things, grappling with
inconsistent information, taking more information into account before forming a judgement, or stopping
to “edit” their response in a socially desirable way (cf. Mayerl, 2013: 3). In terms of studies looking at
response latencies and socially desirable response behaviour, several researchers have found evidence to
support this “editing” hypothesis (cf. Holtgraves, 2004; Silber et al., 2013) as well as "faking good"-
effects (e.g. Holden and Hibbs, 1995) leading to slower response latencies. However, others have found
conflicting evidence in which socially desirable responses were actually linked to faster response
latencies (cf. Amelang and Müller, 2001; Kohler and Schneider, 1995; Mayerl and Urban, 2008).
This paper therefore looks to contribute to this still unsettled research field. After the relatively
complicated process of preparing raw reaction times into adjusted response latencies, and controlling for
item- and respondent-related effects, we look specifically at the effects of trait desirability and need for
social approval on response latencies. In doing so, we hope to observe patterns, moving us closer to
developing ways to model socially desirable response behaviour. Our data was collected within the
research project EVA3PLUS [1] which looks at preservice teachers’ attitudes towards experiments in
biology and chemistry lessons.
We will begin by outlining our research question and secondary interests. Following that, we will
look at the theoretical background with a review of response effects and their impact on survey results.
After that, we will look at several current models of response behaviour and at ways in which response
effects can be integrated into these models. On the basis of these theoretical deliberations, as well as past
empirical evidence, we will be able to formulate our hypotheses. After an overview of the data and operationalization, we will move on to the empirical testing of the hypotheses, showing the empirical results, and finally discuss the substantive findings and their implications for future research.
[1] EVA3PLUS stands for: „Evaluation von Einstellung, Verhalten und Absichten zu Experimenten,
Versuchen und Arbeitsweisen in der Phasenübergreifenden Lehramtsausbildung an Universität und
Schule“. English: „Evaluation of attitudes, behaviour and intended behaviour surrounding experiments
and methods in the overarching teachers’ education programs at universities and schools“.
Research question
This paper looks to contribute to advancing the use of paradata, specifically item response latencies, in
identifying socially desirable responses. By using an external measure for establishing the (un)desirability
of the characteristics covered in our instrument and by using the shortened Crowne-Marlowe (1960) scale
for respondent need for social approval, it is possible for us to establish a measure of the sensitivity of the
situation. Using this information, we then observe how response latencies behave. The research question can
therefore be formulated as: is it possible to observe patterns in response latencies based on either item-
based trait desirability or respondent-based need for social approval? Furthermore, the strength and
direction of linear effects of social desirability on response latencies will help to clarify a field that has
been characterized by conflicting findings and interpretations. The secondary research question is then: how do the strength and direction of social desirability affect response latencies?
Theoretical background
Questions can be sensitive because of 1) their intrusiveness: such questions usually involve topics that are
not seen as appropriate to talk about generally in conversation and can lead to a “none of your business”
reaction from the respondents. They can deal with taboo topics such as sharing one’s income. Questions
can be perceived as sensitive also if the respondent 2) becomes wary that the disclosure may lead to
negative consequences if third parties gained access to the information. This is often a problem
concerning questions asking the respondent to divulge illegal behaviour such as drug use. Finally,
questions can be perceived as sensitive if they 3) ask respondents to express potentially socially
(un)desirable attitudes or behaviour. This third element is obviously linked to the second in the way that
admitting to drug use could both lead to negative consequences due to its illegality, as well as disapproval
from the interviewer. Both of these last two elements presuppose that there are clear social norms
surrounding the attitudes or behaviour being expressed (see Tourangeau and Yan, 2007 for more in-depth
discussion).
As for socially desirable responding, it is generally seen as a result of three factors: the item’s trait
desirability (which involves the social norms surrounding a survey question), the respondent’s need for
social approval (the topic-independent tendency of some individuals to seek social approval more than
others) and the anonymity of the situation (in conditions of anonymity, the respondent can expect to
receive neither approval nor disapproval for their answers). A basic rational choice model stipulates that
the likelihood of eliciting a socially desirable response increases with the item’s trait desirability and the
respondent’s need for social approval but decreases under conditions of anonymity (cf. Stocké, 2004:
304). This rational choice-based approach can be criticized in several ways. For one, it takes an
unrealistic view of respondents carefully weighing their expected utility before answering every question.
Furthermore, it is unrealistic to assume anonymous surveys eliminate social desirability. As others have
shown in research based on methods designed to ensure absolute anonymity such as the randomized-
response and item count techniques, respondents still tend to report inaccurately even in conditions of
anonymity (cf. Preisendörfer and Wolter, 2014). We agree generally with the sentiment that expectancy
of consequences should logically increase socially desirable responding, but see this as less of a
dichotomous, black-and-white condition. Furthermore, it is conceivable that the expectancy of
consequences can be an inner, personality-based prerequisite for socially desirable reporting, in terms of avoiding cognitive dissonance, for example. In fact, Esser covers this topic briefly in an article from 2000
in which he cites Elster: “norms do not need external sanctions to be effective. When norms are
internalized, they are followed even when violation would be unobserved and not exposed to sanctions”
(Elster, 1989, cited by Esser, 2000: 137).
Although classical rational choice-based models can be helpful for an economical
conceptualization of the conditions that promote socially desirable responding, they are dragged down by
a few of the criticisms discussed above and do not provide a good basis on which to build hypotheses
about the behaviour of response latencies in conditions of social desirability. Several other types of
models lend themselves better to the task at hand. In fact, several authors have attempted to apply
different cognitive models to the explanation of response behaviour and the identification of response
effects. Tourangeau and Rasinski (1988) as well as Strack and Martin (1987) looked at socially desirable
responding using a phase model of information processing. Krosnick (1991) as well as Schaeffer (2000),
Belli et al. (2001), Holtgraves (2004) and Kaminska and Foulsham (2016) have looked at the topic from an
optimizing/satisficing perspective. Fazio’s MODE-model (Fazio, 1990) and Esser’s Model of Frame
Selection (Esser, 2010) have also been applied, see also Mayerl (2009, 2013). What these models all have
in common is the assumption that responding can truly reflect attitudes, or it can be a result of multiple
response effects. Furthermore, according to common dual-mode models of cognitive processes (cf.
Chaiken and Trope, 1999; Smith and DeCoster, 2000), latent attitudes can produce responses reflecting what we would characterize as “true” values, either through deliberate attitude generation (much in the way modeled by classical rational choice theory) or through the spontaneous activation of already held attitudes. These
last two points have implications for the present research. For one, referring back to the first point,
response effects can be both topic-dependent and topic-independent. Effects such as acquiescence bias,
tendency to the middle category etc. reflect response effects that are independent of the current topic.
Socially desirable responding, on the other hand, is topic-dependent and hinges on the strength and
direction of norms surrounding an item (trait desirability) in combination with the respondent’s
sensitivity to social desirability in terms of need for social approval (Mayerl, 2013: 7). Secondly, and
more related to the application of response latencies as a proxy for socially desirable responding,
spontaneous activation of already held, chronically accessible attitudes should result in shorter response
latencies (cf. Fazio, 1989; Bassili and Fletcher, 1991). In what is often referred to as the ‘deliberate-controlled’ processing mode, by contrast, relevant information must first be accessed, evaluated and translated to a scale value, causing longer response latencies (cf. Yan and Tourangeau, 2008).
The discussion above can be summed up as follows: respondents can be assumed to behave in
either ‘deliberate-controlled’ or ‘automatic-spontaneous’ process modes (although the terminology can
vary from model to model: controlled-deliberate mode would equate to an optimizing behaviour, for
example) and responses can either reflect latent attitudes or merely response effects. Response effects can
be characterized as either topic-independent or topic-dependent. Finally, response latencies along the
deliberate-controlled process mode should generally be longer than along the automatic-spontaneous
mode.
With all this in mind, the task of explicitly incorporating social desirability into the model
however remains incomplete. Several of the above mentioned authors have spent time tackling this
problem. Tourangeau and Rasinski (1988) and Strack and Martin (1987) envision socially desirable
responding as an ‘editing’ process that occurs at the very end of the model, immediately before entering
the final value. Respondents either activate previously generated attitudes or form attitudes on the spot,
come to a “true” value and, depending on the situation, take a moment to edit their response to conform to
what is seen as socially desirable. The editing process occurs regardless of the ‘mode’ of responding
(based on whether a judgment of the attitude object is already stored in memory). The implication of this
model is that the editing process takes time and therefore socially desirable responses may be associated
with noticeably longer response latencies.
Both the optimizing/satisficing and dual process models of Krosnick, Fazio and Esser conceive of
both editing processes and automatic, norm-conforming responding. The implication of this is that
socially desirable responses can occur very quickly without much hesitation, as well as due to a deliberate
editing process. This obviously complicates the relationship between socially desirable responding and
response latencies. The next section will discuss the hypothetical model which incorporates much of the
discussion above into a model for using response latencies as an indicator for socially desirable
responses.
Hypothesis model
Based on the findings of the authors referenced above, we can discern two main hypothetical
implications. Tourangeau and Rasinski and Strack and Martin see responding to survey attitude questions
as a four-phase model in which the respondent interprets the question and identifies an attitude object,
retrieves relevant information, beliefs and feelings about the object, forms a judgment and enters the
response (cf. Tourangeau and Rasinski, 1988). Between the judgment formation and the entering of the
answer, pressure to respond in a socially desirable way will trigger an editing phase in which the
respondent takes time to consider whether what he or she is about to answer is socially desirable in order
to present themselves in a positive light.
On the other hand, the satisficing/optimizing-, MODE- and Frame Selection-Models all have in
common the differentiation that, for various reasons, some responses occur in a deliberate, rational
fashion while others occur rather spontaneously without much deliberation. We will use Hartmut Esser’s
Frame Selection Model to illustrate our thoughts on this, although both other mentioned models could be
used to similar effect. The Frame Selection Model stipulates that behaviour is influenced by “frames”, or
as Fazio might refer to them “cues”, that, in specific situations, elicit attitude associations between objects
and previously stored attitudes. The degree to which that object and situation can be ‘matched’ with a
previously stored attitude is then referred to as attitude accessibility (cf. Esser, 2000: 141). A match
between object and situation will lead to the automatic activation of either attitude-consistent, stimulus-reaction-style behaviour or normative behaviour, often referred to as a script. If a sufficient match cannot be found, and if the respondent has the motivation, makes the necessary effort and has the opportunity to do so, he or she responds in a deliberate-controlled, rationally utility-maximizing way (cf. Esser, 2000: 144).
These two models point to explicitly different expectations. The phase model suggests strong
norms around sensitive topics should elicit longer response times as the respondent edits his or her
response while the frame selection and other dual process models suggest strong norms should ensure
attitude matches, enabling the respondent to respond automatically and quickly in a norm-conforming
way. However, as Tourangeau himself anticipates that both spontaneous and deliberate modes of
responding are possible, and that the editing process can take place along both cognitive routes (cf.
Tourangeau and Yan, 2007: 877), we believe that response latencies are mainly determined by mode of
processing and that the editing process is at most a minor effect that will have only a small impact on
latencies.
Sensitive questions (either due to their intrusiveness, threat of disclosure, the anonymity of the
interview situation or the respondent’s own sensitivity or need for social approval) can be seen as a frame or cue that informs the respondent about the logic of the situation, triggering an awareness of socially
acceptable responses. The more obvious the social norm surrounding an attitudinal or behavioural object,
the more likely the script of a topic-specific answering heuristic will be employed.
In terms of response latencies, as suggested in our theoretical framework, the more obvious the
social norm surrounding the item (both strongly negatively and positively rated items in terms of trait
desirability), the more likely the automatic script of socially desirable responding should be employed
and the shorter the response latencies should be. Figure 1 illustrates this. Methodologically speaking, we
expect an inverted U-shaped effect of trait desirability on response latencies (a significant positive linear
effect and a significant negative quadratic effect). Our study involves surveying student teachers; since it is fairly clear, for example, that teachers should like working with young people or should not be nervous speaking in front of a crowd, respondents simply answering in a socially desirable fashion should be able to do so quickly. Items surrounded by more ambiguous social norms should tend to take more time to answer because it is not clear what the ‘socially’ correct answer is [2]. For example, should a teacher be
funny? Should they be able to cope well with disappointment? While it is likely desirable to be funny in
general, admitting to not being funny does not necessarily imply that one is unfit for the career they have
studied for more than five years to follow. The lack of a clear social norm means respondents will not be
able to respond quickly in a socially desirable manner.
[2] The ‘ambiguous’ nature of the trait desirability is reflected in the relatively high standard deviations of all items, especially those with values around zero, whose typical range spans both the positive and negative sides of the scale; see appendix 1.
Figure 1. Hypothesized effect of the trait desirability of an item on response latencies when answering these items
Method
Response latencies are used as proxies for different mental processes (cf. Mayerl, 2013: 3). In our study,
we use response latencies as a proxy measure of information processing mode with respect to the degree
of elaboration (see in the same line of reasoning for example Carlston and Skowronski, 1986; Gibbons
and Rammsayer, 1999; Mayerl, 2013; Schaffner and Roche, 2016; Sheppard and Teasdale, 2000; Shiv
and Fedorikhin, 2002). With this research, we are attempting to use response latencies for the
identification of socially desirable responding. Computer-assisted survey modes like CASI (e.g. web surveys), CATI or CAPI enable the (semi-)automatic collection of additional context information about surveys. Survey researchers call this additional information paradata (e.g. Couper and Kreuter, 2013). Paradata cover all automatically collected, computer-assisted information about the response process. In most cases these data are non-reactive and thus not consciously biased by respondents. This includes
item-level data like “mouse click” data or time stamps (response times), survey-level data like duration or
contact information, and in the case of web surveys server-side as well as client-side data. Recently,
response time itself has been a focus of survey research papers (e.g. Couper and Kreuter, 2013;
Mayerl, 2013; Olson and Smyth, 2015; Yan and Tourangeau, 2008).
There are different possibilities for recording response latencies depending on the survey design.
CATI and CAPI surveys involve active reaction time measurement by the interviewer, who manually stops a timer when the respondent has answered and then must judge in a second step whether the measurement was valid. Passive, or “latent”, reaction time measurement is made possible in web surveys, in which timestamps are automatically recorded for various events. Passive reaction time
measurement involves measuring the entire question answering process, including reading the question
and entering the answer (cf. Mayerl, 2013: 3). Passive reaction time measurement almost always entails
more bias than active time measurements because the lack of an interviewer means it is impossible to
ensure that respondents are not distracted, have not left the room or do not have multiple programs open
simultaneously (Mayerl and Urban, 2008: 16-17).
We therefore decided on a rather elaborate survey method. We purchased several dozen tablet PCs upon which “kiosk” software [3] was installed to prevent other apps and internet webpages from being opened during the survey. Screens and resolutions were identical, and the questionnaire was optimized for the tablet format. Switching between landscape and portrait formats, zooming in on the screen and other manipulations were disabled. Respondents were given styluses to improve the tablets’ recognition of “taps” and to improve comparability. Preservice teachers in Rhineland-Palatinate
complete the final phase of their education at various locations around the state. All the preservice
teachers at a particular location were surveyed at the same time in a room together. One researcher was
present throughout the entire filling-out process to assist with technical difficulties (mostly internet
connection problems) but remained otherwise passive. Anonymity procedures were explained at the
beginning of each survey by the researchers.
The respondents were asked to answer 30 questions related to their self-assessed suitability for the
teaching profession. With this instrument, we attempted to elicit socially desirable response behaviour
from the student teachers; the rationale being that student teachers may be hesitant to admit qualities that
suggest unsuitability. The questions covered 10 sub-dimensions of teacher personalities (see appendix 1
for an overview of the dimensions), with three items per sub-dimension. In each sub-dimension, two
items were positively worded (statement indicating suitability for profession) and one was negatively
worded (indicating unsuitability). The 10 sub-dimensions were presented to the respondent in a random
order and the items within the sub-dimensions were also randomly ordered from first to third item
presented on screen. The instrument was taken from a survey by Herlt and Schaarschmidt (2007); following the pretest, several items were slightly modified from the original questions.
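As an illustration of this presentation scheme, the following sketch randomizes the sub-dimension order and the item order within each sub-dimension; the names and data structure are hypothetical, not taken from the original survey software.

```python
import random

def presentation_order(subdimensions):
    """Return a randomized presentation order: the sub-dimensions are
    shuffled, and the items within each sub-dimension are also shuffled
    before display."""
    dims = list(subdimensions)
    random.shuffle(dims)                 # random order of sub-dimensions
    order = []
    for dim in dims:
        items = list(subdimensions[dim])
        random.shuffle(items)            # random order of items within
        order.append((dim, items))
    return order

# Hypothetical names; the real instrument has 10 sub-dimensions x 3 items.
example = {
    "interaction_with_younger_people": ["item1", "item2", "item3"],
    "humour": ["item4", "item5", "item6"],
}
print(presentation_order(example))
```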
Response latencies are recorded using timestamps that are automatically triggered by actions
taken by the respondents. A timestamp is recorded as soon as the website loads completely on the screen,
which can be referred to as “event” number 1 (e1). The respondent presumably then begins to read the
instructions and question text. Once he or she has decided on an answer and has tapped the screen to
indicate the desired scale value, a second timestamp is recorded (e2). Along with the timestamp, several
other pieces of crucial information are saved in the dataset: the scale value, the item being referenced, the
event’s relative order compared to preceding events, what type of event is being recorded (the loading of
the page, a tap on the item-scale, a tap on the “continue/back” button etc.). If the respondent then moves
on to the next question and taps the screen to indicate the desired scale value for the next item, event 3
(e3) is saved with a timestamp. The same goes for e4. If the respondent then decides to go back and
change their answer for the first item, a new timestamp is recorded (e5), which is, of course, saved in the
dataset with a reference to the first item. The response latency for most answers is then the difference
between the preceding event and the tapping on the item-scale. For the first item, however, because the
hypothetical respondent changed their answer, the overall response latency is calculated as the time
between the page loading (e1) and the first answer for item 1 (e2): ∆t1, plus the time difference between
e4 and e5: ∆t4. The response latency therefore represents an estimate of the time the respondent spent
concentrating on each item.
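A minimal sketch of this bookkeeping, assuming events arrive as chronologically ordered (timestamp, event type, item) records; the function and field names are hypothetical illustrations of the logic described above, not the survey software itself.

```python
from collections import defaultdict

def item_latencies(events):
    """Sum the time spent on each item from an ordered event log.

    `events` is a list of (timestamp_seconds, event_type, item_id) tuples
    in chronological order; event_type is "page_load" or "answer". The
    latency credited to an item is the gap between the preceding event
    and the tap that answered it, summed over any answer changes.
    """
    latencies = defaultdict(float)
    prev_time = None
    for time, event_type, item_id in events:
        if event_type == "answer" and prev_time is not None:
            latencies[item_id] += time - prev_time
        prev_time = time
    return dict(latencies)

# The example from the text: e1 page load, e2 answer item 1, e3 answer
# item 2, e4 answer item 3, e5 corrected answer for item 1.
events = [
    (0.0, "page_load", None),
    (4.2, "answer", "item1"),   # Δt1 credited to item 1
    (7.9, "answer", "item2"),
    (11.3, "answer", "item3"),
    (14.0, "answer", "item1"),  # Δt4 also credited to item 1
]
print(item_latencies(events))   # item1: 4.2 + 2.7 = 6.9 s
```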
There are obvious deficits to this method that bear noting. For one thing, the response latency for
the first item on the screen also encapsulates the time spent on reading the instructions at the top of the
page. This solution was a compromise based on the technical procedure used to collect the timestamps. In
order to completely control for the time spent reading the instructions, each page would have to be loaded
in a 2-step process: the first tap on “continue” would load the screen with only the instructions displayed,
and then a second tap on the screen would be needed to signal the questions to be displayed. The already
very long questionnaire (30-40 minutes) would become even more trying if the respondent was forced to
tap the screen twice on each page for questions to be displayed. Furthermore, varying internet speeds and
the fact that the tablets sometimes do not register initial taps leads to respondents often tapping several
times and several millisecond-long latencies being recorded (which were eliminated as outliers, see
below). The two-step loading process would inevitably lead to even more errors and frustration amongst
the respondents. To control for this effect somewhat, a control variable for the position of the item on the
screen was included in the model (the item displayed closest to the top of the screen therefore tends to
register longer response latencies due to the time spent often also reading the instructions).
Furthermore, the problem just discussed is exacerbated by the fact that we cannot rule out whether
respondents begin reading the entire screen first, or whether they start reading from the bottom or middle,
when filling out the questionnaire. Our response latency measurement technique is, thankfully, unaffected by the order in which the respondent answers the items. The fact remains, however, that the
measurement method assumes respondents cognitively handle each item individually and in a one-at-a-
time fashion. Without the use of some sort of eye tracking method, it simply cannot be guaranteed that
this is the case.
In terms of the analytical design, the statistical model will attempt to explain response latencies based on item trait desirability: the more unambiguous the social norm surrounding the item, the more likely an automatic-spontaneous, norm-conforming mode (read: socially desirable) will be activated, and the shorter the response latencies will be.
We test the model using a cross-classified multilevel linear model. This is due to the structure of
the data. Each respondent is asked to answer the 30 questions about their suitability for the teaching
profession and each of the 30 questions is answered by multiple respondents. The response latencies can
therefore be seen as nested simultaneously within respondents and items: certain respondents will respond
generally faster or slower than others, and certain items will cause respondents to take more or less time
responding to them. Furthermore, the study was a panel study in which the 30 characteristic items were
administered in up to two out of three waves of the survey. It is important here to note that response
latencies are then also theoretically nested within different waves of the same respondent. However, for a
multilevel analysis, generally more than 30 (or at least 20) macro-level units are needed for stable results
and in this case we have only two. Therefore we cannot integrate the panel wave as an additional macro-
level in which response latencies would be nested. The fact that the response latencies of the same
respondent at two different times may not be independent of each other means we resort to the more
primitive method of including a dummy variable in the multilevel model. In this way, we can hold any
effects constant that may arise from respondents answering the same questions again.
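As a sketch of how such a cross-classified model can be fit in Python using statsmodels’ variance-components interface (one way to specify crossed random intercepts; all column names are hypothetical, and this is not the authors’ original software):

```python
import statsmodels.formula.api as smf

# One formal "group" containing all observations; the crossed random
# intercepts for respondents and items enter as variance components.
df["flat_group"] = 1
vc = {"respondent": "0 + C(respondent_id)", "item": "0 + C(item_id)"}

model = smf.mixedlm(
    "latency ~ repeat_wave + baseline_speed + syllables + position"
    " + first_on_screen + last_on_screen + trait_des + trait_des_sq"
    " + need_social_approval",
    data=df, groups="flat_group", vc_formula=vc, re_formula="0",
)
result = model.fit()
print(result.summary())
```

Here `repeat_wave` stands in for the dummy variable described above that holds panel-wave effects constant.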
[3] The kiosk software locks down the operating system of the tablets restricting the access to options and
settings as well as apps.
Data and operationalization
The research presented here resulted from the research project EVA3PLUS conducted at the University
of Kaiserslautern in Germany. It is an interdisciplinary project conducted by the Biology and Chemistry
Didactics Departments together with the Department for Empirical Social Research. The substantive goal
of the research is to assess student teachers’ attitudes and behaviour with regards to conducting
experiments in the biology and chemistry classrooms following a university educational reform. The
project is a longitudinal panel-study with computer assisted self-interview (CASI) web-based tablet
questionnaires with three survey waves taking place at intervals of six months. The project attempts to
conduct a complete sample of all chemistry and biology student teachers at the Gymnasium-level (a
university/college preparation-level secondary school form in Germany) in Rhineland-Palatinate from
mid-2014 to the end of 2016. In total, the sample size is 550, with 357 individual respondents participating between one and three times.
The 30 teacher characteristic items were partially modified from an instrument by Herlt and Schaarschmidt (2007). The instrument contains 10 constructs such as “interaction with younger people”, “humour” and “didactic abilities”. Each construct contained three items, two of which were formulated positively and one negatively. The items were measured on a 7-point scale
ranging from 1 “does not apply to me at all” to 7 “applies fully and completely to me”.
Respondent-related variables included a baseline speed variable to adjust for generally slow or
fast respondents (the baseline speed is established based on response latencies for the sociodemographic
variables) as well as a dummy variable for whether or not it was the second time the respondent
participated.
For the item-related variables we will specifically be looking at factors that must be controlled for
in order to evaluate the cognitive processes irrespective of several formal characteristics of the questions.
A variable for the item length in terms of syllables was included. Furthermore, the item position was
taken into account in terms of its overall position out of thirty and its appearance on screen through the
use of dummies for being either the first or last of three items presented at once.
Finally, in terms of the variables of central substantive interest, the respondent’s need for social approval
and the item’s trait desirability were modeled.
In order to assess the trait desirability of the instrument’s items, a small pencil-and-paper survey
of students at the TU Kaiserslautern was conducted (n=77). The students were asked to assess how strongly society expects the presented characteristics of teachers. The scale ranged from -4:
“extremely undesirable” to +4: “extremely desirable” with 0 as the middle category: “neutrally seen”.
The mean scores that resulted from the survey can be found in the appendix. The expected nonlinear relationship between trait desirability and response latency was accounted for by the inclusion of a quadratic term (the simple correlation between the trait desirability variable and its quadratic term was not large; r=0.230).
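Constructing the quadratic term and checking its collinearity with the linear term is straightforward; a sketch assuming a pandas DataFrame df with a hypothetical trait_des column holding each item’s mean desirability score:

```python
# assumes df holds one row per response, with a 'trait_des' column
df["trait_des_sq"] = df["trait_des"] ** 2             # 0..16 on the -4..+4 scale
print(df["trait_des"].corr(df["trait_des_sq"]))       # reported here as r = 0.230
```

The low correlation is expected on a roughly centered bipolar scale, where the linear and squared terms are nearly orthogonal.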
The respondents’ need for social approval was measured using an index of two items from the
Crowne-Marlowe social desirability scale (cf. Crowne and Marlowe, 1960: 351). The items used were
“No matter who I’m talking to, I’m always a good listener” and “I am always courteous, even to people
who are disagreeable”. The logic of this scale is that these items cannot realistically apply fully to anyone. Respondents who nevertheless answer strongly in the affirmative have a strong need for social approval; otherwise they would be comfortable admitting to this normal, human behaviour. The index created was additive and displayed satisfactory characteristics (α = 0.621). The descriptive statistics of the items are also found in the appendix.
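A sketch of the index construction and the two-item reliability check, with hypothetical column names for the two Crowne-Marlowe items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents in rows, scale items in columns."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

nsa = df[["good_listener", "always_courteous"]].to_numpy(dtype=float)
df["need_social_approval"] = nsa.mean(axis=1)     # additive index on the 1-7 scale
print(cronbach_alpha(nsa))                        # reported here as 0.621
```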
Empirical analysis
Table 1 gives an overview of the dependent variable, the response latencies. Due to the large degree of
non-normality of the distribution, and in order to eliminate outliers as suggested by Mayerl and Urban
(2008), the top and bottom 5% of the distribution were eliminated. This resulted in a mean response latency of 4.64 seconds, a median of 4.16, a standard deviation of 2.00 and a range of 1.88 to 11.86. The skewness values show the adjustment resulted in a more normally distributed variable. Because response latencies are often skewed right (which is indeed the case with our distribution), taking the
natural log of the variable is also advisable when applying statistical procedures that are based on the
assumption of normality of the data (cf. Mayerl and Urban, 2008: 82).
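A minimal sketch of this preparation step, assuming a pandas DataFrame df with a hypothetical latency column in seconds:

```python
import numpy as np

lo, hi = df["latency"].quantile([0.05, 0.95])          # 5th and 95th percentiles
trimmed = df[df["latency"].between(lo, hi)].copy()     # drop top and bottom 5%
trimmed["log_latency"] = np.log(trimmed["latency"])    # natural log for right skew
print(trimmed["latency"].agg(["count", "mean", "median", "std", "skew"]))
```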
Table 1. Descriptive statistics, dependent variable

                                                      N      Mean   Med.   Std. Dev.   Var.    Skew.   Min.   Max.
Response latency (seconds per item)                   9713   5.04   4.13   4.08        16.64   6.45    0.08   105.10
Response latency, outliers removed (<5%, >95%)        8769   4.64   4.16   2.00        3.99    1.20    1.88   11.86
Response latency, log, outliers removed (<5%, >95%)   8769   1.45   1.42   0.40        0.16    0.29    0.63   2.47
The first step in the empirical analysis was to establish a baseline model with only the second-level group variables as determinants in order to establish intraclass correlations (ICCs). Table 2 provides an overview of this model. ICCs express the amount of the overall variance that can be traced back to the macro-level variables. The residual variance is therefore the variance not explained by the macro-level variables. ICCs are determined by the simple calculation of the macro-level variance divided by the overall variance. The table shows the respondent is responsible for roughly 13% (0.5230 ÷ (3.2079 + 0.5230 + 0.2944) = 0.1299) and the item for 7% (0.2944 ÷ (3.2079 + 0.5230 + 0.2944) = 0.0721) of the overall variance of the response times. Generally, if more than 5% of the overall variance can be traced back to macro-level variables, they should be included in the form of a multilevel analysis (cf. Hox, 2010: 244). This leads to the conclusion that both group variables are empirically and substantively important.
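To make the arithmetic concrete, the ICC shares can be checked directly from the variance components reported in Table 2:

```python
# Variance components from the intercept-only baseline model (Table 2)
var_residual, var_respondent, var_item = 3.2079, 0.5230, 0.2944
total_var = var_residual + var_respondent + var_item

icc_respondent = var_respondent / total_var   # ≈ 0.13: variance between respondents
icc_item = var_item / total_var               # ≈ 0.07: variance between items
print(f"ICC respondent: {icc_respondent:.4f}, ICC item: {icc_item:.4f}")
```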
Table 2. Intraclass correlations

             Variance   Std. Error   Wald Z   95% CI Lower   95% CI Upper   ICC
Residual     3.2079     0.0495       64.853   3.1124         3.3063
Respondent   0.5230     0.0509       10.266   0.4321         0.6331         0.1299
Item         0.2944     0.0789       3.3731   0.1741         0.4979         0.0721
Table 3 shows the cross-classified multilevel model. The unstandardized parameter estimates are shown in the column Estimate. The scaling of the independent variables is shown in parentheses in the column Parameter in order to aid the interpretation of the unstandardized coefficients. The dependent variable is the outlier-treated response latency. Parameter estimates can therefore be interpreted in seconds. As was to be expected, the longer the question in terms of syllables, the more time was needed to respond; the first item on screen took longer to respond to (due partially to the instructions also being presented at the top of the screen); and the respondent’s baseline speed has a significant positive effect on response latencies: the slower the respondent is in general, the longer the response latencies on the 30 characteristics questions.
Table 3. Fixed parameter estimates, dependent variable: response latencies, outliers removed [4]

Parameter (scaling)                                 Estimate    Std. Err.   DF        T-statistic   Sig.    95% CI Lower   95% CI Upper
Intercept                                           2.1443**    0.4951      185.525   4.331         0.000   1.1675         3.1210
Repeat respondent (0,1)                             -0.1301     0.1276      150.468   -1.019        0.310   -0.3821        0.1220
Respondent baseline speed (metric)                  0.1462**    0.0371      149.390   3.938         0.000   0.0728         0.2195
Number of syllables (metric)                        0.0728**    0.0098      30.124    7.464         0.000   0.0529         0.0927
Item position in questionnaire (1-30)               -0.0077     0.0048      30.296    -1.597        0.121   -0.0175        0.0021
Item first on screen (0,1)                          1.3011**    0.0992      29.898    13.121        0.000   1.0985         1.5036
Item last on screen (0,1)                           0.2762**    0.0992      29.756    2.785         0.009   0.0736         0.4789
Trait desirability (-4 to +4)                       -0.0714**   0.0206      29.562    -3.456        0.002   -0.1135        -0.0292
Trait desirability × Trait desirability (0-16)      0.0287      0.0190      29.639    1.511         0.141   -0.0101        0.0674
Need for social approval (1-7)                      0.0376      0.0775      157.100   0.486         0.628   -0.1154        0.1906

** p<0.01, * p<0.05, + p<0.10
[4] The results of the model using the logarithm of the outlier treated response latencies as dependent
variable yielded the same results in terms of coefficient signs and significance.
The substantively interesting coefficients have to do, however, with the item trait desirability and
respondent’s need for social approval. The model results show the respondent’s need for social approval
had no significant effect on response latencies [5]. Also, the coefficient for the squared trait desirability
term was not significant meaning the expected inverted-U shape illustrated in figure 1 did not appear. The
most interesting result has to do with the statistically significant negative coefficient for the original trait
desirability variable. What is important to keep in mind in interpreting this result is that the trait
desirability was measured on a bipolar scale from -4 (strongly undesirable) to +4 (strongly desirable).
This means predicted latencies decrease as trait desirability increases: i.e. the more desirable an item is, the shorter the response latency. On the other hand, the more undesirable the item, the longer the
response latencies become. This implies two completely separate effects depending on whether the trait is
desirable or undesirable. Contrary to our expectation (shown in figure 1) that extremely desirable and
undesirable items should lead to the same result (shorter response latencies), we actually observe two
separate patterns for desirable and undesirable items (see figure 2).
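As a rough illustration of the size of this effect, using the Table 3 estimates, ignoring the non-significant quadratic term and holding all other predictors constant:

```python
b_trait_des = -0.0714             # linear trait desirability coefficient (Table 3)
scale_width = 4 - (-4)            # full width of the bipolar -4..+4 scale
print(b_trait_des * scale_width)  # ≈ -0.57 s: predicted latency at +4 vs. at -4
```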
[5] We also tested interaction effects involving need for social approval and trait desirability but found no
significant results.
Figure 2. Response latency by trait desirability (all other effects held constant)
Conclusions and discussion
The study attempted to explore the impact of item-based trait desirability and respondent-based need for
social approval on response latencies. We attempted to apply a dual process theoretical approach to these
ideas and argued that sensitive questions will frame the responses to come in the sense of a response set
or ‘short-lived bias’ as opposed to a response style (cf. Tourangeau and Yan, 2007; Paulhus, 2002;
DeMaio, 1984). In this way, the fact that only the trait desirability (item-specific response effect) was
statistically significant and not the respondent’s need for social approval (as a bias consistent across time
and questionnaires, again see Tourangeau and Yan, 2007) is understandable. We assumed that within this
frame, the mode of response will be determined by the strength of the social norm surrounding the
behaviour or attitude.
Following the example of Sudman and Bradburn (1974), the social norms of the items were measured externally in a separate study. We did not find the inverted U-shape that we had expected
and the quadratic term included in the model was not statistically significant. Rather, our findings suggest
that respondents react differently depending on whether the trait (item) was considered to be desirable or
undesirable. The more desirable a trait was rated, the shorter the respondents took to answer the question.
The more undesirable the trait, on the other hand, the longer they needed. Going back to Esser’s Model of Frame Selection for an explanation of these results, it would seem as if increasingly clear socially desirable norms lend themselves to an automatic norm-conforming script. The interspersion of undesirable attitudes and behaviours interrupts the automatically activated frame “being a good teacher” and its associated script of response behaviour (responding in agreement), possibly forcing respondents into a controlled-deliberate response mode. Faced with the question of whether he or she is now a “bad teacher”, the script no longer matches the new frame, increasing the motivation to consider the response and find an appropriate script.
In terms of dual-process models (e.g. Fazio, 1990, see the discussion above), motivation and
opportunity are the main determinants of cognitive processing modes: when both motivation and
opportunity are given, information is processed in a deliberative mode. Thus, the undesirability of an item
seems to enhance motivation for elaborative processing, while desirability decreases motivation. In this
case, two effects may be responsible: firstly, the undesirable trait and the interruption of the automatic script of agreement may force the respondent to confront and admit an uncomfortable truth about him- or herself; or secondly, the respondent may want to mask the fact that he or she actually agrees with an undesirable item, which, as Tourangeau and Rasinski (1988) state, tends to require more cognitive effort.
In both cases, the interruption of the frame and its associated script leads to a higher level of motivation
to consider the answer and, accordingly, longer response latencies.
In fact, a revised theoretical model which distinguishes between desirability and undesirability may actually conform better to the ongoing discussion about explanatory models of social desirability than the model as originally presented. As some have argued (for example Sudman and Bradburn, 1974; Paulhus, 2002), socially desirable responding is a strategy for dealing with sensitive questions. This would support
our conception that the sensitivity of the question should trigger a socially desirable frame in the sense of
Esser’s model of frame selection. Once acting on the cues of this socially desirable frame, socially
desirable attitudes or behaviours can elicit two distinct response modes. The findings shed light on the
importance of trait desirability, and by extension, item wording when using response latencies as a proxy
for socially desirable responding. The difference between agreeing with the statement that “Spending
time with teenagers is a lot of fun” and disagreeing with the statement “Teenagers tend to annoy me
quickly”, or any other such set of manifest indicators, is evidenced by the diverging response latencies. We
suggest that the conflicting results in the field of response latencies as proxies for socially desirable
responding may in part be traceable to the different effects of socially desirable vs. undesirable item-
traits.
This paper attempted to contribute to the still conflicted research involving the use of response
latencies in identifying socially desirable responses. Our findings suggest that the direction of an item’s
trait desirability plays a role in determining the response mode as evidenced by the significant negative
effect coefficient of the trait desirability variable. The most undesirable traits lead to slower responses. With
increasing desirability, the response latencies become shorter. On a bipolar scale this indicates diverging
effects for desirable and undesirable traits.
This finding highlights our impression that the research field remains in need of a systematic
evaluation as it is characterized by a lack of clear and intersubjective terminology. The difference
between a sensitive question and a socially desirable question is not always clear (“Sensitive questions is
a broad category that encompasses not only questions that trigger social desirability…” Tourangeau and
Yan, 2007: 859); there are various instances in which either trait desirability, social desirable belief,
social norm, and minority opinion were used seemingly interchangeably. For example, Stocké refers to
trait desirability as the ‘relative social desirability of the attitude-position’ (cf. Stocké, 2004: 310) making
the differentiation between social desirability as a type of biasing response effect and as a characteristic
of either or all of the respondent, item or interview situation a difficult proposition. When Tourangeau
and Yan write: “This conception of sensitivity presupposes that there are clear social norms regarding a
given behavior or attitude” (Tourangeau and Yan, 2007: 860), it would suggest only survey items with
high trait desirability can be sensitive questions, to cite another example. Our argument is, however, that
sensitivity acts as a frame, and that the direction and magnitude of an item’s trait desirability informs
which script is carried out. Surely, a question dealing with such topics as homosexuality or refugees can
be seen as sensitive even if the attitude or behaviour suggested in the survey item can vary between
innocuous and extreme; socially desirable and undesirable.
Empirically, we see several areas in which operationalization should be investigated which may
help untangle the various conflicting findings. Besides the direction of desirability, the ambiguity or
rather clarity of the social norm is another indicator which may play a role in influencing the response
mode of the respondent. This refers back to our original hypothesis expressed in figure 1 in which we
expected ambiguous norms to cause longer responses. Here, an individual respondent-related measure of
trait desirability as well as the standard deviation of the aggregate item value (as an indicator of the
degree of consensus involving a trait) may be important to model. Such as-yet under-examined aspects
may be partly responsible for the conflicting findings in which researchers have found evidence for social
desirability leading to both shorter and longer latencies. In the article cited above by Silber et al. (2013),
for example, the researchers created indices of both socially desirable and undesirable items of attitudes
towards minority groups and reported non-significant coefficients for the response latency variable in all but one of the eight models. Taking our findings into consideration, it is plausible that this is due to an ‘averaging-out’ of effects. Based on our results, we would expect the item “Muslim culture fits well into our Western world” to elicit faster responses, while slower responses should be expected with a
question such as “Immigration to Germany should be prohibited for Muslims” (cf. Silber et al., 2013:
132).
The next step of our research will involve attempting to recreate these results with a new sample.
Furthermore, our findings did not support the initial hypothesized relation between trait desirability and
response latencies. Further research will look at the causal mechanisms responsible for seemingly
interrupting the script of automatic-spontaneous responses a priori rather than offering post hoc
explanations for the conflicting results.
Another area which we wish to investigate involves looking at the actual item values entered by
the respondents. In this way, we hope to establish not only that social desirability influences response
latencies, but also to develop ways of using response latencies, together with other types of paradata
(value changes, corrections of previous questions in order to preserve continuity and so on) to correct for
socially desirable response bias. The difficulty of this goal involves establishing ways to actually identify
lies and under/overestimations. In addition, further research on the use of response latencies in
combination with item-count or randomized-response techniques is needed to get a deeper understanding
of the cognitive processes underlying socially desirable response behaviour.
References
Amelang M and Müller J (2001) Reaktionszeit-Analysen der Beantwortung von Eigenschaftswörtern
(Reaction time analyses of responses to attribute words). Psychologische Beiträge 43(4): 731-750.
Belli R, Traugott M and Beckmann M (2001) What leads to voting overreports? Contrasts of overreports
to validated votes and admitted nonvoters in the American Election Studies. Journal of Official
Statistics 17: 479-498.
Carlston, D and Skowronski, J (1986) Trait Memory and Behavior Memory: The Effects of Alternative
Pathways on Impression Judgment Response Times. Journal of Personality and Social Psychology
50(1): 5-13.
Chaiken S and Trope Y (1999) Dual process theories in social psychology. New York, London: Guilford
Press.
Couper M and Kreuter F (2013) Using paradata to explore item level response times in surveys. Journal
of the Royal Statistical Society 176(1): 271-286.
Crowne D and Marlowe D (1960) A new scale of social desirability independent of psychopathology.
Journal of Consulting Psychology 24(4): 349-354.
DeMaio T (1984) Social desirability and survey measurement: A review. In: Turner and Martin (eds)
Surveying subjective phenomena. New York: Russel Sage.
Elster J (1989) The Cement of Society. A Study of Social Order. Cambridge, New York, Melbourne:
Cambridge University Press.
Esser H (2000) Normen als Frames: Das Problem der “Unbedingtheit” des normativen Handelns (Norms
as frames. The problem of the absoluteness of normative behaviour). In: Metze R, Mühler K
and Opp KD (eds) Normen und Institutionen: Entstehung und Wirkungen. Leipzig:
Universitätsverlag.
Esser H (2010) Das Modell der Frame-Selektion. Eine allgemeine Handlungstheorie für die
Sozialwissenschaften? (The model of frame selection. A general theory of action for the social
sciences?) In: Albert G and Sigmund S (eds) Soziologische Theorie kontrovers. Kölner Zeitschrift
für Soziologie und Sozialpsychologie, Sonderheft 50: 45-62.
Fazio R (1989) On the Power and Functionality of Attitudes: The Role of Attitude Accessibility. In:
Pratkanis A, Breckler S and Greenwald A (eds) Attitude, Structure and Function. Hillsdale, New
Jersey: Erlbaum.
Fazio R (1990) Multiple Processes by which Attitudes Guide Behavior: the MODE Model as an
Integrative Framework. Advances in Experimental Social Psychology 23: 75-109.
Gibbons, H and Rammsayer T (1999) Auswirkung der Vertrautheit mit einer Reizdimension auf
Entscheidungsprozesse: Der modulierende Einfluss kontrollierter vs. automatischer
Informationsverarbeitung (Influence of familiarity with a stimulus dimension on decision making
processes. The modal influence of controlled vs. automatic information processing). In
Wachsmuth and Jung (eds) KogWis99, Proceedings der 4. Fachtagung der Gesellschaft für
Kognitionswissenschaft. Bielefeld/ St. Augustin.
Herlt S and Schaarschmidt U (2007) Fit für den Lehrerberuf?! (Prepared for the teaching career?!) In:
Schaarschmidt and Kieschke (eds) Gerüstet für den Schulalltag. Psychologische
Unterstützungsangebote für Lehrerinnen und Lehrer. Weinheim, Basel: Beltz Verlag.
Holbrook A and Krosnick J (2010a) Social Desirability Bias in Voter Turnout Reports. Tests Using the
Item Count Technique. Public Opinion Quarterly 74(1): 37-67.
Holbrook A and Krosnick J (2010b) Measuring Voter Turnout by Using the Randomized Response Technique. Evidence Calling into Question the Method’s Validity. Public Opinion Quarterly 74(2): 328-343.
Holden R and Hibbs N (1995) Incremental validity of response latencies for detecting fakers on a
personality test. Journal of Research in Personality 29(3): 362-372.
Holtgraves T (2004) Social Desirability and Self-Reports: Testing Models of Socially Desirable
Responding. Personality and Social Psychology Bulletin 30(2): 161-172.
Hox, J. (2010) Multilevel Analysis. Techniques and Applications. Second Edition. New York: Routledge.
Kaminska O and Foulsham T (2016) Eye-Tracking Social Desirability Bias. Bulletin of Sociological
Methodology 130: 73-89.
Kohler A and Schneider J (1995) Einfluss der Kenntnis der Gruppennorm auf die Beantwortungszeit von
Persönlichkeitsfragebogen-Items (The influence of the knowledge of group norms on the response
time in personality related questionnaire items). Arbeiten der Fachrichtung Psychologie,
Universität des Saarlandes 179.
Krosnick J (1991) Response Strategies for Coping with the Cognitive Demands of Attitude Measures in
Surveys. Applied Cognitive Psychology 5: 213-236.
Mayerl J (2009) Kognitive Grundlagen sozialen Verhaltens. Framing, Einstellungen und Rationalität
(The cognitive basis of social behaviour. Framing, attitudes and rationality) Wiesbaden: VS
Verlag für Sozialwissenschaften.
Mayerl J (2013) Response Latency Measurement in Surveys. Detecting Strong Attitudes and Response
Effects. Survey Methods: Insights from the Field.
Mayerl J and Urban D (2008) Antwortreaktionszeiten in Survey-Analysen. Messung, Auswertung und
Anwendungen (Response times in survey analyses. Measurement, evaluation and applications).
Wiesbaden: VS Verlag für Sozialwissenschaften.
Olson K and Smyth J (2015) The Effect of CATI Questions, Respondents, and Interviewers on Response
Time. Sociology Department, Faculty Publications. Paper 268. Available at:
http://digitalcommons.unl.edu/sociologyfacpub/268 (accessed 19 January 2017).
Paulhus D (2002) Socially desirable responding. The evolution of a construct. In: Braun, Jackson and
Wiley (eds) The role of constructs in psychological and educational measurement. Hillsdale:
Erlbaum.
Preisendörfer P and Wolter F (2014) Who is telling the truth? A validation study on determinants of
response behavior in surveys. Public Opinion Quarterly 78(1): 126-146.
Schaeffer N (2000) Asking Questions about Threatening Topics: A Selective Overview. In: Stone,
Turkkan, Bachrach, Jobe, Kurtzman, Cain, (eds) The Science of Self-Report: Implications for
Research and Practice. Mahwah, New Jersey: Erlbaum.
Schaffner B and Roche C (2016) Misinformation and Motivated Reasoning: Responses to Economic
News in a Politicized Environment. Public Opinion Quarterly, first published online: December
8, 2016, doi: 10.1093/poq/nfw043.
Sheppard L and Teasdale J (2000) Dysfunctional thinking in major depressive disorder: A deficit in
metacognitive monitoring? Journal of Abnormal Psychology 109(4): 768-776.
Shiv B and Fedorikhin A (2002) Spontaneous versus controlled influences of stimulus-based affect on
choice behavior. Organizational Behavior and Human Decision Processes 87(2): 342-370.
Silber H, Lischewski J and Leibold J (2013) Comparing Different Types of Web Surveys: Examining
Drop-Outs, Non-Response and Social Desirability. Metodolški zvezki 10(2): 121-143.
Smith E and DeCoster J (2000) Dual-process models in social and cognitive psychology: Conceptual
integration and links to underlying memory systems. Personality and Social Psychology Review
4(2): 108-131.
Stocké V (2004) Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit. Ein
Vergleich der Prognosen der Rational-Choice Theorie und des Modells der Frame-Selektion
(Necessary conditions for answer biases through social desirability. A comparison of prognoses of
the rational choice theory and the model of frame selection). Zeitschrift für Soziologie 33(4): 303-
320.
Strack F and Martin L (1987) Thinking, Judging, and Communicating: A Process Account of Context
Effects in Attitude Surveys. In: Hippler, Schwarz and Sudman (eds) Social Information Processing
and Survey Methodology. New York, Berlin, Heidelberg, London, Paris, Tokyo: Springer-Verlag.
Sudman S and Bradburn N (1974) Response effects in surveys. A review and synthesis. Chicago: Aldine.
Tourangeau R (1984) Cognitive Sciences and Survey Methods. In: Jabine, Straf, Tanur and Tourangeau
(eds) Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines.
Washington: National Academy Press.
Tourangeau R and Rasinski K (1988) Cognitive Processes Underlying Context Effects in Attitude
Measurement. Psychological Bulletin 103(3): 299-314.
Tourangeau R and Yan T (2007) Sensitive Questions in Surveys. Psychological Bulletin 133(5): 859-883.
Wiggins J (1964) Convergences among stylistic response measures from objective personality tests.
Educational and Psychological Measurement 24(3): 551-562.
Wolter F and Preisendörfer P (2013) Asking Sensitive Questions: An Evaluation of the Randomized
Response Technique Versus Direct Questioning Using Individual Validation Data. Sociological
Methods and Research 00(0): 1-33.
Yan T and Tourangeau R (2008) Fast times and easy questions: The effects of age, experience and
question complexity on web survey response times. Applied Cognitive Psychology 22(1): 51-68.
Appendix
Appendix 1. Mean trait desirability score per item (-4: strongly undesirable…0: neutral…+4: strongly desirable)

1: Interaction with younger people
  1. Spending time with teenagers is a lot of fun. (M = 2.82, SD = 1.10)
  2. Teenagers tend to annoy me quickly. (M = -2.77, SD = 1.40)
  3. I always get along with teenagers. (M = 2.38, SD = 1.35)

2: Humour
  4. I find it easy to make others laugh. (M = 1.29, SD = 1.57)
  5. My friends and acquaintances appreciate my friendly disposition. (M = 1.74, SD = 1.59)
  6. I sometimes have trouble being funny at the right moment. (M = -0.81, SD = 1.40)

3: Tolerance for frustration
  7. I take being insulted well. (M = 1.64, SD = 1.67)
  8. I am very sensitive to personal accusations and attacks. (M = -2.01, SD = 1.50)
  9. I can cope with disappointment better than many other people. (M = 0.74, SD = 1.82)

4: Ability to assert oneself
  10. I am able to stick by my opinions in conflicts. (M = 1.73, SD = 1.42)
  11. When I am challenged I sometimes find it difficult to argue my point convincingly. (M = -1.70, SD = 1.73)
  12. I am good at winning arguments. (M = 1.69, SD = 1.41)

5: Flexibility
  13. I deal well with unforeseen situations. (M = 2.08, SD = 1.49)
  14. I need things to go as planned. (M = -0.91, SD = 1.61)
  15. I can adapt myself to new situations without any problems. (M = 1.90, SD = 1.18)

6: Social sensibility
  16. I find it difficult to put myself in someone else’s shoes. (M = -2.32, SD = 1.82)
  17. I have a good feeling for how to deal with people. (M = 2.55, SD = 1.32)
  18. I am aware of problems other people may be having. (M = 2.22, SD = 1.12)

7: Didactic abilities
  19. I am good at explaining complex situations. (M = 2.82, SD = 1.33)
  20. Sometimes I am not able to communicate complex topics so that other people are able to understand. (M = -1.91, SD = 2.09)
  21. I find it easy to teach others. (M = 2.83, SD = 1.31)

8: Comfort speaking in front of others
  22. I don’t mind talking in front of a group unprepared. (M = 1.60, SD = 2.02)
  23. When I have to speak or present in front of a group, I am able to overcome my nervousness. (M = 2.17, SD = 1.27)
  24. I feel insecure when I have to speak in front of others. (M = -2.45, SD = 1.47)

9: Ability to express oneself
  25. My ability to express myself in discussions is sometimes limited. (M = -1.66, SD = 1.77)
  26. I am able to express complicated things clearly and concisely. (M = 2.09, SD = 1.36)
  27. I can adjust the way I express myself depending on who I am talking to. (M = 1.94, SD = 1.30)

10: Ability to awaken interest
  28. I am good at getting people excited about things. (M = 2.45, SD = 1.29)
  29. I find it difficult to convince others of things. (M = -1.94, SD = 1.50)
  30. I am good at getting people interested in things. (M = 2.34, SD = 1.40)

M = mean trait desirability, SD = standard deviation.
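To make the aggregation behind these scores concrete, the following minimal sketch (ours, not the authors' code; the item texts are reused from the table above, but the judge ratings are invented for illustration) computes a per-item mean trait desirability and standard deviation from ratings on the -4 to +4 scale:

```python
# Minimal sketch: aggregating hypothetical trait desirability ratings
# on the -4 (strongly undesirable) to +4 (strongly desirable) scale.
import statistics

# Invented ratings from a handful of judges for two illustrative items;
# the real study pooled many more raters across all 30 items.
ratings = {
    "Spending time with teenagers is a lot of fun.": [3, 4, 2, 3, 2],
    "Teenagers tend to annoy me quickly.": [-3, -2, -4, -3, -2],
}

for item, scores in ratings.items():
    m = statistics.mean(scores)    # mean trait desirability
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{item}  M = {m:+.2f}, SD = {sd:.2f}")
```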
Appendix 2. Descriptive statistics

Item #              1      2      3      4      5      6      7      8      9     10
N Valid           352    352    352    352    352    352    351    351    351    351
N Missing         105    105    105    105    105    105    106    106    106    106
Mean             6.11   5.66   5.54   5.37   5.78   4.99   4.36   4.33   3.99   5.18
Std. Err. Mean   0.05   0.05   0.05   0.06   0.05   0.07   0.08   0.08   0.07   0.06
Median           6.00   6.00   6.00   5.50   6.00   5.00   4.00   4.00   4.00   5.00
Mode             6.00   6.00   6.00   6.00   6.00   6.00   4.00   4.00   4.00   5.00
Std. Dev.        0.93   1.00   0.92   1.04   0.97   1.29   1.44   1.44   1.35   1.06
Variance         0.86   1.00   0.85   1.09   0.94   1.68   2.08   2.07   1.81   1.12
Skewness        -1.35  -0.79  -0.75  -0.66  -0.75  -0.45  -0.24  -0.15   0.08  -0.72
Kurtosis         2.92   0.77   1.24   0.67   1.04  -0.44  -0.53  -0.57  -0.35   0.93
Minimum          1.00   2.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00
Maximum          7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00

Item #             11     12     13     14     15     16     17     18     19     20
N Valid           350    350    352    351    352    351    350    351    352    352
N Missing         107    107    105    106    105    106    107    106    105    105
Mean             4.85   4.91   4.89   4.36   4.93   5.69   5.57   5.30   5.25   4.87
Std. Err. Mean   0.07   0.06   0.06   0.07   0.06   0.06   0.05   0.06   0.05   0.06
Median           5.00   5.00   5.00   4.00   5.00   6.00   6.00   5.00   5.00   5.00
Mode             5.00   5.00   5.00   4.00   5.00   6.00   6.00   6.00   5.00   5.00
Std. Dev.        1.22   1.16   1.15   1.29   1.12   1.06   0.99   1.06   0.92   1.14
Variance         1.48   1.34   1.33   1.68   1.24   1.13   0.99   1.12   0.84   1.29
Skewness        -1.35  -0.79  -0.75  -0.66  -0.75  -0.45  -0.24  -0.15   0.08  -0.72
Kurtosis         2.92   0.77   1.24   0.67   1.04  -0.44  -0.53  -0.57  -0.35   0.93
Minimum          1.00   2.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00
Maximum          7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00

Item #             21     22     23     24     25     26     27     28     29     30
N Valid           352    350    350    350    350    350    350    351    351    351
N Missing         105    107    107    107    107    107    107    106    106    106
Mean             5.46   4.72   5.15   5.52   4.79   5.03   5.27   5.27   5.21   5.38
Std. Err. Mean   0.05   0.08   0.07   0.06   0.06   0.05   0.05   0.05   0.06   0.05
Median           6.00   5.00   5.00   6.00   5.00   5.00   5.00   5.00   5.00   5.00
Mode             6.00   6.00   6.00   6.00   5.00   5.00   5.00   5.00   5.00   5.00
Std. Dev.        0.89   1.55   1.26   1.18   1.19   1.01   0.91   0.96   1.04   0.88
Variance         0.79   2.41   1.59   1.40   1.42   1.01   0.82   0.92   1.08   0.77
Skewness        -0.71  -0.41  -0.74  -0.80  -0.37  -0.58  -0.58  -0.42  -0.89  -0.60
Kurtosis         2.04  -0.56   0.38   0.55  -0.45   0.51   1.01   0.87   1.54   1.66
Minimum          1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00
Maximum          7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00   7.00

1 = does not apply at all, 7 = applies completely; negatively worded items recoded.
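As a quick arithmetic check on these panels, the standard error of the mean is the sample standard deviation divided by the square root of the valid N: for item 1, 0.93/√352 ≈ 0.05, as reported. The short sketch below (illustrative data, not the authors' code) reproduces one column of such a panel with pandas; note that pandas reports excess kurtosis, which the negative values in the tables suggest is the convention used here:

```python
# Minimal sketch: computing one column of the Appendix 2 descriptives
# for a hypothetical 7-point item.
import pandas as pd

x = pd.Series([6, 7, 5, 6, 6, 4, 7, 5, 6, 3], dtype=float)

panel = {
    "N Valid":        x.count(),
    "Mean":           x.mean(),
    "Std. Err. Mean": x.sem(),          # SD / sqrt(N)
    "Median":         x.median(),
    "Mode":           x.mode().iloc[0],
    "Std. Dev.":      x.std(),          # sample SD (ddof = 1)
    "Variance":       x.var(),
    "Skewness":       x.skew(),
    "Kurtosis":       x.kurt(),         # excess kurtosis
    "Minimum":        x.min(),
    "Maximum":        x.max(),
}
for name, value in panel.items():
    print(f"{name:>15}: {value:.2f}")
```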
Appendix 3. Descriptive statistics, need for social approval items

Item 1: No matter who I’m talking to, I’m always a good listener.
Item 2: I am always courteous, even to people who are disagreeable.

                 Item 1   Item 2
N Valid            173      173
N Missing          284      284
Mean              5.77     5.83
Std. Err. Mean    0.07     0.08
Median            6.00     6.00
Mode              6.00     6.00
Std. Dev.         0.96     1.08
Variance          0.92     1.17
Skewness         -0.95    -1.19
Kurtosis          1.75     1.74
Minimum           2.00     2.00
Maximum           7.00     7.00